Employers and AI

Written by Dan Rhodes
August 2, 2023

ChatGPT, Bard and other AI applications are on everyone’s radar these days. Some of us use them, others contemplate using them, and many are discussing the ethical issues surrounding them. Back in 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence discussed the ethics and trustworthiness of AI, noting that trustworthiness is a ‘prerequisite for people and societies to develop, deploy and use AI systems’ [1] and that, without the necessary trust in AI, ‘unwanted consequences may ensue and their uptake might be hindered, preventing the realisation of the potentially vast social and economic benefits that they can bring’ [2].

According to the expert group, trustworthy AI rests on four ethical principles: (1) respect for human autonomy, (2) prevention of harm, (3) fairness, and (4) explainability.

Similar principles have been around for a while: in 1942, Isaac Asimov promulgated the following three laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey orders given to it by human beings, except where such orders would conflict with the first law;
  • A robot must protect its own existence, as long as such protection does not conflict with the first or second law.

However, Asimov’s laws did not consider the matter of ‘explainability’, which the European Commission has included. Rightly so, because explainability is key to building trust in AI.

Interestingly, the ICO came up with similar principles for developers of AI:

  • be fair;
  • be transparent;
  • be accountable.

In other words, developers should consider the impact a negative decision made by an AI system would have on individuals, as well as on wider society. Without explanation, transparency or accountability, a decision made by an AI system is likely to be regarded as unfair.

We already know that some employers are using AI systems to aid recruitment. However, if an AI system is trained to make decisions based on the characteristics of previously successful candidates, rather than on the relevant skills and traits of the individual, a rejected candidate could see the outcome as biased and discriminatory. Moreover, even apparently neutral criteria, such as postcode, could introduce bias when used to train AI, because they can act as proxies for protected characteristics, as the sketch below illustrates.
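To make the proxy problem concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the synthetic applicants, the postcodes and the scoring rule): a model that scores candidates only on postcode still produces sharply different selection rates between groups, because postcode correlates with a protected characteristic the model never sees.

```python
import random

random.seed(0)

# Hypothetical synthetic pool: 'postcode' is a neutral-looking feature that
# happens to correlate with a protected characteristic ('group').
def make_applicant():
    group = random.choice(["A", "B"])
    home = "X1" if group == "A" else "Y2"   # the 'typical' area for each group
    other = "Y2" if home == "X1" else "X1"
    postcode = home if random.random() < 0.8 else other
    return {"group": group, "postcode": postcode}

applicants = [make_applicant() for _ in range(1000)]

# Suppose historic hires clustered in postcode "X1", so a model trained on
# past outcomes learns to favour "X1" -- 'group' is never an input.
def model_score(applicant):
    return 0.9 if applicant["postcode"] == "X1" else 0.4

def selection_rate(group):
    pool = [a for a in applicants if a["group"] == group]
    return sum(model_score(a) >= 0.5 for a in pool) / len(pool)

for g in ("A", "B"):
    print(f"Group {g}: selection rate {selection_rate(g):.0%}")
# Group A is selected far more often than group B, even though the model
# never sees the protected characteristic directly.
```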

Whilst we all await the introduction of an AI Act, which should presumably weigh our fundamental human rights against the need for innovation, we can, in the meantime, rely on existing laws, such as the GDPR, to remedy bias and discrimination.

For example, take Article 22 of the GDPR, which gives data subjects the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. Furthermore, Articles 13(2)(f), 14(2)(g) and 15(1)(h) of the GDPR provide that the data controller (in our example, the recruiter) must inform data subjects of:

‘…automated decision-making, including profiling … and meaningful information about the logic involved, as well as the significance … and envisaged consequences of such processing’.

Does automated processing include AI? Based on a strict interpretation of the language used by legislators, and on case law investigating similar matters, it is likely that AI is included. Article 22 is therefore vital to ‘explainability’: it gives data subjects a route to uncover biases and harmful discrimination in an AI system.

Interestingly, as of 5 July 2023, New York City (under Local Law 144) has made it mandatory for any company operating and hiring in the city that uses AI and other machine-learning technology as part of its hiring process to perform an annual bias audit of its recruitment technology. Only time will tell whether this will be a sufficient tool to determine fairness, transparency and accountability, but for now, New York appears to be leading the way.
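Those audits centre on comparing selection rates across demographic categories via ‘impact ratios’. Here is a minimal sketch of that calculation, assuming a simple record of hiring outcomes (the category names and figures are invented for illustration):

```python
# Minimal sketch of the 'impact ratio' at the heart of a bias audit: each
# category's selection rate divided by the highest selection rate observed.
# The outcomes below are hypothetical.
outcomes = {
    # category: (number selected, number of applicants)
    "Category 1": (50, 100),
    "Category 2": (30, 100),
    "Category 3": (20, 100),
}

selection_rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
best = max(selection_rates.values())

for cat, rate in selection_rates.items():
    ratio = rate / best
    # The 0.8 threshold is not part of the New York rules; it echoes the
    # EEOC's long-standing 'four-fifths' rule of thumb for adverse impact.
    flag = "  <-- review" if ratio < 0.8 else ""
    print(f"{cat}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")
```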

References:

[1] & [2]: ICO, Guidance on AI and Data Protection, https://ico.org.uk/media/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection-2-0.pdf
