How do giant companies employing tens of thousands of personnel sift through the hundreds of thousands of CVs that pile up at the same time and decide whom to hire?
In fact, recruitment must be carried out professionally regardless of whether a company is small or large, but when it comes to the hiring strategies of large companies, the picture changes. Every company has recruitment techniques that reflect its corporate culture; although these sometimes take the form of a general aptitude or culture-fit test, they vary from institution to institution. Multi-stage processes of five or six tests, with interviews from the earliest rounds, have increasingly given way to the use of artificial intelligence in recruitment. In these systems, the result is produced by translating the criteria the company has determined into algorithms. Since an algorithm is a procedure designed to solve a specific problem or achieve a specific goal, candidates who fail to meet certain criteria, whose rationale and quality may be unclear, can be unfairly eliminated by automated hiring systems without any detailed review and without ever meeting a recruiter face to face.
Although artificial intelligence technologies are thought to be impartial and objective and capable of highlighting high-potential candidates, it is highly likely that the engineers who develop these systems' algorithms carry conscious or unconscious biases into them. This can cause an algorithm to be biased against a particular gender, race, or age group. The outcome depends heavily on the mindset and interventions of the person who engineered the algorithm; this phenomenon is known as "algorithmic discrimination".
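To make the mechanism concrete, the following minimal Python sketch (all field names, thresholds, and candidate data are hypothetical) shows how a hard-coded screening rule eliminates candidates before any human review. If a criterion correlates with a protected attribute, the filter reproduces that bias at scale:

```python
# Minimal illustration of automated CV screening (hypothetical criteria).
# If a rule correlates with a protected attribute, every affected candidate
# is rejected automatically, with no human review of the individual case.

candidates = [
    {"name": "A", "years_experience": 12, "employment_gap_years": 0},
    {"name": "B", "years_experience": 9,  "employment_gap_years": 2},  # e.g. parental leave
    {"name": "C", "years_experience": 3,  "employment_gap_years": 0},
]

def passes_screen(c):
    # Hypothetical hard-coded rules: gap-based criteria like this are known
    # to disadvantage carers, who are disproportionately women.
    return c["years_experience"] >= 5 and c["employment_gap_years"] <= 1

shortlist = [c["name"] for c in candidates if passes_screen(c)]
print(shortlist)  # candidate B is eliminated without any face-to-face meeting
```

The point of the sketch is that the discrimination lives in the rule itself, not in any individual decision, so no one ever sees the rejected candidate's file.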
Precisely for this reason, an interesting development took place in the USA recently: New York City Local Law No. 144 began to be enforced as of July 2023.
This law, called NYC 144 for short, stipulates that employers who use an automated employment decision tool, or "AEDT", to assist with employment decisions must verify whether such tools act in a biased manner.
As long as a New York resident applies for a job through an AEDT ("Automated Employment Decision Tool"), the employer or employment agency operating it will have to comply with NYC 144 and provide the required notices and procedures. In short, a regulation requiring employers who hire, place, or promote staff using algorithms to submit those algorithms to an independent audit and to publish the results was offered as an assurance to employees residing in New York.
What is questioned here is whether the criteria by which candidates reach their average scores expose them to different treatment on grounds such as skin color, ethnicity, gender, education, culture, marital status, or religion. While the law aims to prevent discrimination, it has sparked debate because it requires substantial technical infrastructure and raises related practical deadlocks.
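The core metric in NYC 144 bias audits is the impact ratio: the selection rate of each demographic category divided by the selection rate of the most-selected category. A simplified sketch of that calculation, with invented numbers and group labels, might look like:

```python
# Simplified sketch of the core NYC 144 bias-audit metric (numbers invented).
# Impact ratio = selection rate of a category / selection rate of the
# most-selected category; ratios well below 1.0 signal possible bias.

applicants = {"group_x": 200, "group_y": 180}   # applicants per category
selected   = {"group_x": 60,  "group_y": 27}    # selected by the AEDT

selection_rate = {g: selected[g] / applicants[g] for g in applicants}
best = max(selection_rate.values())
impact_ratio = {g: rate / best for g, rate in selection_rate.items()}

for g in applicants:
    print(g, round(selection_rate[g], 2), round(impact_ratio[g], 2))
```

Here group_y is selected at half the rate of group_x (impact ratio 0.5), which an independent audit would flag; the actual audit rules also cover intersectional categories and scoring-based tools, which this sketch omits.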
As you may recall, many steps are being taken around the world to prevent such discrimination. In 2014, Amazon began developing an AI recruiting tool that automatically scanned résumés for key terms, but had to shut it down about three years later after the system was found to be biased against women.
While artificial intelligence tools such as OpenAI's ChatGPT take the world by storm, large companies such as the investment bank Goldman Sachs and Unilever are known to continue using the AI software of HireVue, a video-analysis tool, to screen job application interviews during recruitment.
Considering that artificial intelligence lacks personal connection and that bias in its data analysis makes it susceptible to discrimination, the NYC 144 regulations are promising for employees. Indeed, violations of NYC 144 carry fines of not more than $500 for a first violation, and of not less than $500 and not more than $1,500 for each additional violation occurring on the same day as the first, as well as for each subsequent violation.
The Situation in Turkey: Progress on the Use of Artificial Intelligence within the Framework of the Prohibition of Discrimination
With the Turkish Human Rights and Equality Institution Law No. 6701 (the TİHEK Law), extensive provisions on the prohibition of discrimination were introduced into our national legislation. Indeed, Article 1 of that law states its purpose as: protecting and developing human rights on the basis of human dignity; guaranteeing the right of individuals to equal treatment; preventing discrimination in the enjoyment of legally recognized rights and freedoms; fighting torture and ill-treatment effectively and fulfilling the role of national prevention mechanism in this regard; and establishing the Human Rights and Equality Institution of Turkey to operate in line with these principles, together with the rules governing its organization, duties, and powers.
Article 6 of the same law, titled "Employment and self-employment", provides: "The employer or the person authorized by the employer may not discriminate against an employee or a person applying for employment, a person working in a workplace or applying to gain practical work experience, or a person who wishes to work in any capacity, to gain practical work experience, or to obtain information about the workplace or the job, in any work-related process, including obtaining information, application, selection criteria, recruitment conditions, and employment and termination processes."
We would like to point out that one of TİHEK's three main mandates is the prevention of discrimination. In this context, TİHEK operates in our country within the framework of protecting and promoting human rights, securing the right of individuals to equal treatment, and preventing discrimination; it examines and decides on matters ex officio or upon application, and monitors the outcomes of violations of the prohibition of discrimination.
Looking at the work carried out at the national level to prevent discrimination arising from the use of artificial intelligence, the Human Rights Action Plan 2021-2023, which entered into force with Presidential Circular No. 2021/9 published in the Official Gazette dated 30.04.2021, stands out. Under Article 8.10 of the Plan, titled "Protection of Human Rights in the Digital Environment and Against Artificial Intelligence Applications", the stated goal is: "A legislative framework for artificial intelligence will be established taking international principles into account, ethical principles will be determined, and measures will be taken to protect human rights in this area."
With Presidential Circular No. 2021/18 published in the Official Gazette dated 20.08.2021, the National Artificial Intelligence Strategy 2021-2025, our country's first national strategy document in the field of artificial intelligence, was put into effect. The Strategy also focuses on building an effective artificial intelligence ecosystem to prevent discrimination arising from the use of AI, and on creating an appropriate ethical and legal framework that takes the technological nature of AI into account.
On the other hand, there have also been developments in the field of personal data protection (KVKK). The Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence were published on the website of the Personal Data Protection Authority on September 15, 2021. As is known, AI studies and applications based on personal data processing must comply with data protection regulations. Moreover, where sensitive personal data that could lead to discrimination against or victimization of the data subject are processed, far stricter protection is required than for other personal data.
As the use of artificial intelligence technologies increases, it is to be expected that these technologies will produce discriminatory decisions, and that situations such as deprivation of job or service opportunities and violations of the presumption of innocence will therefore become more likely.
There is no need yet for a law similar to NYC 144 in our country, because the current legislation, including the Labor Law, the Code of Obligations, and the Turkish Human Rights and Equality Institution Law No. 6701, already requires non-discrimination in all recruitment processes on grounds of a candidate's gender, race, age, religion, disability, or similar status. Employers should follow the principle of equality in recruitment and evaluate candidates according to job-related qualifications. The use of artificial intelligence in recruitment should be evaluated within this same framework: it must not cause discrimination, whether during the recruitment process itself or afterwards through the sensitive personal data learned about candidates, which may require stricter protection and could otherwise result in the victimization of individuals.
To avoid the problems described above, it would be appropriate to prepare algorithms in compliance with all legal criteria from the very beginning of the software development process, with the support of lawyers specializing in data protection, labor law, and IT law, and to subject existing software to analysis and testing. In this way, fairer, non-discriminatory, egalitarian, and accurate recruitment processes can be established.
K&P Legal Law Firm