Legal risks of AI in recruitment and how to avoid them
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.”
Merging recruitment with technological innovation always carries potential legal risk, and that risk grows as employers look for faster, more accurate, and more cost-effective recruitment platforms that leverage artificial intelligence (AI) tools to streamline the hiring process.
Employers are turning to AI to transform recruitment and create a seamless hiring process. This includes automating candidate sourcing, screening candidate pools, and using AI assessment tools, such as conversational chatbots and video interviewing tools that can measure a candidate’s strengths based on factors such as facial expression, word choice, body language, and vocal tone. However, the use of AI and other automated decision-making (ADM) technology does not come without risk, and employers should tread carefully when implementing such HR tech solutions.
In this article, we examine the legal considerations and safeguards currently being put in place in the United States, Europe, and Singapore.
Legal considerations are generally rooted in two areas of law: employment law and privacy law.
Both are heavily disaggregated areas of civil law.
Employment law precedent is also a source of significant tort-based liability (in the US), owing to the low threshold set for discriminatory action.
A big area of concern in employment law is bias and discriminatory effect (as opposed to discriminatory intent, which is usually the concern with human decisions).
Privacy law is increasingly converging on the framework set out by the GDPR, and the field’s developing landscape makes compliance both more nuanced and more imperative. In the US, however, a very low tort liability threshold once again opens up the risk of liability much more broadly.
Two key areas of privacy law to keep in mind:
AI and the privacy paradigm: In a field where personal data is necessary for the product to function, two concerns arise. Is the privacy threshold raised? And if candidates cannot opt out without facing discrimination, does collecting the data still constitute a breach of privacy?
Facial and voice analysis and privacy: Facial analysis in AI recruitment has already come under fire for bias; it has also been routinely criticised as an invasion of privacy, since it performs an in-depth analysis that is arguably not possible in a face-to-face interview. The storage, processing, and analysis of biometric data present a further privacy consideration entirely.
The U.S. Equal Employment Opportunity Commission
The EEOC has made clear that employers using AI in their hiring process can be liable for unintended discrimination, and AI vendors regularly include non-liability clauses in their contracts with employers. Employers therefore need to validate AI tools and take steps to ensure they do not cause inadvertent discrimination in hiring. Employers should test the capabilities of the AI algorithm in a pilot system to see whether the results are biased. Large employers can assign this to their Chief AI Officer; small employers may prefer to contract with a data scientist. In either case, these individuals need to work with the employer’s attorney to validate the data, check for bias, and assess the risk of liability while protecting information in line with legal obligations.
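As an illustration of the kind of pilot validation described above, a data scientist might apply the "four-fifths rule" heuristic commonly used in US disparate-impact analysis: compare selection rates across demographic groups and flag any group whose rate falls below 80% of the highest group's rate. The sketch below is a hypothetical, simplified check, not legal advice, and the group labels and pilot data are invented for illustration.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths rule heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical pilot results: (group label, was the candidate advanced?)
pilot = ([("A", True)] * 40 + [("A", False)] * 60
         + [("B", True)] * 25 + [("B", False)] * 75)
print(four_fifths_check(pilot))  # group B: 0.25 / 0.40 = 0.625, below 0.8
```

A flagged group does not by itself establish discrimination, but it signals that the algorithm's outcomes warrant review with counsel before deployment.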
Although AI adoption has not yet been regulated at the federal level, Illinois has passed the first law of its kind, the Artificial Intelligence Video Interview Act. Effective January 1, 2020, the law requires employers that use AI to analyze candidate video interviews to meet the following obligations:
Employers must notify applicants that AI will be used in their video interviews.
Employers must explain to applicants how the AI works and what characteristics the AI will be tracking in relation to their fitness for the position.
Employers must obtain the applicant’s consent to use AI to evaluate the candidate.
Employers may only share the video interview with those who have AI expertise needed to evaluate the candidate and must otherwise keep the video confidential.
Employers must comply with an applicant’s request to destroy his or her interview video within 30 days.
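For teams building or configuring interview tooling, the obligations above translate naturally into state an applicant record must reach before AI analysis runs, plus a retention deadline once deletion is requested. The class below is a hypothetical sketch of that workflow; the record fields and class name are invented for illustration and do not reflect any particular vendor's implementation.

```python
from datetime import datetime, timedelta

# The Act requires destruction within 30 days of an applicant's request.
RETENTION_AFTER_REQUEST = timedelta(days=30)

class VideoInterviewRecord:
    """Hypothetical record tracking notice, consent, and deletion duties."""

    def __init__(self, candidate_id):
        self.candidate_id = candidate_id
        self.notified = False         # applicant told AI will be used
        self.explained = False        # how the AI works / traits tracked
        self.consented = False        # explicit consent obtained
        self.deletion_requested_at = None

    def may_run_ai_analysis(self):
        # All three duties must be satisfied before AI evaluation starts.
        return self.notified and self.explained and self.consented

    def request_deletion(self, now=None):
        self.deletion_requested_at = now or datetime.now()

    def deletion_deadline(self):
        if self.deletion_requested_at is None:
            return None
        return self.deletion_requested_at + RETENTION_AFTER_REQUEST
```

Gating the analysis on all three flags, rather than on consent alone, mirrors the statute's structure: consent obtained without notice and explanation would not satisfy the Act.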
EU and AI use in recruiting and hiring:
EU officials have stated that AI technology needs proactive regulation now, since rapid advances may make it difficult to regulate later, and have insisted on striking a balance between the reasonable commercial and operational interests of companies and the privacy and anti-discrimination rights of employees.
Providers of AI systems would need to supply detailed documentation about how their systems work to demonstrate compliance with the proposed rules; failure to comply would mean penalties and fines of up to 30 million euros (approximately US$36 million), or even higher for large global organizations.
If the EU proposal passes, it will create a more standardized, ethical, and transparent approach to using AI in the recruitment and hiring process, noted Eric Sydell, executive vice president of innovation at software company Modern Hire.
The Commission proposes to ban completely AI systems that:
manipulate persons through subliminal techniques or exploit the fragility of vulnerable individuals, and could potentially harm the manipulated individual or third person;
serve for general purposes of social scoring, if carried out by public authorities; or
are used for running real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.
Singapore’s AI Governance Framework
Singapore introduced its Model Artificial Intelligence Governance Framework in January 2019 at the World Economic Forum (WEF) in Davos. The two guiding principles of the framework state that decisions made by AI should be “explainable, transparent and fair”; and AI systems should be human-centric. These principles are then developed into four areas of guidance.
The first is establishing or adapting internal governance structures and measures to “incorporate values, risks, and responsibilities relating to algorithmic decision-making”.
The second addresses the level of human involvement in AI-augmented decision-making and helps organisations decide what their risk appetite is.
The third area of guidance focuses on operations management and deals with factors that should be considered when “developing, selecting and maintaining AI models, including data management”.
The final area shares strategies for communicating with stakeholders and management on the use of AI solutions.
The framework translates ethical principles into pragmatic measures that businesses can implement.
Why you can trust impress.ai
The European Union (EU) General Data Protection Regulation (GDPR) is a data protection regulation that became effective on May 25, 2018. Its purpose is to give EU citizens greater control over the data they provide online. The GDPR covers companies operating within the EU, as well as companies that offer services electronically within the EU and track or store personal data. Because impress.ai operates primarily in non-EU jurisdictions, it provides both GDPR-compliant and non-GDPR-compliant versions of its recruitment automation Software-as-a-Service. Hiring companies that are clients of impress.ai can require impress.ai’s SaaS to be GDPR compliant as part of the service agreement.