
Key Takeaways:

  • States and localities have begun to explore legislation and regulations governing the use of artificial intelligence in hiring and promotions. Several states and the District of Columbia introduced bills during the 2023 legislative sessions to regulate the use of AI in the hiring process, although none have passed. However, this issue is likely to proliferate during the 2024 state sessions and beyond.
  • One common provision in these bills is the implementation of regular assessments and safeguards. Bills in multiple states propose conducting yearly assessments of AI hiring tools to ensure fairness and prevent discrimination, similar to a law enacted in New York City in 2021.
  • Other bills attempt to stem potential harms from the use of AI tools in hiring, such as those in Illinois and Rhode Island. The Illinois bill would prohibit employers who use predictive data analytics in employment decisions from considering an applicant’s race, or using zip code as a proxy for race, in hiring decisions.

In recent months, artificial intelligence (AI) programs like ChatGPT and Lensa AI have raised many questions about how AI will impact our everyday lives. However, AI is already widely used in some areas to help automate and streamline certain tasks. One area where AI is currently being used is in hiring. A 2022 study conducted by the Society for Human Resource Management that surveyed over 1,500 of its members found that 79% used AI to assist in hiring and recruitment. Of those members, 64% said they used AI to screen and review applicant resumes, and 25% used it to pre-select candidates for interviews.

It is easy to see how using AI to help in hiring can be an attractive option. Using AI to screen hundreds of resumes or to search through databases for qualified candidates could save a hiring manager hours of time. However, AI hiring tools also have potential pitfalls. For example, AI is only as good as the information fed into it, so qualified candidates could be overlooked and discarded early in the hiring process if the parameters used to identify candidates are too broad, too narrow, or otherwise improperly applied. Additionally, similar to concerns around facial recognition, AI hiring tools could lead to biased and potentially discriminatory hiring practices if the AI system has not been properly trained or is not used correctly.

Given these potential concerns around using AI during the hiring process, lawmakers at the state, local, and federal level have begun looking at ways to regulate and put safeguards around AI hiring tools. On July 5, 2023, New York City will begin enforcing its newly issued rules for employers who use AI during the hiring process. Additionally, several states considered bills this session that aimed to regulate and provide transparency for AI hiring tools.

New York City’s Artificial Intelligence Hiring Law

In December 2021, the New York City Council adopted Local Law 144. Local Law 144 requires that any employer or employment agency that uses an AI tool for hiring or promotions have an independent auditor conduct an annual bias audit of the AI tool. Additionally, a summary of the audit must be made available on the employer’s website, and candidates must be given notice that an AI tool is being used during the hiring or promotion process.

Following the passage of Local Law 144, the New York City Department of Consumer and Worker Protection (DCWP) issued proposed rules to implement the law and sought feedback through public comments. DCWP issued its final rules on April 6, 2023. The final rules will go into effect, and enforcement of Local Law 144 will begin, on July 5, 2023. 

In the final rules, DCWP provided more information on the required annual bias audit. The final rules clarify that the bias audit must examine the impact the AI tool has on sex, race, and the intersectionality of sex and race, as well as any other categories of candidates that the AI tool considers. The final rules also provide additional information on the data that can be used to conduct the bias audit. Historical data from an employer’s past use of the artificial intelligence tool should be used to conduct the bias audit. However, the rules do allow employers to rely on historical data from other employers if the employer has never used an AI hiring tool before or does not have enough of its own data to conduct a statistically significant audit. Additionally, employers may use test data, which the final rules define as data used to conduct a bias audit that is not historical data, if there is insufficient historical data to conduct a statistically significant bias audit. However, if test data is used, the summary of results for the audit must explain why historical data was not used and how the test data was generated and obtained.
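To make the mechanics of such an audit more concrete, the sketch below shows one simplified way an auditor might compare how often candidates in different sex, race, and intersectional (sex x race) categories are advanced by a screening tool, using an employer's historical data. This is an illustrative assumption, not the DCWP's prescribed methodology; the column names and file name are hypothetical.

```python
"""
Minimal sketch of a simplified bias check, assuming historical screening data
with hypothetical columns: sex, race, and selected (1 if the tool advanced the
candidate, 0 otherwise). Not the official DCWP audit methodology.
"""
import pandas as pd


def selection_rates(df: pd.DataFrame, group_cols: list[str]) -> pd.Series:
    """Share of candidates in each category who were advanced by the tool."""
    return df.groupby(group_cols)["selected"].mean()


def impact_ratios(rates: pd.Series) -> pd.Series:
    """Each category's selection rate divided by the highest category's rate."""
    return rates / rates.max()


# Hypothetical historical data from the employer's past use of the tool.
history = pd.read_csv("historical_screening_data.csv")

# Examine single categories and the sex x race intersection.
for groups in (["sex"], ["race"], ["sex", "race"]):
    rates = selection_rates(history, groups)
    print(f"\nImpact ratios by {' x '.join(groups)}:")
    print(impact_ratios(rates).round(2))
```

In this kind of check, ratios well below 1.0 for a category would flag that the tool advances those candidates at a markedly lower rate than the most-selected group, which is the sort of disparity an annual bias audit is meant to surface.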

State Artificial Intelligence Legislation 

In addition to New York City, states have also begun to consider ways to regulate the use of AI during the hiring process. This year, nine states and the District of Columbia have considered bills that would impose safeguards on the use of AI tools in the hiring process. However, none of these bills has passed its chamber of introduction. As most states will be ending their sessions in the coming weeks, it is unlikely that these bills will become law this year. However, given the increasing interest in AI, for hiring and other purposes, it is likely that states will consider similar bills in the future.

Several of the bills that have been introduced have requirements similar to the New York City law. California, New Jersey, New York, Massachusetts, and the District of Columbia have each introduced bills that require annual assessments of AI hiring tools to ensure they are not producing disparate or discriminatory results. These requirements are similar to the annual bias audit required in New York City. Additionally, other bills attempt to stem other potential harms from the use of AI tools in hiring. For example, Illinois HB 3773 would prohibit employers who use predictive data analytics in employment decisions from considering an applicant’s race, or using zip code as a proxy for race, in hiring decisions. Additionally, Rhode Island HB 6286 would require that any company using AI ensure it is not used to engage in discrimination or bias based on protected characteristics such as race, gender, or age.

New York City was not the first jurisdiction to enact a law related to the use of artificial intelligence in hiring. In 2020, Illinois’s Artificial Intelligence Video Interview Act went into effect. The Act requires that employers who ask applicants to record video interviews and use AI to analyze the videos notify applicants that artificial intelligence will be used to analyze the interview, provide each applicant with information before the interview explaining how the artificial intelligence works and what characteristics it uses to evaluate applicants, and obtain the applicant’s consent to be evaluated by AI. Additionally, the law prohibits employers from using AI to evaluate applicants who have not consented to artificial intelligence analysis.

Tracking State Artificial Intelligence Legislation 

MultiState’s team is actively identifying and tracking this issue so that businesses and organizations have the information they need to navigate and effectively engage with the emerging laws and regulations addressing artificial intelligence. If your organization would like to further track state artificial intelligence legislation, or other technology issues, please contact us.