Expanding use of artificial intelligence into Employment and Labor practice: Legislative response and legal implications

November 21, 2025

Author: Markeya A. Fowler

Practice Areas: Employment

Over the past fifteen years, artificial intelligence (AI) has become increasingly embedded in the legal field, often in ways that were not widely anticipated. While it was initially expected that legal professionals would need to familiarize themselves with AI tools to enhance efficiency in practice, few foresaw that attorneys would need a deep understanding of AI’s technical functions in order to competently advise clients on the legal ramifications of its use. Although the application of traditional AI to tasks such as resolving analytical problems or sorting large datasets has become routine, the technology has now evolved to the point that generative AI does not merely process information but autonomously makes decisions based on the information presented. Generative AI is capable of generating new content, such as images or text, by learning from data supplied to it.[i] The most popular examples of generative AI integrated into professionals’ everyday use are Google’s Gemini and Microsoft’s Copilot. Every time a user drafts an email or a Word document, or simply completes a search on Google, generative AI may take that information to provide suggestions and generate responses. Other examples of generative AI include ChatGPT, Meta AI, Claude and many more that are constantly under development.

In recognition of these risks, the Illinois Supreme Court has explicitly cautioned against the uncritical adoption of generative AI in legal proceedings and emphasized the necessity of protecting due process, equal protection and access to justice.[ii] The Illinois Supreme Court warned that AI-generated content, lacking evidentiary foundation or accuracy, may entrench bias, prejudice litigants and obscure truth-finding and decision-making.[iii] While the Illinois Supreme Court’s warning of the unintended consequences of AI was limited to those in the legal profession, the warning is one that should be considered by all users of generative AI.

In particular, the use of AI has increased in the employment sector, resulting in enhanced and efficient decision-making for many employers, particularly in the areas of recruitment and staff management. Employers are using AI to analyze candidate qualifications and employee performance. These practices are frequently justified on the basis that algorithmic decision-making can reduce human bias and produce objective outcomes. At first glance, this practice seems appropriate and can result in quick and efficient decisions, improving business operations.

Closer scrutiny of this practice by legislatures and legal professionals, however, has revealed complex legal and ethical concerns. Critics of generative AI have recognized that the manner in which information is deciphered and sorted may border on the improper or cross the line into the illegal, replicating or amplifying existing biases. These same critics began to ask what guidelines and information AI was using to make its decisions. How was the technology making its critical decisions? Was it possible the technology was biased and producing biased results?

At the legislative level, Illinois has recognized the potential for misuse of generative AI and its possible consequences, and has responded in kind by regulating its use. In the employment context, key statutes that regulate the use of AI include the Artificial Intelligence Video Interview Act (AVIA) and the Illinois Human Rights Act (IHRA). These legislative enactments focus on transparency of use and prohibiting discrimination through AI. Although the Illinois General Assembly has introduced several bills aimed at establishing broad regulatory oversight of AI, none has yet been enacted into law. These proposed bills include:

  1. Illinois Senate Bill 1792, which would amend the Consumer Fraud and Business Practices Act and require owners, licensees or operators of a generative AI system to display a warning on the system’s user interface notifying the user that outputs of the generative AI system may be inaccurate.
  2. Illinois Senate Bill 2255, which would prohibit the use of surveillance data in an automated decision-making system to set an employee’s wage.
  3. Illinois House Bill 3567, which would prohibit an Illinois agency or any entity acting on behalf of a state agency from utilizing any automated decision-making system without continuous meaningful human review when performing any of the agency’s specified functions.

Nonetheless, the trajectory suggests that more comprehensive legislation is forthcoming as AI technologies continue to evolve.

Overview of current regulation of AI in Illinois

Artificial Intelligence Video Interview Act (AVIA)

In place since 2020, but unknown to many employers, the AVIA regulates employers’ use of AI to analyze an applicant’s video recordings for employment positions based in Illinois.[iv] Review of pre-recorded videos, in which a candidate responds to a set of questions, is becoming a popular method for employers to assess a candidate’s qualifications based on their responses. This pre-assessment often determines whether a candidate will move forward in the hiring process and receive a more formal interview.

When first enacted, the AVIA imposed three requirements upon employers to ensure transparency regarding the use of AI.[v] Before requesting an applicant to submit an interview video, employers were required to: (i) notify applicants before any interview that AI may be used to analyze the applicant’s video and evaluate fitness for the position; (ii) notify the applicant before the interview how the potential employer’s AI functions and the general characteristics the AI evaluates from the video; and (iii) secure the applicant’s consent to be evaluated by AI.[vi] Employers may not use AI to evaluate applicants if the applicant refuses to provide consent.[vii] Oddly, the AVIA is silent on whether the employer has any obligation to still provide an interview if the candidate refuses to consent to the use of AI.

Notably, even when an applicant consents, the employer is restricted in its use of the video.[viii] The AVIA prohibits employers from sharing interview videos except with those persons whose expertise or technology is necessary to evaluate the video for the applicant’s fitness for the position.[ix]

AVIA also contains provisions for destruction of an applicant’s video.[x] Within thirty days of an applicant’s request, the employer must delete an applicant’s video including all electronically stored back-up copies.[xi] In addition, the employer must also instruct any persons who received copies of the applicant’s video to delete the videos.[xii]

Since its enactment, AVIA has been amended to expand transparency and limit potential bias by employers who opt to use AI video analysis.[xiii] In 2022, the legislature amended AVIA to require mandatory reporting by any employer who relies solely on artificial intelligence to evaluate applicants.[xiv] Employers must report by December 31 of each year: (i) the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview after the use of AI analysis; and (ii) the race and ethnicity of applicants who are hired.[xv]

Illinois Human Rights Act (IHRA)

On January 1, 2025, the Illinois legislature updated the IHRA to prohibit discriminatory employment decisions using AI.[xvi] To prevent any confusion as to what AI is covered under the amended regulations, the Illinois legislature defined AI and generative AI. Under the IHRA, AI is defined as “a machine-based system that for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”[xvii] This definition explicitly notes that AI includes generative AI.[xviii] Generative AI is defined as “an automated computing system that when prompted with human prompts, descriptions or queries, can produce outputs that simulate human-produced content, including but not limited to, the following:

  1. Textual outputs such as short answers, essays, poetry or longer compositions or answers;
  2. Image outputs such as fine art, photographs, conceptual art, diagrams and other images;
  3. Multimedia outputs, such as audio or video in the form of compositions, songs or short-form or long form audio or video; and
  4. Other content that would be otherwise produced by humans.”[xix]

Effective January 1, 2026, employers will be liable for civil rights violations for any employment decision made using AI, with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure or the terms, privileges or conditions of employment, that results in discrimination based on a protected class or that uses zip codes as a proxy for protected classes.[xx] Employers will also be required to notify employees that AI is being used for any of these purposes.[xxi] Protected classes under the IHRA remain race, color, religion, national origin, ancestry, age, sex, marital status, order of protection status, disability, military status, sexual orientation, pregnancy, unfavorable discharge from military service, citizenship status, work authorization status, family responsibilities and reproductive health decisions.[xxii]

Potential future regulations in Illinois

As AI continues to be promoted as the key to the future and as companies find new and innovative ways to incorporate AI into their business models, state legislatures will need to adapt and respond with regulations to control this expansive industry. Illinois is one of many states taking a proactive approach to the regulation of AI, which could have far-reaching implications for those who deploy AI and those whose personal information is the subject of AI’s analysis.[xxiii]

On February 7, 2025, the Illinois Senate introduced SB 2203, the Preventing Algorithmic Discrimination Act (PADA).[xxiv] PADA is intended to regulate deployers of AI who use the technology to make consequential decisions. If enacted, PADA would regulate private businesses, persons and government agencies that use AI to grant access to employment, education, housing, essential utilities, health care (including family planning), financial services, legal services and hearings, voting and access to benefits. Under PADA, users of AI for the purposes enumerated in the Act would be required to: (i) notify persons who are the subject of any AI decision; (ii) maintain a governance program containing administrative and technical safeguards to measure and manage reasonably foreseeable risks of discrimination; and (iii) submit all assessments of the AI tool to the Attorney General. As of publication, SB 2203 has made no progress and is unlikely to pass this legislative session.

Implications for employers

Given the increasing reliance on generative AI in both the private and public sectors, it is imperative that employers consider the manner in which AI is deployed, acquire an in-depth understanding of how it functions and deciphers information, and never rely on AI as the sole decision maker. Employers already using AI in any employment-related decisions should review company practices to ensure oversight of AI systems and review service providers’ agreements to understand how the system reaches its decisions. Failure to do so may expose employers to significant legal liability.

AI systems are designed to autonomously process and analyze vast volumes of data. While beneficial, this process also increases the potential for unintended discrimination, which is now explicitly prohibited under the IHRA. If AI tools are the sole decision makers in recruitment, salaries, performance evaluations or disciplinary decisions, there is an increased risk that they may reinforce or exacerbate existing biases. Crucially, many employers may not be fully aware when AI decision-making crosses the line into discriminatory territory. Discrimination can manifest in two ways:

  1. Disparate Treatment – when individuals are treated differently based on protected characteristics such as race, gender, age or disability.
  2. Disparate Impact – when a seemingly neutral AI policy or algorithm disproportionately affects a protected group, even if unintentionally.

Both forms of discrimination can expose employers to costly civil litigation, regulatory investigations and class action lawsuits. In addition to the cost of litigation, employers may find themselves liable for a claimant’s lost benefits, unpaid wages, compensatory damages and attorney’s fees.
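One longstanding quantitative check for disparate impact is the four-fifths (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if a group’s selection rate is less than 80% of the highest group’s rate, adverse impact may be indicated. A minimal sketch, using hypothetical applicant and hire counts (the group names and figures below are illustrative only, not drawn from any actual matter):

```python
# Hypothetical applicant and hire counts by group (illustrative numbers only).
applicants = {"group_a": 100, "group_b": 80}
hires = {"group_a": 50, "group_b": 24}

# Selection rate for each group: hires divided by applicants.
rates = {g: hires[g] / applicants[g] for g in applicants}  # a: 0.50, b: 0.30

# Four-fifths rule: compare each group's rate to the highest rate.
highest = max(rates.values())
impact_ratios = {g: rates[g] / highest for g in rates}

for group, ratio in impact_ratios.items():
    flag = "potential adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8, as for group_b here (0.30 / 0.50 = 0.60), is a signal warranting closer review of the selection procedure, not proof of unlawful discrimination by itself.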

In 2014, while developing AI software to screen candidate resumes, Amazon was forced to terminate the project after determining it would result in disparate impact.[xxv] The software was designed to review resumes and determine which applicants should be hired, but Amazon discovered it was discriminating against female applicants for technical positions, such as software engineers.[xxvi] When developing the software, Amazon used its own employees’ resumes as a data set of desired qualifications, and these employees were predominantly male.[xxvii] When the software was asked to find resumes resembling the existing data set, it sought to reproduce the demographics of the existing workforce, discriminating against female candidates.[xxviii] The software accomplished this by downgrading resumes that listed women’s colleges or activities that contained “women’s” in the title.[xxix] Luckily for Amazon, this flaw in the software was discovered early enough that it had no real impact. Employers who choose to use similar methods to screen candidates must be similarly vigilant.
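The mechanism behind that outcome is simple to illustrate. The toy scorer below (hypothetical data and scoring logic, not Amazon’s actual system) rates resumes by how often their words appear in past hires; because the training set is skewed male, terms absent from it, such as “women’s,” silently drag a score down even though they say nothing about qualifications:

```python
from collections import Counter

# Toy training data: resumes of past hires, predominantly male (hypothetical).
hired_resumes = [
    "software engineer java chess club",
    "software engineer python rugby",
    "backend developer java",
]

# Count how often each word appears among past hires.
token_counts = Counter(t for r in hired_resumes for t in r.split())

def score(resume: str) -> float:
    """Average familiarity of a resume's words relative to past hires."""
    tokens = resume.split()
    return sum(token_counts[t] for t in tokens) / len(tokens)

candidate_a = "software engineer java chess club"
candidate_b = "software engineer women's chess club captain"

# candidate_b scores lower solely because "women's" and "captain"
# never appear in the skewed training set.
print(score(candidate_a), score(candidate_b))
```

The point of the sketch is that no one coded “penalize women”; the penalty emerges from the demographics of the training data, which is exactly why human review and auditing of AI screening tools matter.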

Another source of exposure Illinois employers should remain cognizant of is the Illinois Biometric Information Privacy Act (BIPA).[xxx] Before collecting any biometric information, BIPA requires users of software that captures biometric information to notify the individual that their biometric information is being collected, specify the purpose of the collection and the period of time the information will be stored, and obtain a written release.[xxxi] Employers who use AI facial recognition in conjunction with video interviews, such as to analyze facial expressions, speech patterns and other non-verbal cues to assess personality traits and confidence, could face liability under both AVIA and BIPA. Violations of BIPA have led to substantial settlements and penalties in recent years, including Facebook’s $650 million settlement,[xxxii] Google’s $100 million settlement[xxxiii] and TikTok’s $92 million settlement.[xxxiv]

While Illinois’ smaller employers are unlikely to face settlements comparable to the country’s largest organizations, these substantial settlements should serve as a cautionary tale of the implications of unchecked AI. As the technology advances, so too must the diligence with which it is implemented and monitored.

Conclusion

As AI continues to evolve and embed itself in core business functions, the need for comprehensive oversight and responsible use becomes increasingly urgent. Illinois has taken a proactive stance in recognizing both the potential benefits and significant risks associated with generative AI, particularly in the employment sector. Through legislation like the Artificial Intelligence Video Interview Act and amendments to the Illinois Human Rights Act, the state has prioritized transparency, accountability and the prevention of discrimination. Looking ahead, proposed legislation such as the Preventing Algorithmic Discrimination Act signals a broader regulatory framework that could reshape how AI is governed across industries. Businesses that adopt AI must move beyond convenience and efficiency to fully understand AI’s capabilities and its limitations. Failure to do so could expose companies to substantial legal liability. In this rapidly changing landscape, staying informed and compliant is not just advisable; it is essential.

Client alert authored by Markeya A. Fowler (312 849 4126), associate.

This Chuhak & Tecson, P.C. communication is intended only to provide information regarding developments in the law and information of general interest. It is not intended to constitute advice regarding legal problems and should not be relied upon as such.


[i] Merriam-Webster Dictionary, https://www.merriam-webster.com/dictionary/generative%20AI

[ii] Illinois Supreme Court Policy on Artificial Intelligence, effective January 1, 2025, https://ilcourtsaudio.blob.core.windows.net/antilles-resources/resources/e43964ab-8874-4b7a-be4e-63af019cb6f7/Illinois%20Supreme%20Court%20AI%20Policy.pdf

[iii] Id.

[iv] 820 ILCS 42/5

[v] Id.

[vi] Id.

[vii] Id.

[viii] Id. at 42/10

[ix] Id.

[x] Id. at 42/15

[xi] Id.

[xii] Id.

[xiii] Id. at 42/20

[xiv] Id.

[xv] Id.

[xvi] 775 ILCS 5/2-102

[xvii] Id. at 101(M)

[xviii] Id.

[xix] Id. at 101(N)

[xx] Id. at 101(L)(1)

[xxi] Id. at 101(L)(1)

[xxii] Id. at 101(E-1)

[xxiii] Texas enacted the Texas Responsible Artificial Intelligence Act, Colorado enacted the Colorado Artificial Intelligence Act and California, Georgia, Hawaii and Washington have bills pending.

[xxiv] https://www.ilga.gov/documents/legislation/104/SB/PDF/10400SB2203lv.pdf

[xxv] Rachel Goodman, Why Amazon’s Automated Hiring Tool Discriminated Against Women, ACLU (October 12, 2018), https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against

[xxvi] Id.

[xxvii] Id.

[xxviii] Id.

[xxix] Id.

[xxx] 740 ILCS 14/15

[xxxi] Id. at 14/15(b)

[xxxii] In re Facebook Biometric Info. Privacy Litigation, No. 3:15-cv-03747-JD (N.D. Cal. 2020).

[xxxiii] Rivera, et al. v. Google, No. 1:2016-cv-02714 (N.D. Ill. 2018)

[xxxiv] In re TikTok, Inc., Consumer Privacy Litigation, No. 1:2020-cv-04699 (N.D. Ill. 2024)