As employers continue to grapple with the benefits, uses, challenges, and risks of using artificial intelligence (AI) in the workplace, they should take note of new laws in Colorado, Illinois, and Texas that will go into effect in early 2026. This article focuses on the employment law provisions and implications of each of these laws.

Colorado

The Colorado Artificial Intelligence Act (CAIA), which will become effective June 30, 2026, is widely considered to be the most groundbreaking and comprehensive legislation in the U.S. regarding the development and use of AI. The firm previously issued a client alert providing a detailed analysis of the CAIA’s requirements, which differ depending on whether an entity is a developer or deployer of an AI system. Most employers likely fall under the “deployer” or user category, and, therefore, should pay particular attention to those requirements.

Employers are subject to the CAIA when they deploy an AI system that makes, or is a substantial factor in making, "a decision that has a material legal or similarly significant effect on the provision or denial … of employment or employment opportunities." The CAIA defines "consumers" broadly to include any resident of Colorado. Therefore, the law's protections likely will apply to both job applicants and employees who are Colorado residents. Virtually all employers that do business in Colorado—which may include employing Colorado residents and/or considering job applicants who reside in Colorado—and utilize AI systems in employment decisions are thus required to comply with the CAIA (although the law excludes small employers with fewer than 50 employees whose data is not used to train the AI system).

Covered employers are required to use reasonable care to protect Colorado residents from the known or foreseeable risk of “algorithmic discrimination.” Algorithmic discrimination is any condition in which the use of an AI system results in “an unlawful differential treatment or impact against an individual or group of individuals” on the basis of actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classifications protected under Colorado or federal law.

There is a rebuttable presumption that an employer that implements a CAIA-compliant risk management policy and program to govern the deployment of AI systems used reasonable care to avoid algorithmic discrimination. To benefit from the rebuttable presumption, the risk management policy must specify and incorporate the principles, processes, and personnel that the employer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program also must be reviewed and updated throughout the life cycle of the AI system.

To comply with the CAIA, employers are also required to: conduct impact assessments annually and within 90 days after any intentional and substantial modification to the AI system; provide notice to individuals when the company deploys an AI system to make, or to be a substantial factor in making, employment decisions; and notify the Colorado attorney general (AG) within 90 days of discovering that the AI system caused algorithmic discrimination.

The law does not appear to create a private right of action and is subject to enforcement by the Colorado AG. However, violations of the CAIA constitute an unfair or deceptive trade practice under Colorado’s consumer protection law, which allows for civil actions. Thus, further guidance is needed on whether employees and job applicants can assert a claim against an employer for violations of the CAIA.

Although almost 18 months have passed since the CAIA was signed into law, changes remain likely based on recommendations from Colorado’s Artificial Intelligence Impact Task Force. However, given its impending effective date, employers in Colorado should review their systems and processes to determine whether, and to what extent, AI is used in employment decision-making. Once an employer determines it is covered by the CAIA, it should begin preparing the initial impact assessment, as well as the notices to be provided to individuals regarding the company’s use of AI in employment decision-making, to ensure readiness when the law goes into effect.

Employers outside of Colorado should take note of the CAIA’s requirements, as we anticipate that other states will follow the same model in enacting their own AI laws in the future.

Illinois

On January 1, 2026, an amendment to the Illinois Human Rights Act (IHRA) will go into effect, regulating employers’ use of AI in recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, and the terms, privileges, or conditions of employment. The law classifies an employer’s use of AI as a civil rights violation if such use has the effect of subjecting employees to discrimination on the basis of membership in a protected class or if zip codes are used as a proxy for protected classes. It is also a civil rights violation for an employer to fail to provide notice to employees of the employer’s use of AI in employment decision-making.

The enforcement and remedies provisions of the IHRA apply to the amendment. As such, the Illinois Department of Human Rights and the Illinois Human Rights Commission enforce the amendment, and employees may file a charge of discrimination against employers for violations.

Texas

On January 1, 2026, the Texas Responsible AI Governance Act (RAIGA) will take effect. Similar to Colorado’s AI law, the RAIGA regulates both the development and deployment of AI.

The RAIGA prohibits employers from using AI with the intent to unlawfully discriminate against a protected class in violation of Texas or federal law. For purposes of the RAIGA, “protected class” means “a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.” Importantly, the law specifically states that if the use of AI has a disparate impact, this alone is insufficient to show an intent to discriminate. The law does not create a private right of action, and except where such authority has been designated to another state agency, the Texas AG has exclusive authority to enforce its provisions.

Although employers were likely already prohibited from circumventing anti-discrimination laws by using AI to make employment decisions, the Illinois and Texas legislatures have removed any doubt that discrimination in employment through AI tools is unlawful. Training provided to employees in decision-making roles should address AI tools to ensure compliance with anti-discrimination laws in both human and technology-assisted employment decisions. To the extent an employer relies on third parties to develop its AI systems, the employer should work closely with the AI vendor to understand how the systems work and manage their implementation in employment decision-making to minimize legal risks.
