You're using ChatGPT to screen resumes. Your ATS ranks candidates automatically. You're considering an AI interview tool that "predicts job fit." A friend tells you: "Be careful, there are AI hiring laws now." You think: "Is that real or just hype?"

Most founders think: "AI hiring laws are coming someday. I'll worry about it when they're actually enforced." What they don't realize: AI hiring laws are already in effect in multiple states right now. Companies are being audited, fined, and sued. If you're using AI to screen, rank, or assess candidates in California, Colorado, Illinois, New York, or Texas, you're subject to specific legal requirements, whether you know it or not.

The expensive truth: Illinois now allows candidates to sue you directly if your AI hiring tool discriminates, even unintentionally. California holds you liable for your vendor's AI discrimination. Colorado requires impact assessments and transparency notices starting June 30, 2026. Penalties range from $20,000 per violation to 7% of global revenue under EU rules (if you hire internationally). "I didn't know" is not a defense.

Here's what AI hiring laws exist right now in March 2026, what they require, and how to stay compliant.

Federal status: No comprehensive federal AI employment law exists. Trump's December 2025 Executive Order seeks to preempt state laws, but the Senate voted 99-1 to reject an AI moratorium. State laws remain fully enforceable.

What this means: You must comply with state-by-state requirements. Multi-state hiring means multi-state compliance.

What Counts as "AI in Hiring"?

These laws regulate "Automated Employment Decision Tools" (AEDTs) or "Automated Decision Systems" (ADS):

  • Resume screening software that ranks or filters candidates

  • AI-powered applicant tracking systems

  • Video interview analysis tools (facial recognition, speech analysis)

  • Skills assessment platforms with algorithmic scoring

  • Chatbots that screen candidates

  • Predictive analytics for "culture fit" or retention

  • Any system that uses machine learning, algorithms, or AI to screen, score, rank, or recommend candidates

What's NOT covered: Manual review of resumes, human-only interviews, basic email/calendar scheduling tools

Critical point: Even if a human makes the final decision, if AI influenced that decision (screened, scored, ranked), you're subject to these laws.

State-by-State Requirements (March 2026)

New York City: Local Law 144 (In Effect Since July 2023)

Who it applies to: Employers and employment agencies using AEDTs in NYC

Requirements:

Annual bias audit: Independent third-party must test AEDT for discriminatory impact on race/ethnicity and sex. Audit must be conducted within one year before use.

Publish audit results: Make bias audit results publicly available on your website (selection rates by race/ethnicity and sex). Even if you use a third-party AEDT, you must ensure it has been audited and post its audit results on your website.

Notice to candidates: Inform candidates at least 10 days before use that AEDT will be used, what job qualifications it assesses, and what data sources it uses

Alternative process: Provide accommodation for candidates who request alternative selection process

Penalties: NYC is actively auditing compliance (Consultils). Violations can result in civil penalties.

Example - 15-person startup, New York: Uses AI resume screening tool. Must: (1) Get annual bias audit from independent auditor, (2) Post audit results on website showing selection rates by race/sex, (3) Add notice to job postings: "This employer uses an automated employment decision tool (AEDT) to screen candidates. You may request an alternative selection process or accommodation."

California: FEHA Amendments (Effective October 1, 2025)

Who it applies to: All California employers using Automated Decision Systems (ADS)

Requirements:

No discriminatory ADS: Cannot use ADS that discriminates based on protected characteristics (race, sex, age, disability, etc.) (Consultils)

4-year data retention: Retain all ADS-related data for at least 4 years, including input data, outputs (scores/rankings), criteria used, and bias testing results (Consultils)

Vendor accountability: You are responsible for discriminatory outcomes caused by third-party ADS vendors (Consultils). "Our vendor did it" is not a defense.

Proactive testing: While not explicitly mandated, regulators will assess whether testing occurred and the testing's quality, scope, recency, results, and the employer's response to identified risks (Manatt, Phelps & Phillips, LLP). Unsupported vendor assurances carry little weight.

Transparency notices: Inform candidates when AI influences hiring, promotion, or employment decisions

Penalties: Enforcement through California Civil Rights Department. Violations treated as employment discrimination under FEHA.

Example - 25-person startup, California: Uses AI-powered interview platform from vendor. Startup is liable if platform discriminates, even if vendor built it. Must: (1) Retain 4 years of data on who was scored how and why, (2) Conduct or review bias testing, (3) Notify candidates AI is used.

Illinois: HB-3773 (Effective January 1, 2026)

Who it applies to: All Illinois employers (even 1-employee companies)

Requirements:

No discrimination (including unintentional): Using AI that results in discrimination violates the Illinois Human Rights Act; intent is not required (The HR Digest; Manatt, Phelps & Phillips, LLP)

Transparency: Inform employees and candidates when AI is used to make employment decisions (hiring, promotion, discipline, firing) (The HR Digest)

Human review: Final decisions cannot be fully automated

Critical difference: Illinois is the ONLY U.S. state with a private right of action: candidates who believe they were discriminated against through AI hiring tools can sue the employer directly in court (Hirevire), without first filing a government complaint.

Penalties: Civil rights lawsuits, damages, attorney's fees. Litigation risk is highest in Illinois.

Example - 12-person startup, Texas, hiring remote workers in Illinois: Uses AI to screen candidates. An Illinois candidate applies. Even though the company is in Texas, it is subject to Illinois law for that candidate. If the AI discriminates (even unintentionally), the candidate can sue directly.

Colorado: AI Act SB 24-205 (Effective June 30, 2026)

Who it applies to: "Deployers" of high-risk AI systems (employment AI is classified as high-risk)

Requirements (starting June 30, 2026):

Risk management policy: Create documented policy for identifying and mitigating algorithmic discrimination (Consultils)

Impact assessments: Evaluate high-risk AI systems to identify and mitigate potential harm (K&L Gates)

Transparency notices: Inform candidates and employees when AI influences employment decisions (hiring, firing, promotion) (K&L Gates)

Appeal rights: Provide mechanism for candidates/employees to appeal AI-influenced decisions

Public statement: Make publicly available statement about types of AI systems used and how they manage discrimination risk

Human review: Ensure final employment decisions include meaningful human review, not fully automated

Penalties: Violations are deemed an "unfair trade practice," with civil penalties up to $20,000 per violation (Consultils). Example: failing to notify 10 rejected applicants = $200,000 in potential fines. No private right of action; the Attorney General enforces.

Example - 8-person startup, Colorado: Plans to use AI interview tool. Before June 30, 2026 must: (1) Document risk management policy, (2) Conduct impact assessment, (3) Add notice to job postings, (4) Create appeal process, (5) Post public statement on website.

Texas: TRAIGA (Effective January 1, 2026)

Who it applies to: Texas employers using AI in employment decisions

Requirements:

Governance framework: Establish oversight for AI deployment

Avoid discrimination: Cannot use AI that discriminates based on protected characteristics

State council oversight: Subject to Texas AI advisory council guidance

Business-friendly provisions:

  • Disparate impact alone is NOT sufficient to demonstrate discriminatory intent (NatLawReview); plaintiffs must show intentional discrimination

  • No private right of action (NatLawReview); candidates cannot sue directly

Example - 20-person startup, Texas: Uses AI for candidate screening. Must establish governance, avoid discrimination, but has lower litigation risk than Illinois (no private right of action, higher bar for discrimination claims).

Florida: No AI-Specific Employment Laws (As of March 2026)

Current status: Florida has no state-level AI employment laws in effect.

What still applies: Federal employment discrimination laws (Title VII, ADEA, ADA) apply if AI creates discriminatory outcomes. EEOC can enforce.

Future: Florida's governor has proposed AI regulations, but none specific to employment have passed yet.

Multi-State Hiring: Your Compliance Burden

Critical rule: Employers recruiting remote workers may be subject to these laws even if they are not physically located in the regulating jurisdiction (Lexology).

Scenario - Startup in Florida, hiring nationally:

  • Candidate in NYC applies → Must comply with NYC Local Law 144

  • Candidate in California applies → Must comply with CA FEHA

  • Candidate in Illinois applies → Must comply with IL HB-3773 (highest risk—private right of action)

  • Candidate in Colorado applies → Must comply with CO AI Act (after June 30)

  • Candidate in Texas applies → Must comply with TRAIGA

You need compliance for the candidate's location, not your location.

Common AI Hiring Compliance Requirements (Across All States)

1. Bias Testing & Audits

What it means: Regularly test your AI tools for discriminatory impact on protected classes (race, sex, age, disability, etc.)

How to comply:

  • Require vendors to provide bias audit results

  • Conduct your own testing if using proprietary tools

  • Test at least annually (NYC requires yearly audits)

  • Document results and remediation steps
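The numbers a bias audit reports are mechanical to compute once you have outcome counts by category. A minimal Python sketch of the selection-rate and impact-ratio arithmetic (the group labels and counts are made up; note that Local Law 144 requires reporting impact ratios but does not itself set a pass/fail cutoff — the 0.8 flag below is the EEOC's "four-fifths" rule of thumb, used here only as an illustrative screening threshold):

```python
# Illustrative bias-audit arithmetic: selection rates and impact ratios.
# Counts are hypothetical; substitute your tool's actual outcomes by category.
outcomes = {
    "group_a": {"selected": 30, "applicants": 100},
    "group_b": {"selected": 18, "applicants": 90},
}

# Selection rate: fraction of applicants in each category the tool advanced.
rates = {g: d["selected"] / d["applicants"] for g, d in outcomes.items()}

# Impact ratio: each category's rate divided by the highest category's rate.
best = max(rates.values())
ratios = {g: r / best for g, r in rates.items()}

# Flag categories below the EEOC four-fifths (0.8) rule of thumb.
flagged = [g for g, ratio in ratios.items() if ratio < 0.8]

print(rates)    # group_a: 0.30, group_b: 0.20
print(ratios)   # group_a: 1.00, group_b: ~0.667
print(flagged)  # ['group_b']
```

An impact ratio well below 1.0 is exactly the kind of result regulators expect you to document and remediate, not just file away.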

2. Transparency & Notice

What it means: Inform candidates when AI is used in hiring decisions

How to comply:

  • Add notice to job postings: "This employer uses AI-powered tools to screen candidates"

  • Specify what the AI assesses (skills, qualifications, culture fit)

  • Disclose data sources used

  • Provide notice at least 10 days before use (NYC requirement)

3. Human Review

What it means: Final employment decisions cannot be fully automated and must include meaningful human review (Lexology)

How to comply:

  • AI can screen, score, rank—but human makes final decision

  • Human must have authority to override AI recommendation

  • Document human review occurred

  • Train hiring managers on AI limitations

4. Vendor Accountability

What it means: You are responsible for discriminatory outcomes from third-party AI tools (Consultils)

How to comply:

  • Contractual requirements: Vendor must provide bias testing, comply with applicable laws

  • Due diligence: Review vendor's testing methodology and results

  • Documentation: Maintain records of vendor compliance

  • Monitoring: Regularly review outcomes for discriminatory patterns

5. Data Retention

What it means: Keep records of AI decisions, inputs, outputs, and testing

How to comply:

  • California: 4 years minimum

  • Retain: Candidate data, AI scores/rankings, decision criteria, bias test results

  • Secure storage: Protect candidate privacy under data protection laws
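What "keep records of AI decisions, inputs, outputs, and testing" means in practice can be made concrete with a small record structure. A sketch only: the field names and dataclass shape below are our own illustration, not a schema any statute prescribes; the four-year horizon follows California's retention requirement.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative retention record for one AI-influenced screening decision.
# Field names are hypothetical; no statute prescribes this exact schema.
@dataclass
class ScreeningRecord:
    candidate_id: str
    tool_name: str        # which AEDT/ADS produced the output
    inputs_ref: str       # pointer to the stored input data (resume, answers)
    score: float          # the tool's output score or rank
    criteria: str         # qualifications/criteria the tool assessed
    human_reviewer: str   # who reviewed the AI output before the decision
    decision: str         # e.g. "advanced" or "rejected"
    bias_test_ref: str    # pointer to the applicable bias-test results
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def retain_until(self) -> datetime:
        # California's FEHA amendments require at least 4 years of retention.
        return self.created_at + timedelta(days=4 * 365)
```

Whatever system you use, the point is that each AI-touched decision links its inputs, the tool's output, the criteria, the human reviewer, and the bias testing in force at the time.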

What Happens If You Don't Comply?

Illinois (Highest Risk)

  • Candidates can sue directly (private right of action)

  • Civil rights lawsuit

  • Damages + attorney's fees

  • Reputational harm

California

  • California Civil Rights Department investigation

  • Treated as employment discrimination under FEHA

  • Penalties, damages, corrective action

  • Vendor liability doesn't protect you

Colorado (After June 30, 2026)

  • Attorney General enforcement

  • $20,000 per violation

  • Example: 50 applicants not notified = potential $1M in fines

New York City

  • Civil penalties for non-compliance

  • Active enforcement and audits underway

Federal (All States)

  • EEOC can enforce Title VII, ADEA, ADA if AI creates discriminatory outcomes

  • Even without AI-specific laws, discrimination is still illegal

Practical Compliance Steps (Right Now)

Step 1: Audit Your Current AI Use

Ask:

  • What tools do we use that screen, score, rank, or assess candidates?

  • ATS with auto-ranking?

  • Video interview analysis?

  • Resume parsing with scoring?

  • Skills assessment platforms?

Document: List every AI tool in your hiring process

Step 2: Identify Applicable Laws

Based on where you hire:

  • Hiring in NYC? → Comply with Local Law 144

  • Hiring in CA? → Comply with FEHA

  • Hiring in IL? → Comply with HB-3773 (highest risk)

  • Hiring in CO? → Prepare for June 30, 2026 compliance

  • Hiring in TX? → Comply with TRAIGA

Step 3: Get Vendor Documentation

Request from every AI vendor:

  • Bias audit results

  • Testing methodology

  • Compliance certifications for applicable states

  • Data retention policies

  • Contractual liability provisions

If vendor can't provide: Consider switching vendors or conducting your own testing.

Step 4: Add Transparency Notices

Update job postings: "[Company] uses AI-powered tools to screen candidates for this position. The AI assesses [specific qualifications]. Data sources include [resume, application responses, assessments]. Candidates may request an alternative selection process by contacting [email]."

Step 5: Implement Human Review

Ensure:

  • No automated rejections without human review

  • Hiring managers trained on AI limitations

  • Authority to override AI recommendations

  • Documentation of human involvement in decisions

Step 6: Create Compliance Documentation

Maintain:

  • Risk management policy (required in CO)

  • Bias testing records

  • Vendor compliance documentation

  • Candidate notices sent

  • Human review documentation

  • 4 years of data (CA requirement)

What to Know About Hiring with AI

AI hiring laws are not coming; they're here.

As of March 2026:

  • NYC requires annual bias audits (since 2023)

  • California holds you liable for vendor discrimination (since October 2025)

  • Illinois allows candidates to sue you directly (since January 2026)

  • Texas requires governance frameworks (since January 2026)

  • Colorado requires impact assessments, notices, and appeals (June 30, 2026)

Federal preemption is uncertain. Trump's executive order seeks to override state laws, but the Senate rejected an AI moratorium 99-1. Compliance with state and local AI laws remains mandatory, and employers should not assume that federal action will simplify obligations in the near term (Lexology).

Three actions this week:

  1. Inventory your AI hiring tools: List every system that screens, scores, or ranks candidates. Include your ATS, interview platforms, assessment tools.

  2. Request vendor compliance documentation: Email every AI vendor: "What bias testing have you conducted? Do you comply with NYC, CA, IL, CO, and TX AI hiring laws? Provide documentation."

  3. Add transparency notices: Update job postings to disclose AI use. This is required in multiple states and protects you legally.

The risk is real. Compliance is not optional.

Illinois candidates can sue you directly. California holds you liable for your vendor's mistakes. Colorado can fine you $20K per violation starting June 30.

AI is powerful. AI is also regulated. Use it, but comply with the law.

If you're using AI in hiring, you need to understand these requirements now. Not when you get audited. Not when you get sued. Now.

This content is provided for informational purposes only and does not constitute legal advice; for guidance on your specific situation, please consult with an employment attorney licensed in your state.
