

AI in HR: From Brussels to Sacramento, What Employers Need to Know


Denise Attar, Esq., dattar@tuckerlaw.com, (412) 594-5513

Artificial intelligence is reshaping how companies recruit, evaluate, and manage talent. From resume screening to performance management, AI-driven tools are increasingly embedded in daily HR operations. Yet as adoption accelerates, regulators are stepping in, especially in Europe.

With the EU Artificial Intelligence Act (“EU AI Act”)[1] now in force, U.S. employers with any connection to EU candidates or employees face new compliance obligations that may arrive sooner than expected. Even though the United States has not yet enacted a comparable national law, recent developments in California suggest a similar regulatory framework may be emerging domestically.

Understanding the obligations under the EU AI Act, alongside emerging AI initiatives in California, can help U.S. employers anticipate potential compliance requirements and prepare for an evolving regulatory environment.

The EU AI Act: Global Reach and Phased Rollout

The EU AI Act, which took effect on August 1, 2024, is the first comprehensive legal framework for artificial intelligence worldwide. Its purpose: to ensure that AI systems used within the EU are safe, transparent, and respect human rights. But its reach extends well beyond Europe.

Any company, whether located in the EU or not, falls within the Act’s scope if its AI systems are used in the EU or if their outputs affect individuals there. For U.S. employers, this means that AI tools built or deployed domestically may still create compliance obligations if they process applications or employee data involving EU-based individuals. This is true whether the employer uses a third-party U.S. recruitment platform that scores or ranks candidates (including EU residents) or develops internal AI-driven systems to evaluate employees that include EU personnel. In both situations, the system’s results feed into decisions that affect individuals in the EU, bringing those tools within the scope of the EU AI Act.

The EU AI Act follows a risk-based model, applying the most stringent requirements to “high-risk” AI systems: those with the potential to affect people’s rights or livelihoods. Unsurprisingly, this category includes employment-related AI tools used in hiring, promotion, performance management, or termination. It covers resume-screening algorithms, candidate-ranking models, and predictive tools for retention or promotion decisions.

Employers using such systems must perform risk and impact assessments, ensure ongoing human oversight, maintain traceability and logging, and guarantee transparency to individuals subject to these systems. They also must implement safeguards for accuracy, robustness, and cybersecurity.

The principle of human oversight is central: AI must not make employment decisions in a black box. A human must understand, question, and override algorithmic recommendations when necessary, and this oversight must be documented.

The AI Act also bans certain applications outright: those presenting an “unacceptable risk.” These include systems that exploit individuals’ vulnerabilities, use subliminal techniques to manipulate behavior, or engage in “social scoring” (ranking individuals based on behavior or perceived trustworthiness).

Importantly for employers, the law prohibits emotion recognition technologies in workplaces and schools, as well as systems that predict criminal behavior or job suitability through personality profiling. It also forbids untargeted scraping of facial images to build biometric databases. For employers looking to hire vendors offering AI-based hiring tools that rely on emotion detection, micro-expression analysis, or “cultural fit” predictions, these prohibitions are critical. Employers should review contracts carefully; if a vendor’s system uses emotional or biometric data without clear consent or legal basis, it may be illegal under EU law.

It is important to note that the EU AI Act operates in tandem with the EU’s other significant regulatory law, the General Data Protection Regulation (GDPR), not as a replacement. Employers processing personal data of EU residents must comply with both. Under GDPR Articles 13–22,[2] candidates have the right to be informed about automated processing, to object to it, and not to be subject to decisions based solely on automation that produce significant effects, such as hiring outcomes.

In practice, this means:

  • Employers must disclose AI use during hiring.
  • Fully automated resume rejections (with no human review) likely violate GDPR Article 22.
  • Human review must be meaningful and documented.
  • Candidates may request explanations or corrections.

For multinational employers, the key takeaway is that the AI Act’s system-level requirements, when combined with GDPR’s data rights, create a comprehensive regime of accountability and transparency.

The EU AI Act is being implemented in phases.[3] The upcoming February 2026 Commission guidance and the August 2026 compliance date remain the key markers for U.S. employers that may be considered deployers when using high-risk AI tools involving EU-based individuals.[4] These milestones are expected to clarify how the Act’s obligations apply in employment settings, and organizations with cross-border operations may find it useful to monitor these developments as they refine their AI governance approaches.

California’s Emerging AI Rules: Implications for Employers

California has emerged as one of the most active U.S. jurisdictions on AI oversight, relying on existing civil rights, consumer protection, and privacy laws to regulate AI systems in the absence of comprehensive federal rules. Three recent developments are particularly relevant to employers.

1. SB 53, Transparency in Frontier Artificial Intelligence Act[5]

Signed into law in September 2025, SB 53 takes effect on January 1, 2026, and requires developers of large-scale AI models (“frontier models”) to implement transparency frameworks, safety evaluations, and risk reporting. While aimed primarily at developers, it underscores California’s broader intent to impose guardrails on AI systems that influence employment decisions downstream.

2. CPPA Regulations, Automated Decision-Making, and Cybersecurity Audits[6]

The California Privacy Protection Agency (CPPA) finalized new regulations in late 2025 under the California Consumer Privacy Act (CCPA). These rules require businesses using automated decision-making technology (ADMT) to conduct risk assessments and cybersecurity audits, particularly when ADMT is used for hiring, promotion, or termination.

The CPPA explicitly grants employees and job applicants the right to know when such technology is used, to understand how decisions are made, and to request human review. These rights parallel key provisions of the EU AI Act, particularly transparency and human-in-the-loop requirements.

3. CCRC Regulations, Bias, and Fairness in Automated Employment Decisions[7]

The California Civil Rights Council’s (CCRC) Employment Regulations Regarding Automated Decision Making took effect on October 1, 2025. The regulations are aimed at ensuring that AI and automated employment decision tools do not produce discriminatory impacts. They require employers to evaluate and document the fairness of AI systems, conduct pre-deployment testing, and provide notice to applicants and employees.

Together, SB 53, the CPPA regulations, and the CCRC’s rules create a multi-layered framework governing automated decision-making, discrimination risk, and transparency obligations. While these measures do not replicate the EU’s structure, California’s approach is shaping early U.S. trends and is likely to influence emerging laws in states such as New York, Colorado, and Illinois.

What U.S. Employers Should Be Doing Now

For employers operating across borders, or even across state lines, the evolving AI regulations in the EU and California underscore the importance of staying informed. Organizations may wish to evaluate the feasibility of undertaking the following steps in light of their operational needs, resources, and the potential impact on employees and applicants:

  1. Inventory AI Systems
    Identify all AI-driven tools in your employment lifecycle. Document where they are used, what data they process, and whether any outputs affect EU residents or California employees.
  2. Assess Vendor Compliance
    Require AI vendors to provide bias testing, risk assessment reports, and compliance attestations.
  3. Implement Human Oversight Protocols
    Establish review checkpoints where humans validate AI outputs before final employment decisions. Maintain logs to demonstrate oversight.
  4. Develop Transparency Statements
    Inform candidates and employees when AI tools are used. Provide clear explanations of their function and the human oversight involved.
  5. Monitor Fairness and Bias
    Conduct periodic audits to detect disparate impacts on protected groups. Even without AI-specific laws, existing anti-discrimination statutes may apply.
  6. Coordinate Global Compliance
    If your company operates in the EU, align AI governance programs with the AI Act’s risk management and transparency principles. Doing so will position you for both EU and state-level compliance in the U.S.

Looking Ahead

The EU AI Act provides an international reference point, while California’s SB 53, CPPA, and CCRC rules reflect an emerging U.S. framework at the state level. For employers, these developments illustrate that AI regulations in employment are continuing to evolve across multiple jurisdictions. Monitoring these changes and staying aware of upcoming guidance and deadlines can help organizations understand how their AI systems may be affected.

For employers who want to discuss these developments and explore potential implications for their operations, our Labor & Employment attorneys are available to help you navigate this evolving landscape.

Generative AI lent a hand in preparing this article, mostly to refine the headers and polish the language (including this very disclaimer). Verdict on whether it worked is left to the reader.


[1] Regulation (EU) 2024/1689, Artificial Intelligence Act, O.J. (L 1689), available at https://eur-lex.europa.eu/eli/reg/2024/1689/oj.

[2] General Data Protection Regulation, Regulation (EU) 2016/679, art. 22, 2016 O.J. (L 119) 1, available at https://gdpr-info.eu/art-22-gdpr/.

[3] Governance & Implementation: Application Timeline, European Commission, DIGITAL STRATEGY, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (last visited Nov. 20, 2025).

[4] Implementation Timeline, EU Artificial Intelligence Act, FUTURE OF LIFE INST., https://artificialintelligenceact.eu/implementation-timeline/ (last updated Aug. 1, 2024).

[5] Senate Bill 53, 2025–2026 Reg. Sess. (2025) (adding Chapter 25.1, Bus. & Prof. Code § 22757.10 et seq.), (Transparency in Frontier Artificial Intelligence Act), available at https://legiscan.com/CA/text/SB53/id/3270002.

[6] Cal. Code Regs. tit. 11, §§ 7120–7200 (eff. Jan. 1, 2026), available at https://cppa.ca.gov/regulations/ccpa_updates.html.

[7] Cal. Code Regs. tit. 2, § 11008.1 (eff. Oct. 1, 2025), available at https://calcivilrights.ca.gov/wp-content/uploads/sites/32/2025/06/Final-Text-regulations-automated-employment-decision-systems.pd




November 25, 2025
