The State of AI Regulation in 2026: States Fill the Federal Void

As we settle into February 2026, the landscape of artificial intelligence regulation in the United States is shifting dramatically. While many industry observers anticipated a comprehensive federal framework to emerge this year, recent signals from the Federal Trade Commission (FTC) suggest a significant pause in Washington, D.C. However, nature abhors a vacuum, and state legislatures are moving swiftly to fill it.

For HR leaders and talent management professionals, the spotlight has moved from the Capitol to the coasts. Hawaii and Washington state have introduced robust new bills targeting the use of AI in hiring and workplace decisions. These proposals, specifically Hawaii’s HB 2500 and Washington’s HB 2157, represent a turning point: we are moving from theoretical discussions of "ethical AI" to concrete legal obligations regarding transparency, bias audits, and accountability.

Hawaii's Double-Barreled Approach to Algorithmic Accountability

Hawaii is often associated with tourism, but in 2026, it is positioning itself as a pioneer in digital rights. The state legislature has introduced two significant pieces of legislation—HB 2500 and SB 2967—that would impose strict new obligations on both the vendors who build AI tools and the employers who deploy them.

HB 2500 proposes a comprehensive framework for "algorithmic decision systems" (ADS). The bill defines ADS broadly to include any machine-based tool that assists or replaces human decision-making in consequential matters, specifically citing hiring, promotion, and employee discipline.

Under this proposal, the responsibility is shared but distinct. Developers of AI tools would be required to disclose foreseeable uses and known legal risks to the companies buying their software. For employers (defined as "deployers"), the bar is set even higher. Before using an ADS that materially affects employment, companies must provide advance notice to candidates and employees. Furthermore, if a decision is made based on the system's output, the employer must provide a post-use disclosure identifying the specific personal characteristics the system relied upon—up to the top 20 most influential factors.
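
How the "top 20 most influential factors" must be computed is left open in the bill text, but for a simple linear scoring model the disclosure could amount to ranking each feature's weight-times-value contribution. The sketch below is a minimal illustration under that assumption; the feature names, weights, and the top_influential_factors helper are hypothetical, and non-linear models would likely need a dedicated explainability method (such as SHAP values) instead.

```python
# Minimal sketch of an HB 2500-style post-use disclosure for a linear scoring
# model. Feature names and weights are hypothetical; real systems may need
# vendor-provided or model-agnostic explanations instead.

def top_influential_factors(weights: dict[str, float],
                            candidate: dict[str, float],
                            n: int = 20) -> list[tuple[str, float]]:
    """Rank features by the magnitude of their contribution to the score."""
    contributions = {
        feature: weights.get(feature, 0.0) * value
        for feature, value in candidate.items()
    }
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:n]

# Hypothetical example: a screening score built from three features.
weights = {"years_experience": 0.8, "skills_match": 1.2, "employment_gap": -0.5}
candidate = {"years_experience": 4, "skills_match": 0.7, "employment_gap": 1}

for feature, contribution in top_influential_factors(weights, candidate):
    print(f"{feature}: {contribution:+.2f}")
```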

Complementing this is SB 2967, introduced as a consumer protection bill. It categorizes employment-related AI as "high-risk." This bill goes beyond simple transparency; it demands that deployers conduct annual impact assessments to test for disparate impact and bias. It also creates a right to correction and human review, effectively banning "black box" automated rejections without a pathway for appeal.
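
SB 2967 does not prescribe a testing methodology for these impact assessments, but one common benchmark is the EEOC "four-fifths rule," which flags any group whose selection rate falls below 80 percent of the highest-rate group. A minimal sketch of that calculation, using hypothetical group labels and counts:

```python
# Minimal sketch of a disparate-impact check along the lines of SB 2967's
# annual impact assessments. The bill does not mandate a specific test; this
# uses the EEOC "four-fifths rule" as one common benchmark. Group labels and
# counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group."""
    rates = {group: selection_rate(sel, total) for group, (sel, total) in outcomes.items()}
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, candidates screened).
outcomes = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (45, 90)}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths threshold
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```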

Washington’s Anti-Discrimination Push

Meanwhile, in the Pacific Northwest, Washington state is advancing its own regulatory agenda with HB 2157. Known as a hub for tech innovation, Washington is attempting to balance industry growth with civil rights protections.

HB 2157 focuses heavily on preventing AI discrimination in high-stakes decisions, including hiring and medical insurance. Unlike transparency-first laws that simply require a disclaimer, this bill would mandate that companies take active steps to protect individuals from discrimination embedded in algorithmic models.

The bill applies to businesses with over $100,000 in annual revenue, a threshold that captures the vast majority of employers using enterprise HR software. It specifically targets the "chilling effect" of biased algorithms, requiring that developers and deployers prove their tools do not disadvantage protected classes. This legislative move follows the work of the Washington State AI Task Force, established earlier, which has been releasing recommendations on "AI workplace guiding principles."

Washington’s approach is particularly notable because it signals a potential clash with the tech industry. Business groups have already voiced concerns that such strict liability for algorithmic outcomes could discourage the use of efficiency-boosting tools entirely. However, bill sponsors argue that in the absence of federal guidelines, state oversight is essential to protect workers' rights.

The Federal Pause: A Stark Contrast

The urgency seen in statehouses stands in stark contrast to the current mood at the federal level. In late January 2026, the FTC signaled a reduced appetite for new AI regulations. Speaking at a Privacy State of the Union Conference, FTC Bureau of Consumer Protection Director Chris Mufarrige stated there is "no appetite for anything AI-related" in the agency's immediate rulemaking pipeline.

This aligns with a broader deregulatory stance under the current administration, which emphasizes removing barriers to innovation. The FTC has indicated it will rely on "sparing" enforcement of existing laws rather than creating new AI-specific statutes. For example, while the agency recently settled a case regarding AI writing assistants, it effectively set aside broader rulemaking that would have standardized AI audits nationally.

For multi-state employers, this creates a complex compliance environment. Instead of a single federal standard, companies must now navigate a "patchwork" of state laws—complying with strict transparency rules in Hawaii, anti-discrimination mandates in Washington, and bias audit requirements in New York City, all while federal guidance remains minimal.

Comparison of Key State Proposals

To help Talent Management and HR Legal teams navigate these emerging requirements, the following table breaks down the core components of the new bills in Hawaii and Washington.

Feature | Hawaii (HB 2500 / SB 2967) | Washington (HB 2157)
Primary Focus | Transparency & Explanation | Anti-Discrimination & Protection
Trigger | Use of "Algorithmic Decision Systems" | Use in "High-Stakes Decisions"
Employer Obligations | Advance notice; post-decision factor disclosure | Protect against discrimination; proof of non-bias
Candidate Rights | Right to correct data; human review | Protection from disparate impact
Audit Requirements | Annual impact assessments (SB 2967) | Implicit in proving non-discrimination
Threshold | Broad application | Revenue > $100,000/year

What This Means for Employers in 2026

The divergence between state ambition and federal inaction places employers in a precarious position. Waiting for a national standard is no longer a viable strategy. HR departments must proactively adapt their employment law compliance strategies to meet the most stringent state requirements, regardless of where their headquarters are located.

1. Audit Your AI Inventory
Many organizations suffer from "shadow AI," where individual hiring managers use unauthorized tools for screening or interviewing. HR leadership must conduct a complete inventory of all algorithmic tools used in the hiring lifecycle to determine which ones fall under definitions of "high-risk" or "consequential" decision-making.
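
As a starting point, the inventory itself can be a simple structured register that records each tool's vendor, use case, and whether a human reviews its output. The sketch below is purely illustrative; the tool names and the keyword-based risk flag are hypothetical, and actual classification should track the statutory definitions of "consequential" or "high-risk" use in each state.

```python
# Illustrative sketch of an AI-tool inventory for the audit step above.
# Tool names and the simple use-case-based risk flag are hypothetical.

from dataclasses import dataclass

CONSEQUENTIAL_USES = {"hiring", "promotion", "discipline", "compensation"}

@dataclass
class AITool:
    name: str
    vendor: str
    use_case: str            # e.g., "hiring", "scheduling", "chat support"
    human_reviews_output: bool

    @property
    def high_risk(self) -> bool:
        """Flag tools that touch consequential employment decisions."""
        return self.use_case in CONSEQUENTIAL_USES

inventory = [
    AITool("ResumeRanker", "Acme HR", "hiring", human_reviews_output=False),
    AITool("ShiftBot", "Acme HR", "scheduling", human_reviews_output=True),
]

for tool in inventory:
    print(f"{tool.name}: {'HIGH-RISK' if tool.high_risk else 'low-risk'}")
```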

2. Demand Vendor Transparency
The burden of proof is shifting to employers ("deployers"), but the technical data resides with the vendors. Contracts must be updated to require vendors to provide the specific disclosures mandated by laws like Hawaii's HB 2500. If a vendor cannot explain the "top 20 influential factors" of their model, using that tool may effectively become illegal in certain jurisdictions.

3. Prepare for Human-in-the-Loop Requirements
Both Hawaii and Washington emphasize the need for human oversight. The days of fully automated rejection emails are numbered. Employers should establish workflows where AI serves as decision support, not the final decision-maker, and ensure that a human reviewer is available to handle appeals or corrections.
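
In practice, this can be as simple as never letting an adverse score trigger an automated rejection: positive recommendations may proceed, while negative ones land in a reviewer queue. The routing sketch below illustrates that pattern; the threshold, queue, and function names are hypothetical.

```python
# Minimal sketch of a human-in-the-loop workflow: the model's score is
# advisory, and any adverse outcome is routed to a reviewer queue instead of
# triggering an automated rejection. Threshold and names are hypothetical.

from collections import deque

review_queue: deque[dict] = deque()

def route_candidate(candidate_id: str, ai_score: float, threshold: float = 0.5) -> str:
    """Advance strong candidates automatically; never auto-reject."""
    if ai_score >= threshold:
        return "advance"                      # positive outcomes can proceed
    review_queue.append({"candidate": candidate_id, "score": ai_score})
    return "human_review"                     # adverse outcomes require a person

print(route_candidate("cand-001", 0.72))      # advance
print(route_candidate("cand-002", 0.31))      # human_review
print(len(review_queue), "candidate(s) awaiting review")
```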

The Path Forward

The year 2026 will be defined by this tug-of-war between state-level protections and federal deregulation. While the FTC pauses, states like Hawaii and Washington are ensuring that workplace AI does not operate in a legal gray zone. For forward-thinking companies, this is an opportunity to build trust. By voluntarily adopting high standards for transparency and fairness, organizations can future-proof their hiring practices against the inevitable wave of regulation that is just beginning to crash against the shore.