US Federal and State AI Laws for Private Sector
US Federal and State AI Laws for the Private Sector represent a rapidly evolving regulatory landscape aimed at ensuring responsible AI deployment while fostering innovation. **Federal Level:** At the federal level, there is no single comprehensive AI law. Instead, regulation is sector-specific and agency-driven. The Executive Order on Safe, Secure, and Trustworthy AI (2023) directs federal agencies to develop AI safety standards, conduct risk assessments, and address issues like bias and privacy. The National Institute of Standards and Technology (NIST) AI Risk Management Framework provides voluntary guidelines for managing AI risks. The Federal Trade Commission (FTC) actively enforces against deceptive or unfair AI practices, particularly regarding algorithmic bias, data privacy, and misleading AI claims. Existing laws like the Equal Credit Opportunity Act and Civil Rights Act apply to AI-driven decisions in lending, employment, and housing. The Blueprint for an AI Bill of Rights outlines principles including safe systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. **State Level:** States have been more aggressive in enacting AI-specific legislation. Colorado passed the Colorado AI Act (2024), requiring developers and deployers of high-risk AI systems to exercise reasonable care to avoid algorithmic discrimination. Illinois' Artificial Intelligence Video Interview Act requires employers to notify candidates when AI analyzes video interviews. California has proposed multiple AI bills addressing transparency, automated decision-making, and deepfakes.
New York City's Local Law 144 mandates bias audits for automated employment decision tools. Texas and other states have introduced laws targeting AI-generated content and deepfakes. **Key Themes:** Common themes across federal and state legislation include transparency and explainability requirements, algorithmic bias prevention, consumer notification about AI usage, accountability mechanisms, and risk-based approaches focusing on high-risk AI applications. Private sector organizations must navigate this patchwork of regulations, ensuring compliance across multiple jurisdictions while maintaining competitive AI capabilities. Understanding these laws is essential for AI governance professionals to implement compliant AI systems.
US Federal and State AI Laws for Private Sector – Comprehensive Guide for AIGP Exam
1. Why US Federal and State AI Laws Matter
The United States does not (yet) have a single comprehensive federal AI law analogous to the EU AI Act. Instead, AI governance in the US is shaped by a patchwork of existing federal statutes, new federal executive orders, agency guidance, and a rapidly growing body of state-level legislation. For any AI governance professional, understanding this landscape is critical because:
• Organizations operating in the US must comply with multiple overlapping legal regimes at both federal and state levels.
• Non-compliance can result in enforcement actions by the FTC, EEOC, CFPB, HHS, and state attorneys general.
• The lack of a unified framework means companies must proactively map their AI use cases against all applicable laws.
• Exam questions on this topic test your ability to identify which laws apply to specific AI scenarios and how they interact.
2. What Are US Federal AI Laws Relevant to the Private Sector?
2.1 Existing Federal Statutes Applied to AI
Although not originally written for AI, several federal laws are being applied to AI systems:
a) Section 5 of the FTC Act (Unfair or Deceptive Practices)
• The FTC has been the most active federal agency in AI enforcement.
• It uses its Section 5 authority to pursue companies whose AI systems cause unfair or deceptive outcomes.
• Key principle: If you make claims about your AI (e.g., "bias-free"), you must substantiate them.
• The FTC has pursued algorithmic disgorgement (forcing companies to delete models trained on improperly obtained data).
b) Civil Rights Act of 1964 (Title VII) & EEOC Guidance
• Title VII prohibits employment discrimination based on race, color, religion, sex, or national origin.
• The EEOC has issued guidance clarifying that employers can be liable for disparate impact caused by AI-powered hiring tools, even when the tool is provided by a third-party vendor.
• Key concept: Employers remain liable even if they outsource decision-making to an algorithmic tool.
c) Equal Credit Opportunity Act (ECOA) & Fair Credit Reporting Act (FCRA)
• ECOA prohibits discrimination in credit decisions. AI-based lending models must not discriminate on prohibited bases.
• FCRA requires notice and an opportunity to dispute when adverse actions are taken based on consumer reports. If an AI system uses consumer report data, FCRA obligations apply.
• The CFPB has emphasized that creditors must provide specific and accurate reasons for adverse actions, even when decisions are made by complex AI models.
d) Americans with Disabilities Act (ADA)
• AI hiring tools and other automated decision systems must not discriminate against individuals with disabilities.
• The DOJ and EEOC have jointly issued guidance on how AI tools may violate the ADA (e.g., video interview analysis tools that disadvantage people with speech or movement disabilities).
e) Health Insurance Portability and Accountability Act (HIPAA)
• AI systems processing protected health information (PHI) must comply with HIPAA's privacy and security rules.
• HHS has issued guidance on AI use in healthcare contexts.
f) Fair Housing Act (FHA)
• AI-driven advertising and tenant screening tools must not discriminate in housing-related decisions.
• HUD has pursued cases involving algorithmic discrimination in housing ad targeting.
2.2 Executive Orders and Federal Frameworks
a) Executive Order 14110 (October 2023) – Safe, Secure, and Trustworthy AI
• The most significant federal AI policy action to date.
• Key provisions for the private sector include:
- Developers of the most powerful AI systems (dual-use foundation models) must share safety test results with the federal government under the Defense Production Act.
- NIST is directed to develop standards, guidelines, and best practices for AI safety and security (including red-teaming standards).
- Agencies are directed to issue guidance on AI use in regulated sectors (healthcare, finance, transportation, etc.).
- Emphasis on preventing AI-enabled fraud, deepfakes, and discrimination.
- Directs agencies to address AI's impact on the labor market and workers' rights.
b) NIST AI Risk Management Framework (AI RMF 1.0)
• While voluntary, the AI RMF is increasingly referenced by regulators and is considered a best practice.
• It organizes AI risk management into four functions: Govern, Map, Measure, Manage.
• It is important for exam purposes because regulators point to it as a benchmark for responsible AI development.
c) Blueprint for an AI Bill of Rights (OSTP, October 2022)
• Non-binding framework identifying five principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives/Fallback.
• While not legally enforceable, it signals policy priorities and may influence future legislation.
2.3 Agency-Specific AI Guidance
• FTC: Multiple blog posts, reports, and enforcement actions on AI claims, bias, and data practices.
• EEOC: Technical Assistance documents on AI and Title VII, ADA compliance.
• CFPB: Guidance on AI in lending, adverse action notices, and chatbot use.
• FDA: Framework for AI/ML-based Software as a Medical Device (SaMD).
• SEC: Proposed rules on predictive data analytics and conflicts of interest in broker-dealer and investment adviser AI use.
• DOT/NHTSA: Guidance on autonomous vehicles.
3. What Are Key US State AI Laws?
State legislatures have been extremely active. Key state laws include:
a) Illinois – Artificial Intelligence Video Interview Act (AIVIA, effective 2020)
• Requires employers to: notify applicants that AI is used to analyze video interviews, explain how the AI works, and obtain consent before the interview.
• Limits on sharing the video and requires destruction upon request.
b) Illinois – Biometric Information Privacy Act (BIPA)
• Although not AI-specific, it is highly relevant to AI systems using facial recognition and biometric data.
• Requires informed consent, has a private right of action, and has led to significant litigation and settlements (e.g., Clearview AI, Facebook).
c) New York City – Local Law 144 (Automated Employment Decision Tools, 2023)
• Requires employers and employment agencies using automated employment decision tools (AEDTs) to:
- Conduct an independent bias audit no more than one year before use.
- Publish the results of the bias audit on their website.
- Provide notice to candidates that an AEDT is being used.
• Applies to tools that substantially assist or replace discretionary decision-making in hiring and promotion.
d) Colorado – AI Act (SB 24-205, signed 2024)
• One of the most comprehensive state AI laws, modeled partly on the EU AI Act.
• Focuses on high-risk AI systems that make or substantially contribute to consequential decisions in areas like employment, education, financial services, healthcare, housing, insurance, and legal services.
• Requires developers and deployers to use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.
• Deployers must: conduct impact assessments, provide notice to consumers, offer an opportunity to appeal, and report to the Attorney General.
• Developers must: provide documentation, disclose known risks, and describe training data.
• Includes an affirmative defense for compliance with recognized AI risk management frameworks (e.g., NIST AI RMF).
• Effective February 1, 2026.
e) Texas – AI Advisory Council & Deepfake Laws
• Created an AI advisory council to study and monitor AI.
• Enacted laws criminalizing certain deepfake uses (e.g., election-related deepfakes, non-consensual intimate images).
f) California – Various AI-Related Bills
• Multiple proposals and enacted laws covering deepfakes, AI transparency, generative AI watermarking, and AI in healthcare.
• California's existing CCPA/CPRA covers automated decision-making and profiling, directing the California Privacy Protection Agency to issue regulations granting consumers opt-out and access rights regarding automated decisions.
• AB 2013 (2024) requires developers of generative AI to post documentation on training data on their websites.
g) Connecticut, Virginia, Montana, and Other States
• Several states have enacted or are considering laws requiring AI impact assessments, transparency in government AI use, and restrictions on specific AI applications like facial recognition.
• State consumer privacy laws (Virginia VCDPA, Connecticut CTDPA, etc.) include provisions on profiling and automated decision-making.
h) Deepfake and Synthetic Media Laws
• Multiple states have enacted laws targeting AI-generated deepfakes, particularly in the contexts of elections and non-consensual intimate imagery (e.g., Texas, California, Minnesota, Washington).
4. How This Regulatory Landscape Works in Practice
For private sector organizations, the practical implications are:
Step 1: Map AI Use Cases
• Identify all AI systems in use, their purposes, and the data they process.
Step 2: Identify Applicable Laws
• Determine which federal statutes, agency rules, and state laws apply based on: the sector (healthcare, finance, employment, etc.), the jurisdiction (where the company operates and where affected individuals reside), and the type of AI application.
Step 3: Conduct Risk and Impact Assessments
• Perform algorithmic impact assessments as required by specific laws (e.g., Colorado AI Act) or as a best practice aligned with the NIST AI RMF.
Step 4: Implement Transparency and Notice Requirements
• Provide required notices to individuals (e.g., NYC Local Law 144 candidate notice, Illinois AIVIA consent requirements).
Step 5: Establish Bias Auditing and Testing
• Conduct independent bias audits where required (NYC Local Law 144) and test for disparate impact under federal anti-discrimination laws.
Step 6: Ensure Vendor and Third-Party Accountability
• Maintain accountability even when AI tools are procured from third-party vendors (per EEOC, FTC, and state law requirements).
Step 7: Document and Report
• Maintain records of impact assessments, audit results, and compliance measures. Some laws require public disclosure or reporting to regulators.
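The triage logic behind Steps 1 and 2 can be sketched as a simple rules table. The sketch below is purely illustrative: the rule set is incomplete, the law names and attribute fields are this example's own simplifications, and real compliance mapping requires legal review rather than a lookup function.

```python
from dataclasses import dataclass, field


@dataclass
class AIUseCase:
    """Minimal description of one AI system for a first-pass legal triage."""
    sector: str                                 # e.g. "employment", "credit", "healthcare", "housing"
    jurisdictions: set[str] = field(default_factory=set)   # e.g. {"NYC", "IL", "CO"}
    uses_biometrics: bool = False               # facial recognition, voiceprints, etc.
    makes_consequential_decisions: bool = False # Colorado AI Act-style "consequential decisions"


def laws_to_review(uc: AIUseCase) -> list[str]:
    """Return a rough, non-exhaustive list of regimes to review (not legal advice)."""
    laws = ["FTC Act Section 5"]  # consumer-protection baseline applies across sectors
    if uc.sector == "employment":
        laws += ["Title VII / EEOC guidance", "ADA"]
        if "NYC" in uc.jurisdictions:
            laws.append("NYC Local Law 144 (bias audit + candidate notice)")
        if "IL" in uc.jurisdictions:
            laws.append("Illinois AI Video Interview Act (notice/consent)")
    elif uc.sector == "credit":
        laws += ["ECOA", "FCRA (adverse action notices)"]
    elif uc.sector == "healthcare":
        laws.append("HIPAA")
    elif uc.sector == "housing":
        laws.append("Fair Housing Act")
    if uc.uses_biometrics and "IL" in uc.jurisdictions:
        laws.append("Illinois BIPA (consent; private right of action)")
    if uc.makes_consequential_decisions and "CO" in uc.jurisdictions:
        laws.append("Colorado AI Act (impact assessment, notice, appeal)")
    return laws


# Example: a hiring tool with video analysis, deployed in New York City and Illinois
hiring_tool = AIUseCase(
    sector="employment",
    jurisdictions={"NYC", "IL"},
    uses_biometrics=True,
)
for law in laws_to_review(hiring_tool):
    print(law)
```

The value of even a toy table like this is that it forces the three questions from Step 2 into explicit fields: sector, jurisdiction, and application type. Each new AI use case then gets the same systematic screening rather than an ad hoc review.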
5. Key Themes and Concepts for the Exam
• No single comprehensive federal AI law: The US relies on a sectoral approach plus executive action.
• Existing laws apply to AI: Anti-discrimination, consumer protection, and privacy laws are being actively enforced against AI systems.
• FTC is the primary federal enforcement agency for AI in the private sector.
• Employer liability persists even when using third-party AI tools (EEOC guidance).
• State laws are filling the gap: NYC Local Law 144, the Colorado AI Act, and Illinois BIPA/AIVIA are among the most tested.
• Impact assessments and bias audits are becoming legal requirements, not just best practices.
• Notice and transparency are recurring obligations across multiple laws.
• The Colorado AI Act is one of the most comprehensive state AI laws and closely mirrors EU AI Act concepts like high-risk categorization.
• Executive Order 14110 is the most significant federal executive action on AI and has implications for dual-use foundation model developers.
• NIST AI RMF is voluntary but increasingly referenced as a standard of care.
6. Exam Tips: Answering Questions on US Federal and State AI Laws for Private Sector
Tip 1: Know the Key Laws and Their Scope
• Be able to match each law to its subject matter: FTC Act → deceptive/unfair practices; Title VII/EEOC → employment discrimination; ECOA/FCRA → credit and lending; BIPA → biometrics; NYC LL 144 → automated hiring tools; Colorado AI Act → high-risk AI broadly.
Tip 2: Focus on Enforcement Agencies
• Know which agency enforces what: FTC (consumer protection), EEOC (employment discrimination), CFPB (financial/credit), HUD (housing), FDA (medical devices), state AGs (state laws).
Tip 3: Understand the "Patchwork" Nature
• If a question asks about the US approach to AI regulation, emphasize the sectoral, multi-layered approach rather than a single comprehensive law.
Tip 4: Distinguish Binding Law from Guidance
• The Blueprint for an AI Bill of Rights is non-binding. The NIST AI RMF is voluntary. Executive Orders direct federal agencies but do not directly create private sector obligations (though they trigger agency actions that do). Federal statutes and state laws are binding.
Tip 5: Remember Vendor Liability
• A common exam trap: organizations cannot avoid legal liability by outsourcing AI decisions to vendors. Under EEOC guidance and many state laws, the deployer/employer remains responsible.
Tip 6: Know NYC Local Law 144 Details
• This law is frequently tested. Remember: independent bias audit, results published publicly, notice to candidates, applies to tools that "substantially assist" or replace discretionary hiring/promotion decisions.
Tip 7: Know Colorado AI Act Basics
• High-risk AI systems, consequential decisions, developer vs. deployer obligations, impact assessments, affirmative defense for following recognized frameworks, effective date of February 1, 2026.
Tip 8: Anti-Discrimination Is Cross-Cutting
• Multiple laws (Title VII, ADA, ECOA, FHA) all address AI-driven discrimination in their respective domains. If a question mentions discrimination or disparate impact, consider which specific anti-discrimination statute applies based on context (employment, credit, housing).
Tip 9: Watch for Adverse Action Notice Requirements
• Under FCRA and ECOA, consumers must receive specific reasons for adverse actions. The CFPB has stressed that "the algorithm decided" is not an acceptable explanation. AI-driven decisions require meaningful explanations.
Tip 10: Understand EO 14110 Key Provisions
• Dual-use foundation model reporting requirements, NIST safety standards development, red-teaming mandates, and sector-specific agency guidance are the most testable elements.
Tip 11: Eliminate Answers That Suggest Federal Preemption
• There is currently no federal AI law that preempts state AI laws. State laws operate alongside federal requirements, creating overlapping compliance obligations.
Tip 12: Process of Elimination on Scenario Questions
• When given a scenario, first identify the domain (employment, healthcare, finance, etc.), then the jurisdiction (NYC, Illinois, Colorado, federal), then the specific obligation (notice, audit, impact assessment, non-discrimination). This systematic approach will help you select the correct answer efficiently.
Tip 13: Biometric Privacy = Think BIPA
• If a question involves facial recognition, fingerprints, voiceprints, or other biometric identifiers in Illinois, BIPA is almost certainly the relevant law. Remember its private right of action, which distinguishes it from most other privacy laws.
Tip 14: Stay Current but Focus on Established Laws
• The exam is likely to focus on laws and guidance that are finalized and well-established rather than pending proposals. Focus your study on enacted statutes and published agency guidance.
By understanding the layered, sectoral nature of US AI regulation—and being able to quickly identify which federal or state law applies to a given scenario—you will be well-prepared to answer AIGP exam questions on this critical topic.