
The Shifting Federal Landscape on AI Regulation
The landscape of AI legislation in the United States is rapidly evolving, creating new challenges and opportunities for businesses. Navigating this complex web of federal directives and state-level laws is crucial for maintaining compliance and leveraging AI responsibly. For US firms, understanding the core principles of these regulations is the first step toward building a future-proof AI strategy.
At the federal level, recent executive orders have set the tone for national policy, establishing foundational principles for AI safety, security, and trustworthiness. These directives have significant implications for companies developing or deploying high-impact AI systems.
President Biden’s Foundational Executive Order
President Biden’s 2023 Executive Order on AI (Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”) established a comprehensive framework for governing artificial intelligence. It directed federal agencies to develop new standards to protect consumers, workers, and national security. A key outcome was tasking the National Institute of Standards and Technology (NIST) with developing guidelines for AI development and deployment, effectively setting a national benchmark for what constitutes safe and trustworthy AI.
Key Compliance Pillars for US Businesses
For businesses, compliance with emerging AI legislation hinges on understanding several key pillars, most notably the guidelines developed by NIST and new reporting requirements for high-risk AI models.
Understanding the NIST AI Risk Management Framework
The NIST AI Risk Management Framework (AI RMF) provides a voluntary but highly influential set of guidelines for managing risks associated with AI systems. It is organized around four core functions:
- Govern: Create a culture of risk management.
- Map: Identify the context and potential risks of an AI system.
- Measure: Analyze and track identified risks using qualitative and quantitative methods.
- Manage: Treat risks by prioritizing actions to mitigate them.
Adopting this framework is becoming a de facto requirement for businesses seeking to demonstrate due diligence and build trust with consumers and regulators.
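To make the four functions concrete, here is a minimal, hypothetical sketch of how a team might structure an internal AI risk register around them. The class names, fields, and scoring method are illustrative assumptions, not part of the framework itself, and a real register would carry far more detail.

```python
from dataclasses import dataclass, field
from enum import Enum


class Rating(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    # Map: describe the risk and the context in which it arises.
    description: str
    context: str
    # Measure: qualitative ratings; real programs often add quantitative metrics.
    severity: Rating
    likelihood: Rating
    # Manage: the mitigation chosen for this risk, if any.
    mitigation: str | None = None

    def priority(self) -> int:
        # A simple score used to rank mitigation work.
        return self.severity.value * self.likelihood.value


@dataclass
class AISystemRecord:
    # Govern: every system has a named owner who is accountable for it.
    name: str
    owner: str
    risks: list[Risk] = field(default_factory=list)

    def top_risks(self, n: int = 3) -> list[Risk]:
        return sorted(self.risks, key=lambda r: r.priority(), reverse=True)[:n]


if __name__ == "__main__":
    record = AISystemRecord(name="resume-screening-model", owner="HR Analytics")
    record.risks.append(
        Risk(
            description="Disparate impact on protected groups",
            context="Automated candidate ranking",
            severity=Rating.HIGH,
            likelihood=Rating.MEDIUM,
            mitigation="Quarterly bias audit of model outputs",
        )
    )
    for risk in record.top_risks():
        print(f"[priority {risk.priority()}] {risk.description} -> {risk.mitigation}")
```

Even a lightweight structure like this makes it easier to show regulators that risks are being identified, measured, and tracked rather than handled ad hoc.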
Navigating Reporting and Transparency Requirements
A significant aspect of the federal approach involves transparency. Under the executive order, companies developing the most powerful AI models, those that could pose a risk to national security, must report their AI training activities and safety test results to the federal government. This obligation extends to IaaS (Infrastructure-as-a-Service) providers, who must report transactions with foreign entities training large-scale AI models.
A Patchwork of State-Level AI Laws
While federal policy sets a broad direction, states have been highly active in proposing and enacting their own AI laws. In 2025 alone, all 50 states introduced legislation related to artificial intelligence, creating a complex compliance patchwork for firms that operate nationally. These laws often focus on specific applications of AI, such as:
- Deepfakes: Many states have passed laws criminalizing the malicious use of deepfakes, particularly in election-related content and non-consensual intimate imagery.
- Automated Decision-Making: States like California and Connecticut are incorporating AI governance into their existing data privacy laws, requiring transparency when AI is used to make significant decisions about individuals.
- Chatbot Disclosure: A common theme is requiring clear disclosure to users when they are interacting with an AI chatbot rather than a human, as sketched below.
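What such a disclosure looks like in practice is a product decision constrained by the statute in question. The sketch below is a minimal, hypothetical example of surfacing a disclosure as the first message of every chat session; the wording and the “agent” escape hatch are illustrative assumptions, not language drawn from any particular state law.

```python
# A minimal, hypothetical chatbot disclosure. The required wording and
# mechanics vary by state law; this text is illustrative only.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Type 'agent' at any time to request a human representative."
)


def open_chat_session(user_name: str) -> list[str]:
    """Return the opening messages for a new chat session.

    The disclosure is always the first message, so users see it before
    any substantive interaction takes place.
    """
    return [
        AI_DISCLOSURE,
        f"Hi {user_name}! How can I help you today?",
    ]


if __name__ == "__main__":
    for message in open_chat_session("Dana"):
        print(message)
```

Placing the disclosure in the session-opening logic, rather than in the UI layer alone, makes it harder for a later redesign to accidentally drop it.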
Practical Steps for US Firms to Ensure Compliance
To stay ahead of the curve, US firms should take proactive steps. Start by conducting an inventory of all AI systems in use and assessing their potential risks, as illustrated below. Adopting the NIST AI RMF is a critical step toward building a robust governance structure. Finally, stay informed about both federal and state-level AI legislation, as the regulatory environment continues to change quickly. Developing an internal culture of responsible AI innovation is no longer just good practice; it is essential for long-term success.
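As a starting point, even a flat list can reveal gaps. The following is a minimal, hypothetical sketch of a first-pass inventory; the field names and risk tiers are illustrative assumptions to be adapted to your own governance program and the jurisdictions where you operate.

```python
# A minimal, hypothetical first-pass AI system inventory. Field names and
# risk tiers are illustrative; adapt them to your own governance program.
inventory = [
    {"system": "customer-support-chatbot", "vendor": "in-house",
     "use_case": "consumer interaction", "risk_tier": "medium",
     "last_assessed": "2025-03-01"},
    {"system": "resume-screening-model", "vendor": "third-party",
     "use_case": "employment decisions", "risk_tier": "high",
     "last_assessed": None},
]

# Flag systems that have never been risk-assessed, high-risk tiers first.
unassessed = [s for s in inventory if s["last_assessed"] is None]
for system in sorted(unassessed, key=lambda s: s["risk_tier"] != "high"):
    print(f"NEEDS ASSESSMENT: {system['system']} ({system['use_case']})")
```

Once gaps like the unassessed high-risk system above are visible, they can feed into a fuller RMF-style risk register such as the one sketched earlier.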
Would you like to integrate AI efficiently into your business? Get expert help – Contact us.