Executing Orders: Responsible AI Governance Meets Executive Actions

Anne Fernandez | Wednesday, July 30, 2025

For those closely following the rapid evolution of Artificial Intelligence, the recent actions by the US government are certainly front and center. On July 23, 2025, the White House unveiled "Winning the Race: America's AI Action Plan," alongside three accompanying Executive Orders (EOs). This comprehensive plan marks a significant shift in the federal approach to AI governance, building on President Trump's Executive Order 14179 from January 2025, which called for an "action plan" to bolster American global AI dominance.

Understanding and implementing durable AI governance processes is not just a regulatory hurdle but a strategic imperative. As a practitioner deeply immersed in the nuances of technology and its societal implications, Dr. Dan Grahn has observed a striking divergence in global AI governance approaches, particularly between the European Union (EU) and the United States, one that profoundly shapes how organizations build resilient AI strategies.

This article is based on a webinar presented by Dr. Dan Grahn on July 24, 2025. You can watch the full one-hour webinar here: Executing Orders: Responsible AI Governance Meets Executive Actions.

The Diverging Paths of AI Governance: EU vs. US

When it comes to AI governance, the EU and the US are charting distinctly different courses. The EU is embracing a centralized, regulatory framework, akin to building a "fortress" around AI use. This approach is characterized by a comprehensive legal framework with clear boundaries and significant penalties, such as fines of up to €35 million or 7% of global annual revenue, whichever is higher, and a harmonized approach across its 27 member states.
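
To see how quickly that penalty ceiling scales with company size, here is a minimal sketch of the "whichever is higher" cap for the Act's top penalty tier; the revenue figure is hypothetical:

```python
# Sketch: the EU AI Act's headline penalty cap is EUR 35 million or 7% of
# worldwide annual revenue, whichever is higher. Revenue below is hypothetical.

def max_eu_ai_act_fine(global_annual_revenue_eur: float) -> float:
    """Return the upper bound of a fine for a top-tier violation."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A company with EUR 2 billion in global revenue faces a cap of EUR 140 million,
# because 7% of revenue exceeds the EUR 35 million floor.
print(f"EUR {max_eu_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```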

In contrast, the US is adopting a more decentralized, market-driven model: a "frontier" approach. The American AI regulatory landscape is a dynamic, at times chaotic, mix of federal executive orders, state legislation, and enforcement actions. Instead of a single overarching law, organizations face a patchwork of more than 50 state-level approaches. The absence of comprehensive federal AI legislation has created a "federal vacuum" that states are rapidly filling, with over 45 states introducing nearly 70 AI bills in 2024 alone. States like Colorado, California, and Texas are developing unique approaches, focusing on duties of care, consumer rights, and targeted prohibitions, respectively. Multi-jurisdictional compliance has become a permanent operational requirement for businesses.

A notable illustration of this fragmented approach was the Senate's rejection of a federal AI regulation moratorium on July 1, 2025, by a vote of 99-1. This outcome effectively solidified the reality that state-level regulation will continue to spread, reinforcing the principle of federalism that is core to American governance.

The Impermanence of Executive Orders and the Need for Durable Processes

Presidential transitions in the US can lead to immediate and significant shifts in AI policy, directly impacting business strategies. We've seen this play out with varying executive orders: from Trump's EO 13859 in 2019 focusing on a light regulatory touch, to Biden's EO 14110 in 2023 emphasizing a comprehensive risk framework, and then a dramatic shift again with Trump's EO 14179 in January 2025, which prioritized barrier removal and replaced the previous "Safe, Secure, and Trustworthy AI" directive. Most recently, on July 23, 2025, "America's AI Action Plan" was released, a 23-page strategic document with over 90 federal policy recommendations aimed at accelerating innovation, building infrastructure, and establishing international leadership in AI.

Here are some of the main points of the US AI Action Plan:

  • Accelerating Innovation: Deregulate AI and curtail agency actions that might hinder its progress. Promote "free speech AI". Encourage open-weight models.
  • Building Infrastructure: Streamline permitting for AI-related infrastructure. Improve the energy grid and build military data centers. Expand available compute capacity.
  • International Leadership: Promote American AI exports. Counter Chinese influence. Evaluate frontier risks. Combat synthetic media in the legal system and invest in biosecurity.
  • Federal Procurement & Oversight: Federal contractors must adhere to federal AI procurement standards, an "American AI" preference, and Chief AI Officer oversight structures. Requirements for "high-impact systems" were also modified to remove "ideological constraints".
  • Preventing Ideological Bias: A key executive order, "Preventing Woke AI in the Federal Government," directs federal agencies to procure LLMs that adhere to "Unbiased AI Principles": truth-seeking, historical accuracy, scientific inquiry, and objectivity. Such models must be neutral and nonpartisan, not manipulating responses based on "ideological dogmas such as DEI."

This rapid transformation underscores a critical need: organizations must build AI governance processes that are durable enough to transcend these political cycles and shifting executive priorities.

Federal Enforcement in Action: FTC and EEOC

Despite the lack of a single, comprehensive federal AI law, federal agencies are actively applying existing laws to modern AI challenges.

The Federal Trade Commission (FTC) launched "Operation AI Comply" in September 2024, a coordinated enforcement effort targeting deceptive AI-related practices under the FTC Act's Section 5 authority over "unfair or deceptive practices". The message is clear: deploying AI does not grant legal immunity from traditional consumer protection laws. Case studies like DoNotPay, fined for unsubstantiated marketing claims about being the "world's first robot lawyer," and Rytr LLC, prohibited from marketing fake review generation services, demonstrate that AI marketing demands evidence, not hyperbole, and that companies are liable for enabling others' deceptive practices.

Similarly, the Equal Employment Opportunity Commission (EEOC) is actively targeting AI-driven employment discrimination. Their priority is algorithmic decision-making in hiring and workplace management, asserting a "no black box defense"—meaning opacity in AI systems does not excuse discrimination. The case of iTutorGroup, Inc., which settled for $365,000 for using automated hiring software programmed to reject applicants over specific ages, serves as a stark reminder that coding bias into algorithms violates federal employment law, and automated discrimination does not reduce legal liability.

Building Durable AI Governance Processes

Given this dynamic environment, how can organizations build resilient AI governance? The answer lies in combining robust frameworks with global insights.

The NIST AI RMF Foundation: The NIST AI Risk Management Framework (AI RMF) provides a flexible, principles-based approach to AI governance that enjoys bipartisan support and consensus-driven development. Its voluntary adoption and safe-harbor references in state laws make it a politically resilient, stable foundation for governance in the US. Dr. Grahn argues that the NIST AI RMF remains the optimal framework for durable AI governance processes in the US, even after the recent Action Plan.

The NIST framework organizes AI governance around four core operational functions, mirroring enterprise risk management practices (a code sketch follows the list):

  • GOVERN: Establishing culture, policies, and accountability structures.
  • MAP: Identifying AI systems, their contexts, and potential impacts.
  • MEASURE: Analyzing, assessing, and monitoring AI risks over time.
  • MANAGE: Prioritizing and treating identified risks through controls and responses.
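
As a rough illustration of how these four functions can anchor an organization's own governance tooling, here is a minimal Python sketch. All class names, fields, and the resume-screener example are illustrative assumptions, not part of the NIST specification:

```python
from dataclasses import dataclass, field
from enum import Enum

class RMFFunction(Enum):
    """The four NIST AI RMF core functions."""
    GOVERN = "establish culture, policies, accountability"
    MAP = "identify systems, contexts, impacts"
    MEASURE = "analyze, assess, monitor risks"
    MANAGE = "prioritize and treat risks"

@dataclass
class GovernanceAction:
    function: RMFFunction   # which RMF function this task falls under
    description: str
    owner: str              # accountable team or role

@dataclass
class AISystem:
    name: str
    context: str            # business context, per the MAP function
    actions: list[GovernanceAction] = field(default_factory=list)

# Hypothetical example: concrete tasks for a resume-screening tool
screener = AISystem("resume-screener", "HR candidate triage")
screener.actions.append(GovernanceAction(
    RMFFunction.MAP, "document intended use and affected groups", "AI committee"))
screener.actions.append(GovernanceAction(
    RMFFunction.MEASURE, "run quarterly bias audit", "ML team"))
```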

Leveraging EU Risk Categories: While the US framework is fragmented, organizations can "borrow" from the EU AI Act's proven taxonomy for classifying AI systems into four risk categories: Unacceptable, High, Limited, and Minimal. This classification system, based on the potential impact of AI on individuals' rights and safety, offers a practical tool for US organizations to triage their AI systems effectively.

An Integrated Strategy: The most effective approach for durable governance is to combine NIST processes with EU risk categories. This allows organizations to establish policies based on proven taxonomies, classify systems using EU high-risk categories for mapping, apply testing based on risk classification for measurement, and implement proportional controls for management. This creates a stable governance framework grounded in American processes with global insights.
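
Here is a minimal sketch of what this integration might look like in practice, with EU-style tiers driving proportional controls. The specific control names and the mapping itself are hypothetical examples, not prescribed by either framework:

```python
from enum import IntEnum

class EURiskCategory(IntEnum):
    """EU AI Act risk tiers, borrowed purely as a triage taxonomy."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

# Illustrative mapping from risk tier to MEASURE/MANAGE intensity.
CONTROLS_BY_TIER = {
    EURiskCategory.MINIMAL: ["inventory entry"],
    EURiskCategory.LIMITED: ["inventory entry", "transparency notice"],
    EURiskCategory.HIGH: ["inventory entry", "transparency notice",
                          "pre-deployment bias testing", "human oversight protocol"],
    EURiskCategory.UNACCEPTABLE: ["do not deploy"],
}

def required_controls(tier: EURiskCategory) -> list[str]:
    """Return the proportional controls for a system's risk tier."""
    return CONTROLS_BY_TIER[tier]

print(required_controls(EURiskCategory.HIGH))
```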

A Roadmap for AI Implementation

To put this into action, consider the following roadmap:

  1. Establish an AI Governance Committee: This committee should have cross-functional leadership.
  2. Conduct a Comprehensive AI Inventory: This includes discovering "shadow AI" systems (see the sketch after this list).
  3. Implement Risk Assessment: Utilize EU categories for triage.
  4. Develop Clear Policies: Ensure mandatory documentation standards.
  5. Define Roles and Responsibilities: Include human oversight protocols.
  6. Integrate Continuous Testing and Monitoring Systems.
  7. Institute Vendor Risk Management: Specifically for third-party AI tools.
  8. Foster a Culture of Responsibility: This can be achieved through targeted training programs.
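
Below is a minimal sketch of what a single inventory record supporting steps 2, 3, and 7 might look like; every field, system name, and vendor here is illustrative:

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One row of an AI inventory; fields are illustrative, not prescriptive."""
    system_name: str
    business_owner: str
    vendor: str | None   # third-party AI tool, relevant to step 7
    risk_tier: str       # EU-style tier assigned during step 3 triage
    sanctioned: bool     # False flags a discovered "shadow AI" system

inventory = [
    InventoryEntry("resume-screener", "HR", "AcmeHire", "high", True),
    InventoryEntry("chatbot-summarizer", "Sales", None, "limited", False),  # shadow AI
]

shadow_ai = [entry for entry in inventory if not entry.sanctioned]
print(f"{len(shadow_ai)} unsanctioned system(s) need review")
```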

Success metrics for these efforts include inventory completeness, risk assessment coverage (especially for high-risk systems), incident response time, training effectiveness, and vendor compliance.
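
Several of these metrics reduce to simple coverage ratios over the inventory. The figures in this sketch are invented for illustration:

```python
def coverage(assessed: int, total: int) -> float:
    """Fraction of inventoried systems with a completed risk assessment."""
    return assessed / total if total else 0.0

# Hypothetical figures: 42 of 50 inventoried systems assessed overall,
# and 9 of 10 high-risk systems covered.
print(f"risk assessment coverage: {coverage(42, 50):.0%}")  # 84%
print(f"high-risk coverage:       {coverage(9, 10):.0%}")   # 90%
```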

Conclusion

The journey of AI governance in the US is marked by complexity and political volatility. However, by accepting this permanent complexity, building internal resilience through adaptable frameworks, integrating best practices like the NIST AI RMF and EU risk categories, and learning from current events like the rapid implementation of the AI Action Plan, organizations can forge durable AI governance processes. The goal is not just compliance with current regulations, but the creation of strategic, future-proof governance that can withstand the ever-shifting sands of AI policy.

Ascendient Learning's Generative AI training courses teach organizations how to implement and manage AI initiatives with a strong focus on responsible AI and robust AI governance. Contact us to see how we can help your organization navigate scalable, future-proof AI implementation.

Responsible AI with the NIST AI Risk Management Framework
Practicing Responsible Generative AI
Responsible Leadership of Generative AI Initiatives
Securing & Red-Teaming Generative AI Deployments