For those closely following the rapid evolution of Artificial Intelligence, the recent actions by the US government are certainly front and center. On July 23, 2025, the White House unveiled "Winning the Race: America's AI Action Plan," alongside three accompanying Executive Orders (EOs). This comprehensive plan marks a significant shift in the federal approach to AI governance, building on President Trump's Executive Order 14179 from January 2025, which called for an "action plan" to bolster American global AI dominance.
Understanding and implementing durable AI governance processes is not just a regulatory hurdle, but a strategic imperative. As a practitioner deeply immersed in the nuances of technology and its societal implications, Dr. Dan Grahn has observed a fascinating divergence in global AI governance approaches, particularly between the European Union (EU) and the United States, which profoundly impacts how we build resilient AI strategies.
This article is based on a webinar presented by Dr. Dan Grahn on July 24, 2025. You can watch the full 1-hour webinar here - Executing Orders: Responsible AI Governance Meets Executive Actions. You can also watch our shorter clips:
When it comes to AI governance, the EU and the US are charting distinctly different courses. The EU is embracing a centralized, regulatory framework, akin to building a "fortress" around AI use. This approach is characterized by a comprehensive legal framework with clear boundaries and significant penalties, such as fines up to €35 million or 7% of global revenue, and a harmonized approach across its 27 member states.
In contrast, the US is adopting a more decentralized, market-driven model—a "frontier" approach. The American AI regulatory landscape is a dynamic and, at times, chaotic mix of federal executive orders, state legislation, and enforcement actions. This fragmentation means that instead of a single, overarching law, organizations face a patchwork of more than 50 different state-level approaches. The absence of comprehensive federal AI legislation has created a "federal vacuum" that states are rapidly filling, with over 45 states introducing nearly 70 AI bills in 2024 alone. States like Colorado, California, and Texas are developing unique approaches, focusing on duties of care, consumer rights, and targeted prohibitions respectively. Multi-jurisdictional compliance has become a permanent operational requirement for businesses.
A notable illustration of this fragmented approach was the Senate's rejection of a federal AI regulation moratorium on July 1, 2025, by a vote of 99-1. This outcome effectively solidified the reality that state-level regulation will continue to spread, reinforcing the principle of federalism that is core to American governance.
Presidential transitions in the US can lead to immediate and significant shifts in AI policy, directly impacting business strategies. We've seen this play out with varying executive orders: from Trump's EO 13859 in 2019 focusing on a light regulatory touch, to Biden's EO 14110 in 2023 emphasizing a comprehensive risk framework, and then a dramatic shift again with Trump's EO 14179 in January 2025, which prioritized barrier removal and replaced the previous "Safe, Secure, and Trustworthy AI" directive. Most recently, on July 23, 2025, "America's AI Action Plan" was released, a 23-page strategic document with over 90 federal policy recommendations aimed at accelerating innovation, building infrastructure, and establishing international leadership in AI.
The main points of the US AI Action Plan fall under its three stated aims: accelerating innovation, building infrastructure, and establishing international leadership in AI.
This rapid transformation underscores a critical need: organizations must build AI governance processes that are durable enough to transcend these political cycles and shifting executive priorities.
Despite the lack of a single, comprehensive federal AI law, federal agencies are actively applying existing laws to modern AI challenges.
The Federal Trade Commission (FTC) has launched "Operation AI Comply," a coordinated enforcement effort targeting deceptive AI-related practices. This initiative, announced in September 2024, relies on the FTC Act's Section 5 authority over "unfair or deceptive practices." The message is clear: deploying AI does not grant legal immunity from traditional consumer protection laws. Case studies like DoNotPay, fined for unsubstantiated marketing claims about being the "world's first robot lawyer," and Rytr LLC, prohibited from marketing fake review generation services, demonstrate that AI marketing demands evidence, not hyperbole, and that companies are liable for enabling others' deceptive practices.
Similarly, the Equal Employment Opportunity Commission (EEOC) is actively targeting AI-driven employment discrimination. Their priority is algorithmic decision-making in hiring and workplace management, asserting a "no black box defense"—meaning opacity in AI systems does not excuse discrimination. The case of iTutorGroup, Inc., which settled for $365,000 for using automated hiring software programmed to reject applicants over specific ages, serves as a stark reminder that coding bias into algorithms violates federal employment law, and automated discrimination does not reduce legal liability.
Given this dynamic environment, how can organizations build resilient AI governance? The answer lies in combining robust frameworks with global insights.
The NIST AI RMF Foundation: The NIST AI Risk Management Framework (AI RMF) provides a flexible, principles-based approach to AI governance that enjoys bipartisan support and consensus-driven development. Its voluntary adoption and safe harbor references in state laws make it a politically resilient and stable foundation for governance in the US. Dr. Grahn firmly believes the NIST AI RMF is the optimal framework for durable AI governance processes in the US, even after the recent Action Plan.
The NIST framework organizes AI governance around four core operational functions, mirroring enterprise risk management practices: Govern (establish accountability structures and policies), Map (inventory AI systems and their contexts), Measure (test and evaluate identified risks), and Manage (prioritize and respond to those risks).
Leveraging EU Risk Categories: While the US framework is fragmented, organizations can "borrow" from the EU AI Act's proven taxonomy for classifying AI systems into four risk categories: Unacceptable, High, Limited, and Minimal. This classification system, based on the potential impact of AI on individuals' rights and safety, offers a practical tool for US organizations to triage their AI systems effectively.
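To make this concrete, here is a minimal sketch of what such a triage might look like in code. This is an illustration only, not the Act's legal test: the domain lists and the `triage` function are hypothetical simplifications of the Act's actual prohibited-use and high-risk categories.

```python
from enum import Enum

class RiskCategory(Enum):
    """The EU AI Act's four-tier risk taxonomy."""
    UNACCEPTABLE = "unacceptable"  # prohibited uses, e.g. social scoring
    HIGH = "high"                  # e.g. hiring, credit, critical infrastructure
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters, recommendation widgets

# Illustrative domain lists; a real policy would track the Act's own categories.
PROHIBITED_DOMAINS = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"hiring", "credit_scoring", "education",
                     "critical_infrastructure", "law_enforcement"}

def triage(domain: str, interacts_with_humans: bool) -> RiskCategory:
    """Assign a coarse risk category to an AI system for inventory triage."""
    if domain in PROHIBITED_DOMAINS:
        return RiskCategory.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskCategory.HIGH
    if interacts_with_humans:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL

print(triage("hiring", True).value)  # high
```

Even this crude version gives a defensible first pass over an AI inventory, flagging which systems need deeper review.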
An Integrated Strategy: The most effective approach for durable governance is to combine NIST processes with EU risk categories. This allows organizations to establish policies based on proven taxonomies, classify systems using EU high-risk categories for mapping, apply testing based on risk classification for measurement, and implement proportional controls for management. This creates a stable governance framework grounded in American processes with global insights.
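One lightweight way to make controls proportional is a simple lookup from risk tier to required governance activities. The tiers below mirror the EU categories; the specific controls listed are illustrative assumptions, not quotations from either the NIST AI RMF or the EU AI Act.

```python
# Hypothetical control mapping: EU risk tiers drive the rigor of the
# NIST AI RMF "measure" and "manage" activities for each system.
CONTROLS = {
    "unacceptable": ["decommission or redesign before any deployment"],
    "high": ["pre-deployment bias testing", "documented human oversight",
             "incident response plan", "periodic third-party audit"],
    "limited": ["user disclosure that AI is in use", "opt-out mechanism"],
    "minimal": ["basic inventory entry", "standard change management"],
}

def required_controls(risk_tier: str) -> list[str]:
    """Return the proportional controls for a system's risk tier."""
    return CONTROLS[risk_tier]

print(required_controls("high"))
```

Keeping the mapping in one place means a policy change (say, a new state audit requirement for high-risk systems) is a one-line edit rather than a rewrite of every system's governance plan.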
To put this into action, consider the following roadmap:
Success metrics for these efforts include inventory completeness, risk assessment coverage (especially for high-risk systems), incident response time, training effectiveness, and vendor compliance.
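Several of these metrics reduce to simple ratios that can be tracked on a dashboard. A minimal sketch follows; the function names and example counts are hypothetical, not from the webinar.

```python
def inventory_completeness(catalogued: int, discovered: int) -> float:
    """Share of known AI systems that appear in the governance inventory."""
    return catalogued / discovered if discovered else 0.0

def risk_assessment_coverage(assessed: int, total_high_risk: int) -> float:
    """Share of high-risk systems with a completed risk assessment."""
    return assessed / total_high_risk if total_high_risk else 1.0

print(f"{inventory_completeness(45, 50):.0%}")   # 90%
print(f"{risk_assessment_coverage(12, 12):.0%}") # 100%
```

Trending these ratios quarter over quarter gives leadership an early warning when shadow AI adoption outpaces governance.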
The journey of AI governance in the US is marked by complexity and political volatility. However, by accepting this permanent complexity, building internal resilience through adaptable frameworks, integrating best practices like the NIST AI RMF and EU risk categories, and learning from current events like the rapid implementation of the AI Action Plan, organizations can forge durable AI governance processes. The goal is not just compliance with current regulations, but the creation of strategic, future-proof governance that can withstand the ever-shifting sands of AI policy.
Ascendient Learning's Generative AI training courses teach organizations how to implement and manage AI initiatives with a strong focus on responsible AI and robust AI governance. Contact us to see how we can help your organization navigate scalable, future-proof AI implementation.