The Trump Administration’s recent AI pronouncements decry “ideological bias or engineered social agendas” as antithetical to continued American AI leadership. Executive Order 14179, repealing prior Biden Administration Executive Order 14110 on AI safety, reflects that theme, as does Vice President Vance’s speech at the February 11 Paris AI summit. “We feel very strongly,” Vance remarked, “that AI must remain free from ideological bias.” The Trump Administration’s view appears to be that overzealous regulation, likely including nondiscrimination, safety, and transparency rules, puts American AI development at a disadvantage. The release of DeepSeek undoubtedly reinforces such concerns. As White House Press Secretary Karoline Leavitt put it, “[DeepSeek] is a wake-up call to the American AI industry.”

But the Trump Administration’s position presents a possible quandary for private enterprise: compliance with existing state and international AI safety and nondiscrimination rules and guidance may incur U.S. federal regulatory ire. Fortunately, there are ways to navigate the growing conflict. By focusing on binding rules and adopting measures that serve both innovation and safety, regulated entities give themselves leeway to satisfy conflicting regulatory approaches.
The AI-Competition Regulatory Approach
Executive Order 14179 is best understood in relation to the Biden Administration executive order it repealed, which it describes as a “barrier” to “American AI innovation” and “America’s global AI dominance.” Biden-era policy required executive branch adherence to eight “guiding principles and priorities”: safety and security; promoting innovation and competition; responsible development; advancing equity and civil rights; consumer protection; privacy and civil liberties; risk management; and American AI leadership. Under Biden, these guiding principles were to be operationalized through agency-level guidelines aimed at “safe, secure, and trustworthy AI,” as well as changes to the visa application process for AI workers, review of AI use in law enforcement and public benefits contexts, and various other agency directives.
The Trump Administration’s approach marks an intentional departure from the Biden guiding principles; EO 14179’s only stated goal is to “solidify” America’s position “as the global leader in AI.” As a result, it is much shorter than the Biden-era executive order, and it has only two functions: first, in Section 4, it requires agency heads to submit an action plan to sustain and enhance America’s global AI position; second, in Section 5, it directs “all agencies” to review, suspend, and revoke any actions inconsistent with American AI dominance. That competition-focused approach has already shown up in agency rhetoric and policy direction. For instance, FTC Commissioner Melissa Holyoak recently suggested that DeepSeek may have violated unfair competition rules in building its dataset. States aligned with the Trump Administration’s approach have also signaled a competition-focused strategy. The Office of the Texas Attorney General, for example, is investigating allegations that DeepSeek stole Americans’ data in what it describes as a malicious attempt to “undermine American AI dominance.”
The AI-Safety Regulatory Approach
Just as global AI competition has spurred the Trump Administration’s approach, it has also catalyzed AI safety concerns. Rules that address such concerns seek to minimize possible harm to individuals from AI use. A variety of state-level rules fall under that heading. For example, starting in February 2026, under Colorado law, developers and deployers of “high-risk” AI systems will have a duty to “use reasonable care to protect consumers” from the “foreseeable risks of algorithmic discrimination,” defined as differential treatment or impact that “disfavors” an individual or group based on protected categories. Virginia recently followed suit with a similar set of rules. And Maryland takes a kindred approach: impact assessments will be required for “high-risk” AI systems procured or deployed by state government starting July 1, 2025. Unfortunately, a focus on differential treatment or “impact” is unlikely to track the Trump Administration’s approach.
Likewise, state laws that focus on transparency are unlikely to cohere with the global competition lens through which the Trump Administration views AI development. California’s dataset transparency requirement is a prime example of a law aimed at safety rather than at winning an AI arms race. Under California law, starting January 1, 2026, generative AI systems’ data sources and dataset descriptors must be posted on a developer’s website before the system is made publicly available.
States are not the only regulators interested in AI safety. One of the goals of the EU AI Act is to reduce bias and safety risk in high-risk AI systems and in training datasets. As evidence of this commitment, existing restrictions on processing sensitive data are relaxed where “strictly necessary” for bias detection and correction. Existing NIST guidance adopted under President Biden also promotes safety, highlighting the potential of AI to be used in offensive cyberattacks, to generate harmful content, and to develop weapons technology. In each case, safety, rather than innovation at all costs, is the goal.
Charting a Path Forward
Navigating between the AI-safety approach and the Trump Administration’s competition-focused approach will require careful risk-informed planning. At present, and especially since many of the state-level rules in the U.S. do not go into effect until 2026, the actual degree of conflict is low. But as executive orders turn into legislative action, state laws come online, and Europe continues to regulate, the risk of conflicting regulatory priorities will only increase. It is prudent to begin planning for that eventuality.
Step one is to assess which rules are directly binding; noncompliance with binding rules carries more risk than noncompliance with policy preferences. That assessment involves determining where your AI system will be deployed and where you do business. “Doing business” in Colorado while deploying or developing a “high-risk” AI system triggers the Colorado rules. California’s dataset transparency rules apply to entities that produce generative AI systems made available to Californians. The EU AI Act applies broadly to entities that make AI systems available in the EU; are located in the EU; or produce, develop, or import models whose output is used in the EU. Which laws apply will determine how much regulatory conflict a given entity actually faces.
Assessing which rules are binding also involves understanding what risk category your AI model falls into based on plausible use cases. AI systems that are not high-risk or generative face fewer binding rules. In general, AI systems that make “consequential decisions,” rather than perform narrowly defined pattern recognition or procedural tasks, are defined as high-risk. High-risk systems often include AI systems that are components of critical infrastructure; determine educational outcomes, employment, or access to benefits; or influence penal outcomes—all of which, for example, are high-risk use cases under EU AI Act rules. Rules that apply to “generative” AI models rather than “high-risk” models often have broader scoping definitions. For example, California’s dataset transparency rules apply to AI systems that “generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence’s training data,” a fairly broad category of AI system.
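To make that assessment concrete, a deployer might maintain a simple triage record of where each system is deployed and what it does, and map that record to the regimes that warrant closer legal review. The Python sketch below is illustrative only: the regime names, trigger conditions, and effective dates are simplified assumptions, not the statutes’ actual applicability tests, and nothing in it is legal advice.

    # Minimal compliance-triage sketch. The regimes, trigger conditions, and
    # effective dates below are simplified assumptions for illustration, not
    # the statutes' actual applicability tests.
    from dataclasses import dataclass

    @dataclass
    class AISystemProfile:
        deployment_jurisdictions: set        # e.g. {"CO", "CA", "EU"}
        is_generative: bool                  # produces synthetic text, images, audio, or video
        makes_consequential_decisions: bool  # e.g. employment, benefits, credit

    def plausibly_triggered_regimes(profile: AISystemProfile) -> list:
        """Return a rough list of regimes worth a closer legal review."""
        regimes = []
        if "CO" in profile.deployment_jurisdictions and profile.makes_consequential_decisions:
            regimes.append("Colorado high-risk AI duties (reasonable care, effective 2026)")
        if "CA" in profile.deployment_jurisdictions and profile.is_generative:
            regimes.append("California training-data transparency (effective 2026)")
        if "EU" in profile.deployment_jurisdictions:
            tier = "high-risk" if profile.makes_consequential_decisions else "lower-risk"
            regimes.append(f"EU AI Act obligations ({tier} tier)")
        return regimes

    # Example: a resume-screening system deployed in Colorado and the EU.
    profile = AISystemProfile(
        deployment_jurisdictions={"CO", "EU"},
        is_generative=False,
        makes_consequential_decisions=True,
    )
    for regime in plausibly_triggered_regimes(profile):
        print(regime)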
Step two is to recognize that prioritizing AI innovation need not mean a tradeoff with existing safety, transparency, or nondiscrimination imperatives. There are technical and practical steps that can serve both regulatory approaches. Using synthetic training data, for instance, reduces data collection costs and data protection noncompliance risk while mimicking the effectiveness of real data. That aligns innovation goals with safety goals. Another approach is to ensure that decisions made with help from AI ultimately have an informed human in the loop. That may mitigate safety concerns while allowing AI systems to do what they do best: parse large quantities of information for patterns and statistical predictions, minimizing the need for manual review.
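For the human-in-the-loop measure, one minimal pattern is to treat every model output as a recommendation that a named reviewer must approve before any consequential action is taken. The Python sketch below is an assumption-laden illustration of that pattern, not any particular framework’s API.

    # Minimal human-in-the-loop gate: the model may score and recommend, but a
    # consequential action executes only after an informed human approves it.
    # All names and fields here are illustrative assumptions, not a standard API.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Recommendation:
        subject_id: str
        action: str          # e.g. "advance candidate to interview"
        model_score: float   # a statistical prediction, not a decision
        rationale: str       # context the reviewer sees before deciding

    def execute_with_human_review(
        rec: Recommendation,
        reviewer_approves: Callable[[Recommendation], bool],
        apply_action: Callable[[Recommendation], None],
    ) -> bool:
        """Apply the recommended action only if a human reviewer approves it."""
        if reviewer_approves(rec):  # e.g. a review queue, UI prompt, or ticket
            apply_action(rec)
            return True
        return False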
Step three is to monitor legislative developments spurred by the Trump Administration’s position.