
AI Regulation and Legal Trends in the U.S., EU, and China

Gabriele Iuvinale

The Evolving Role of AI/ML in Healthcare

The use of artificial intelligence/machine learning (AI/ML) in healthcare is evolving rapidly and introducing new challenges. Medical devices have used AI for diagnostics for decades, but we are also seeing new and innovative applications, including generative AI within organizations, whether for coding, mining data for insights and trends, or many other purposes.


GettyImages

The importance of AI and ML in healthcare cannot be overstated. Manufacturers use these technologies every day to develop new or enhanced products that assist healthcare providers and improve patient care. However, the rapid advancement of AI/ML has also led to significant regulatory and legal challenges, as well as policy shifts, in the U.S. and abroad, including in Europe and China. On top of these regulations, existing privacy and cybersecurity risks and intellectual property (IP) considerations remain relevant, with active enforcement and litigation trending in these areas. (See also our FDA-focused alerts for FDA considerations.)


U.S. AI Regulations: Federal and State Developments

In the U.S., a Biden-era executive order on AI emphasizing consumer protection and guardrails was replaced in January with Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," which, as it self-describes, is intended to minimize the impact of federal rulemaking on AI innovation. This follows bipartisan efforts in 2024 establishing the AI Safety Institute at the National Institute of Standards and Technology (NIST), which has developed an AI framework similar to other NIST frameworks. Although federal legislation on private-sector AI use has been proposed, it has gained little momentum to date, with many legislators reluctant to erect barriers to the technology. In lieu of federal action, states such as California, Colorado, and Utah have passed their own laws regulating AI systems.


Most notably for the healthcare sector, the Colorado AI Act focuses on high-risk AI systems (including those that "are a substantial factor in making [a] consequential decision"), requiring developers to provide detailed documentation and facilitate impact assessments. Though the law exempts certain FDA-regulated products that meet "substantially equivalent" standards, as well as HIPAA-regulated entities making non-high-risk healthcare recommendations, it is very broad, and the impact of these exemptions is uncertain. Due to concerns about overbroad language and ambiguity, a task force was appointed to evaluate the law and recommend changes, and some recommendations have already been proposed, though it remains to be seen how they will be resolved. Additional states may follow Colorado's path, which echoes some of the same principles present in European legislation.


AI Regulation in the European Union

In the European Union (EU), the EU AI Act and the General Data Protection Regulation (GDPR) are key regulatory frameworks governing AI. The EU AI Act defines AI systems and places a high regulatory burden on high-risk AI systems, which include many medical devices. The Act emphasizes risk management, data governance, technical documentation, and transparency throughout the AI system's lifecycle. High-risk AI systems also require conformity assessments, similar to the notified body requirements for medical devices. Medical device manufacturers must comply with these rules by August 2, 2026, and in many cases will face additional obligations layered over current medical device regulations.


China’s Approach to AI Regulation

China is also active in AI regulation, weighing measures that balance AI safety and security with its innovation and leadership goals. To date, Chinese regulation has largely focused on generative AI, though AI development more broadly will likely see further regulation in time. Much like recent U.S. policymaking, China appears to be seeking to balance AI safeguards with encouraging innovation.


Intellectual Property Considerations

Intellectual property (IP) disputes are another active area in AI law, with numerous cases ongoing in the U.S. related to the use of copyrighted works to train AI/ML models. IP considerations cut both ways: unmanaged use of generative AI tools risks the unknowing loss of IP or trade secrets, while life sciences companies must also consider carefully whether IP and other data-ownership rights restrict their use of data to develop or enhance their own AI/ML models. As case law develops, there will be more clarity on where these lines are drawn.


Privacy and Cybersecurity Risks in AI Development

Additionally, privacy and cybersecurity considerations continue to shape both the use and the development of AI/ML. The use of personal data, particularly health data, to train AI remains a significant concern for regulators in the U.S. and abroad. The use of such information as inputs to AI/ML tools also raises privacy and cybersecurity concerns, as companies wrestle with employee and vendor use of AI tools to perform work; when those tools are not appropriately vetted, IP protection and the privacy of input data may be at risk.


Fortunately, these risks can be mitigated by staying informed and planning ahead: implementing policies and training, conducting vendor risk assessments, understanding applicable privacy requirements, and negotiating appropriate contract terms.

