Senate AI Roadmap
The Bipartisan Senate AI Working Group recently released its Roadmap for Artificial Intelligence Policy in the U.S. Senate, entitled “Driving U.S. Innovation in Artificial Intelligence.”
As one might expect, the details are few and the ambitions broad. It gave me the distinct sense of a group that simply wants others to know it is doing something, though it is not immediately clear what that something is. It seems that the rest of the world legislates while the United States continues to study the issue, with no clear path for Congress to actually do anything in particular.
As a result, it is difficult to make specific predictions about forthcoming legislation or regulation. But we can at least take note of what the group chose to mention, as it is fair to assume that nothing was put in (or left out) without intention. With that, here are my observations.
First, whatever legislation or regulation emerges (whenever it emerges) is likely to borrow familiar concepts from the EU AI Act around accountability of AI systems. The Roadmap more than once references four concepts as the pillars of AI legislation and regulation: (1) Transparency, (2) Explainability, (3) Testing and (4) Evaluation. In short, there is an expectation that when the public encounters AI or the product of an AI system, people will be able to ask, and be told, what the AI considered; how it arrived at the output the customer was exposed to (whether information or a product, like a new drug or treatment); how the AI was tested to ensure its reliability; and what the provider of the AI system did to evaluate the data going in and the information coming out. That framework closely tracks the EU AI Act.
Second, in the health care space, the emphasis continues to be on patient privacy, now counterbalanced by the need for reliable data to improve health care for the public. The Roadmap encourages initiatives “with an emphasis on making health care and biomedical data available for machine learning and data science research, while carefully addressing the privacy issues raised by the use of AI in this area.” The recognition that good, complete and broad data is necessary for AI to function reliably in health care applications will bump up against principles, enshrined in legislation like the Health Insurance Portability and Accountability Act (HIPAA), that generally treat personal health information with the highest levels of privacy and confidentiality.
Third, and related to the above point, the Roadmap states that “[t]he AI Working Group supports a strong comprehensive federal data privacy law to protect personal information.” Such a law is long overdue, and it will also be essential to ensure that the goal of regulating (and providing specific, limited permission for) the use of personal health data to build robust AI systems is not stymied by a patchwork of state legislation.
Finally, while most AI litigation to date has involved copyright issues stemming from large language models training on, and generating content from, copyrighted material, the Working Group punts on intellectual property issues. Rather than make any specific proposals, it merely suggests that Congress should “[r]eview the results of existing and forthcoming reports from the U.S. Copyright Office and the U.S. Patent and Trademark Office on how AI impacts copyright and intellectual property law, and take action as deemed appropriate to ensure the U.S. continues to lead the world on this front.”
The Roadmap spans approximately 30 pages, and it contains more than just the points that seemed most significant to me; the full document is worth reading for the curious. Ultimately, there is no question that AI is on Congress’s mind. Significant open questions remain, however, as to whether it will do anything about it, and when.
For more information about legal risks and trends in artificial intelligence, including company best practices, reach out to me at dshulman@vedderprice.com.
Publication
May 21, 2024