February 23, 2026
Thomas Keane, MD, MBA
Assistant Secretary for Technology Policy
National Coordinator for Health Information Technology
U.S. Department of Health and Human Services
330 C Street, SW, 7th Floor
Washington, DC 20024
Re: RIN 0955–AA13
Submitted electronically via http://www.regulations.gov
Dear Assistant Secretary Keane:
The Workgroup for Electronic Data Interchange (WEDI) writes today in response to the “Request for Information: Accelerating the Adoption and Use of Artificial Intelligence as Part of Clinical Care” published in the December 23, 2025, edition of the Federal Register.
WEDI was formed in 1991 by then Department of Health and Human Services (HHS) Secretary Dr. Louis Sullivan to identify opportunities to improve the efficiency of health data exchange. Named in the Health Insurance Portability and Accountability Act (HIPAA) legislation as an advisor to the Secretary of HHS, WEDI is the leading multi-stakeholder authority on the use of health information technology (IT) to efficiently improve health information exchange, enhance care quality, and reduce costs. With a focus on advancing standards for electronic administrative transactions, and promoting data privacy and security, WEDI is recognized and trusted as a formal advisor to the Secretary. Our diverse membership includes health plans, providers, standards development organizations, vendors, federal and state government agencies, and patient advocacy organizations.
WEDI supports the Assistant Secretary for Technology Policy (ASTP)/Office of the National Coordinator for Health Information Technology (ONC) in its ongoing work in advancing interoperability of health information. We share the goals of leveraging current technology to advance capabilities and functions that decrease burden and streamline processes, improving the quality of care while minimizing administrative costs and protecting patient information. We believe that artificial intelligence (AI) technology has the potential to transform the health care sector in the coming years by augmenting the care delivery process, improving data exchange, and reducing administrative burden for all health care stakeholders.
To aid us in developing our response to this request for information (RFI), WEDI conducted a Member Position Advisory (MPA) event on February 12, 2026. Through surveys, interviews, and live events, the MPA process is designed to solicit WEDI member input on topical issues, public and private sector proposals, and government regulations. Individuals representing health plans, providers, standards development organizations, clearinghouses, electronic health record (EHR) vendors, consultants, and other health IT vendors participated in the session.
Introductory Comments
WEDI’s mission and work are driven by easing administrative burden, putting patients at the center of their care, implementing consensus-based, mature standards that support automation, and maintaining appropriate safeguards for privacy, security, and confidentiality. We applaud this effort to solicit industry input on the potential to advance the adoption and use of AI as part of clinical care.
WEDI’s comments are based on key guiding principles that are integral for consideration to advance health IT, data exchange standards, and innovative technologies. As ASTP/ONC explores opportunities to improve the technological capabilities used in the health care environment, it is important that the system:
- Ensures the health information needs of the patient and their caregivers are at the center of the ecosystem.
- Promotes seamless, automated data exchange through mature, clear, and unambiguous standards that have been thoroughly evaluated and demonstrate meaningful return on investment.
- Integrates data exchange efficiently within the health plan, provider, and other end-users’ workflows.
- Maximizes the most current data privacy and security safeguards for protecting patients’ health information.
General Comments on the RFI
In WEDI’s discussions of the RFI questions, several overarching themes emerged that we wish to highlight as follows:
- Establishing Trust
Gaining providers’ trust in new technology is not a new barrier to adoption. With AI, there is broad understanding of the promise it brings to health care and its potential to relieve provider burden. A subset of providers already trusts these tools and sees their utility; many others, however, still need to both understand and trust them. We have not yet reached a tipping point at which providers’ excitement about AI tools outweighs their reservations about them. Providers are moving at the speed of trust, not at the speed of investment and innovation. This reflects a practical adoption model grounded in clinical risk, workflow realities, and patient safety. The maturation and adoption of AI technology can be compared to the adoption of EHRs, which has taken years; even now, a small number of providers have not converted.
- Ensuring Transparency
Transparency of AI technology is essential for users. Providers need a clear understanding of the functionality of the AI tools within their care delivery and any impacts they could have on patient care. Testing results, certification, source of data, integrity of the data set, and various other factors will be critical for addressing providers’ liability, usability, and validity concerns. Effective transparency, while respecting proprietary models, will lead to an increase in the rate of adoption of AI tools.
- Maintaining Data Privacy and Security
AI poses significant risks to privacy and security. Large data sets are necessary to train AI algorithms, and creating those data sets raises privacy, security, and cybersecurity concerns. AI developers are generally not HIPAA covered entities, whereas covered entities are under regulatory obligations to protect and secure protected health information (PHI). Sharing PHI with developers would in most cases require a business associate agreement or new regulation establishing requirements for their use of patient data. Third parties that collect, hold, or transmit PHI should be held to the same standards and accountability as covered entities and business associates. In doing so, ASTP/ONC would close the current loophole in privacy and security related to third parties’ use of patient data.
- Defining the Role of Regulation vs. Innovation
We appreciate the administration's work to bring AI tools to the appropriate areas of health care and to encourage their adoption and use in a standardized manner through regulation. We assert, however, that a balance must be struck in setting AI regulations. An overly restrictive regulatory approach could have the unintended consequence of increasing costs and stifling innovation and competition. Conversely, too lax an approach could raise concerns with data privacy, security, and product safety. A policy framework is needed that provides the flexibility to enable innovation while still ensuring trust, transparency, protection of patient data, and patient safety.
- Enhancing Education
Examples of AI use in health care operations are everywhere, as is misinformation about it, which fuels fear of its adoption and use. Clear education aimed at end users of AI, including patients and providers, is needed, along with industry-wide understanding of AI's capabilities.
RFI Questions and WEDI Responses
- What are the biggest barriers to private sector innovation in AI for health care and its adoption and use in clinical care?
Barriers to the adoption and use of health IT in clinical care are not new concerns for providers. Providers' trust in new technology is a general, ongoing issue in its adoption. Providers want a certain level of assurance that the new technology will perform as marketed and provide the expected value to the clinical or administrative workflow and processes. With AI, the level of trust has a greater impact on providers' plans for incorporating products into their clinical routine, as trust in AI tools is linked to providers' concerns with liability. Appropriate regulations with meaningful guardrails can increase the trust needed for continued adoption of AI tools by providers.
The role of AI in the delivery of patient care varies widely, from administrative functions (scheduling and ambient scribes during patient encounters) to clinical care (interpreting radiologic images and chatbots). The liability a provider may take on while using AI tools is a risk, and in many cases the level of risk is unknown. Providers have concerns regarding what happens when AI does not perform as expected, since they will ultimately be liable for the patient's outcome. As such, AI should be deployed only as a tool and never without a human in the loop. We anticipate that specific legal cases will need to work their way through the courts before a clearer understanding of the ramifications emerges. It is possible that clinicians will limit their use of AI on this point alone.
Providers continue to have concerns with transparency as it relates to AI tools and addressing this will be critical as the industry moves forward with not just AI but other new technology. While providers do not need to know technical specifications of the AI tools, they need a basic understanding of the tool’s functionality and data sources to know how the tool is supposed to work in order to identify any failures or general concerns.
Data privacy and security issues are another barrier to the adoption and use of AI in clinical care. At issue is the current patchwork of state privacy laws which, if in conflict, can preempt the HIPAA Privacy Rule. The myriad overlapping state and HIPAA legal requirements raise compliance costs and divert resources, while also stifling innovation when PHI is involved.
The current fee-for-service reimbursement model serves as another barrier to the adoption and use of AI in clinical care. Additional evaluation is needed on how reimbursement models align with the use of AI and what payment reform could do to advance the adoption of AI use. Further, existing technology implementations may have built-in constraints that restrict or complicate the integration of AI tools. This is another area that ASTP/ONC should further explore.
Organizations are heavily invested in a range of projects, from regulatory compliance to business development. The value of prioritizing AI adoption, and how to align it with existing interoperability needs, must be established. Overcoming these barriers will require action at the federal, state, and organizational levels; addressing these concerns will begin to drive an uptake in AI use.
- What regulatory, payment policy, or programmatic design changes should HHS prioritize to incentivize the effective use of AI in clinical care and why? What HHS regulations, policies, or programs could be revisited to augment your ability to develop or use AI in clinical care? Please provide specific changes and applicable Code of Federal Regulations citations.
With respect to regulatory changes, an overall review of the rulemaking process itself is needed within the context of AI technology. Because AI technology is progressing so rapidly, the current regulatory process is neither designed nor suitable for quickly establishing the requirements and guardrails for its use. ASTP/ONC should explore the use of sub-regulatory mechanisms, technical assistance, and guidance to address emerging policy gaps. These alternatives could bring flexibility to the rulemaking process while still being authoritative. ASTP/ONC should also review existing regulations and the impacts they have on advancing the adoption and use of AI in clinical care, such as the HIPAA Privacy Rule and the upcoming cybersecurity rule.
At the state government level, there is a growing patchwork of proposed rules governing the use of AI and privacy. The federal government should address these state laws as they are enacted by publishing guidance on how each new law reconciles with federal requirements. This would show directly how federal rules interact with state rules, instead of leaving entities operating in those states to make their own interpretations, which may or may not be accurate. This approach was used with the No Surprises Act, where HHS provided guidance on states' authority vs. federal oversight and where the two meet.
The use of sub-regulatory guidance has the flexibility to be updated and adapt more quickly to keep pace with the changing AI technology. Regulations related to AI can be more all-encompassing and reference the guidance provisions. Within the regulations, the requirements should be framed using the five principles of accuracy, transparency, accountability, privacy, and ethics. Each proposed and final rule should identify the impact of its requirements on these principles, providing an analysis of how the rule should be viewed.
Regarding programmatic design changes that HHS should prioritize, there needs to be a focus on the impact of AI on rural and underserved areas. There are infrastructure limitations that inhibit the ability in these areas to fully leverage AI and other advanced technologies, specifically due to the limitations of broadband access, reliable Wi-Fi, and use of smartphones. Access to education to support digital literacy and train the workforce on new technologies must also be prioritized. Without these infrastructure builds, the adoption and use of AI will remain limited in these areas. We strongly encourage ASTP/ONC to use its authority to prioritize infrastructure and education to facilitate the adoption and use of AI.
For payment policy, experience has shown that incorporating reimbursement into payment policy for the use of technology in clinical care supports its adoption and use. An example is Medicare’s payment for certain capital-related costs on the Medicare cost report. This payment policy approach could spur investment and innovation by developers and adoption and use by providers.
- For non-medical devices, we understand that use of AI in clinical care may raise novel legal and implementation issues that challenge existing governance and accountability structures (e.g., relating to liability, indemnification, privacy, and security). What novel legal and implementation issues exist and what role, if any, should HHS play to help address them?
We have interpreted “non-medical devices” to mean devices used outside of clinical settings that can collect, store, and transmit data that support the patient’s clinical treatment and monitoring. These devices include health applications (apps), fitness trackers, health monitors, ambient scribes, chatbots, prescription fillers, etc. We are also aware of the Food and Drug Administration’s (FDA) work on regulating the use of non-medical devices and the Centers for Disease Control and Prevention’s (CDC) work on wearable devices and other technologies.
Our comments are framed on this interpretation of the term “non-medical devices” and the work by the FDA and CDC. We also encourage ASTP/ONC to collaborate with the FDA and CDC in developing definitions and case examples of non-medical devices, which will need to be reviewed on a periodic basis as this technology is continually evolving. Clear distinctions between medical and non-medical devices will also be necessary for establishing evaluation methods of the two.
The use of non-medical devices, including those that incorporate AI, does have potential legal implications, especially related to liability, privacy, and security. As mentioned, potential liability is one of the biggest risks that could hold providers back from using AI tools. In general, legislating or regulating how AI tools will impact liability is not possible, meaning that issues will need to be addressed through the legal system. While the concern of liability may be outside the scope of ASTP/ONC and HHS to regulate, guidance and education are critical for users of AI technologies to understand the risks and combat any misinformation. Again, AI transparency is a key facet of liability.
The need to establish large data sets to train AI models poses privacy, security, and cybersecurity risks. Developers of AI models are typically not HIPAA covered entities, which requires a separate agreement to transfer patient data to them for their use. Additional considerations may be needed to address patient consent for this use of their data, as well as security and cybersecurity measures to protect it. We encourage ASTP/ONC to explore ideas for how it can provide oversight for the collection and use of patient data for the purposes of AI training and model development. We urge ASTP/ONC to consider its role as a convener and to coordinate with the Federal Trade Commission where AI technologies fall outside HIPAA.
AI-enabled clinical care models will require an elevation of patient consent to give patients a full understanding of where AI is being used, how it is being used, and their options for having it involved in their care. If AI tools are contributing information to analysis or decision-making aspects of patient care, a provider must be included in that workflow and evaluate the output of the AI tool. The same is needed for the development of the AI model or algorithm.
AI's application in clinical care needs to remain narrow and must not limit patient and provider engagement and involvement in care. It should always be in a secondary, facilitator role, with the provider remaining in the primary role for interacting clinically, finalizing clinical decisions, and developing treatment plans with the patient. AI should always serve to augment clinical care, never to substitute for it. The benefits of AI lie in its ability to enhance the experiences of providers and patients with the focus on delivering patient care. We encourage ASTP/ONC to develop clear definitions for what constitutes augmented AI care, AI-enabled care, and AI-supported technologies.
Because of the role AI can and does play in many aspects of documentation and data collection, such as patient demographics, medical history, medication lists, etc., we recommend that ASTP/ONC consider developing certification criteria for AI-driven administrative and clinical processes for the ONC EHR Certification Program to facilitate a standardized approach to the use of AI tools. Having certification criteria will provide a level of assurance to users of the technology that it meets specified standards.
- For non-medical devices, what are the most promising AI evaluation methods (pre- and post-deployment), metrics, robustness testing, and other workflow and human-centered evaluation methods for clinical care? Should HHS further support these processes? If so, which mechanisms would be most impactful (e.g., contracts, grants, cooperative agreements, and/or prize competitions)?
Before releasing regulations or requirements on the use of AI, the biggest role ASTP/ONC can play is being a centralized location for testing and evaluating the effectiveness, efficiency, and value of the AI technologies in the delivery of care. We encourage ASTP/ONC to create a testing environment with models and sandboxes where all stakeholders can interact with the technology and identify the benefits and challenges of its use. This activity will increase trust in the products and enthusiasm for their use.
Further understanding is needed of the current practices of using AI to both incorporate non-medical data from patient fitness trackers, health apps, etc. into the clinical record and analyze that data. Consideration also needs to be given to how data generated from multiple sources can be incorporated into the EHR, along with a recognition that the EHR may no longer be the complete system of record. Patients may have better data for their day-to-day condition and treatment through health apps as compared to the episodic data that clinicians capture in their EHRs. The need for exchange mechanisms to bring these data sources together should be explored.
As we look at models to evaluate how well AI-enabled care does relative to its non-AI counterpart, we need to think about tangible outcomes that can be easily measured to identify the distinction. For example, measures such as reduced days in the hospital after a procedure, reduced readmissions, reduced post-discharge visits, increased patient satisfaction, and increased patient adherence to treatment could be deployed. The success of AI in managing patient care could also involve the use of data generated from non-medical devices.
- How can HHS best support private sector activities (e.g., accreditation, certification, industry-driven testing, and credentialing) to promote innovative and effective AI use in clinical care?
There are various ways in which ASTP/ONC can support private sector activities to promote innovative and effective AI use in clinical care. One option is to add AI certification criteria to the ONC Certification Program. It is our understanding that some providers find the current Decision Support Intervention (DSI) certification requirement with the source attributes and the intervention risk management practices valuable in making informed decisions about AI purchases, implementation, and use. As such, we urge ASTP/ONC to reconsider its current efforts to deregulate DSI certification criteria.
Other opportunities include creating and maintaining testing environments and sandboxes for industry use, supporting current accreditation or certification bodies to develop programs, and developing standards for AI-generated data. Providing a test bed of synthetic data that can be used to test models and understand their effectiveness would give users of the tools assurance that they have met testing thresholds and certain criteria have been met. Authorized testing environments would also allow innovators to quickly put their models through their paces and demonstrate they are working correctly. These opportunities will further strengthen trust if ASTP/ONC-initiated efforts are transparent and testing environment reports are made public.
If ASTP/ONC defers on incorporating AI into its Certification Program, then industry-driven testing must be supported with close collaboration between government and the private sector. Testing will be central to AI development, transparency of models, and adoption of its use, and will ensure that models demonstrate they meet specified requirements. An appropriate balance should be struck between government oversight and industry-driven initiatives to prevent stifling innovation.
A framework of data standards for AI-generated data is needed so users of the data understand its fidelity, including where it came from, how it originated, any changes to it during transfer, AI-generated labels, etc. This approach would be similar to the Patient Access application programming interface (API), which supports knowing how the data flowed between different entities, who maintained it, and where it originated. The provenance of patient data is critical, and users of the data need to know whether the data are real or synthetic, whether they were derived from other data, and what role an AI algorithm played in their production.
With this in mind, we recommend that ASTP/ONC develop a governance framework and include in it a safe harbor for those using AI tools that meet the framework's specifications pertaining to data provenance, transparency, testing, ongoing monitoring, and other factors for data integrity. Best practices, model language, model requirements, and warning labels for both the provider and the patient should also be included in the framework's resources. Overall, trust must be established in both the AI tools and the framework, or adoption and use could suffer.
- Where have AI tools deployed in clinical care met or exceeded performance and cost expectations and where have they fallen short? What kinds of novel AI tools would have the greatest potential to improve health care outcomes, give new insights on quality, and help reduce costs?
Some AI tools have fallen short in their overall approach to engaging with users. Limited transparency of functionality, lack of trust in the data sources used to train models, and limited education have left many users skeptical of their value. Developers could take a more active role in demonstrating the effectiveness of the tools, sharing details on how the product works, and providing more robust implementation support. Providers need a clear understanding of how the tools will support and enhance their delivery of care, and administrative staff need a better understanding of their role in working with AI-based technology.
A specific example of a promising AI technology falling short for some providers is ambient scribes. Errors have occurred with ambient scribes, and lawsuits have been filed against the developers. Hospitals, by contrast, have reported positive experiences with ambient scribes, seeing the technology reduce some of the burden on providers while improving patient satisfaction. This demonstrates how a “one-size-fits-all” approach does not work in health care and approaches tailored specifically to the end users are needed to support adoption of new technology.
Areas where clinical AI tools have exceeded expectations include identifying risks to patients, interpreting studies and reports, and supporting documentation. On the administrative side, successes have been seen with patient scheduling, projections for capacity management, and template management for providers, while AI tools for managing prior authorization have demonstrated room for improvement.
- Which role(s), decision maker(s), or governing bodies within health care organizations have the most influence on the adoption of AI for clinical care? What are the primary administrative hurdles to the adoption of AI in clinical care?
Administrative hurdles to the adoption of AI tools vary across organizations, as does their approach to addressing them. Known hurdles relate to purchase and implementation, including identifying tool options, procurement, contracting, governance, and training. Any new AI tool, championed by one or more groups within an organization, may need to be translated across the organization, as the use of these tools may not be siloed within the organization. Other hurdles include AI reliability, robustness of the tools, transparency of how the tools function, evidence of the tools delivering positive results, clearly defined limitations on AI use, and safety within patient care.
Some provider organizations have set up formal governing bodies to review and monitor AI use cases, review potential products, and approve purchases. These administrative processes require time and resources, and the hurdles could be especially challenging for small and rural organizations, where staff may have dual responsibilities or where dedicated staff to participate in these functions are lacking. As well, some organizations' policies may not permit AI tools to access PHI. As an example, certain value-based care contracts may prohibit an AI tool from reviewing PHI.
Small and rural organizations also face infrastructure challenges that potentially could benefit from the deployment of AI technology. Appropriate infrastructure investment to support these organizations is necessary to ensure they have access to these tools and the benefits they can provide.
- Where would enhanced interoperability widen market opportunities, fuel research, and accelerate the development of AI for clinical care? Please consider specific data types, data standards, and benchmarking tools.
Interoperability is the priority: many AI algorithms depend on interoperable data exchange to supply the foundational data that drives AI tools. True interoperability will come with the ability to link patient information across different EHRs, clinical locations, and other organizations managing patient data, producing a complete longitudinal record. Standardizing EHRs, supporting standardized data exchange between EHRs, using trusted health care terminologies, and embedding AI-driven process feeds will facilitate this state. Part of ensuring trust in AI tools is having consistent privacy and security standards that are integrated into data exchange processes.
- What challenges within health care do patients and caregivers wish to see addressed by the adoption and use of AI in clinical care? Equally, what concerns do patients and caregivers have related to the adoption and use of AI in clinical care?
The overall challenge of addressing patients’ and caregivers’ wishes is that there is no one-size-fits-all solution. We have heard situations where patients have been unhappy with AI tools, as in the example of lawsuits involving ambient scribes. We have also heard examples where AI tools have enhanced the patient experience.
There are also opportunities where AI could make a significant positive impact for patients, including decreasing delays in access to care, scheduling, diagnostic support, care coordination, care planning across providers, referrals, medication reconciliation, cancer detection, better interpretations of images, and reviews for rare disease flags. These are all areas where AI can assist in closing gaps in care and provide real benefits without stepping into the area of diagnosis.
Prior authorization is one area where there is significant interest in how AI can improve current processes. One potential benefit would be more accurate and appropriate data being identified and supplied by the provider and faster response times from health plans. There are use cases where there is evidence of medical necessity or available clinical information within the payers’ system that can be used for the review, which would negate the need for the provider to send the information as an attachment, generating a timelier response. At the same time, there is growing concern regarding the use of AI for denials. The sole use of AI should not be the determining factor in a health plan’s care or coverage determinations, and instead, human, licensed providers should have the final say.
Another positive example of AI use with patients is the ability to take complex clinical or administrative information and turn it into plain language for patients to better understand. As an example, a payer may use this technology to create patient-friendly versions of evidence of coverage and benefits coverage information. Again, trusted health care terminologies can play a major role here.
- Are there specific areas of AI research that HHS should prioritize to accelerate the adoption of AI as part of clinical care?
- Are there published findings about the impact of adopted AI tools and their use in clinical care?
- How does the literature approach the costs, benefits, and transfers of using AI as part of clinical care?
Testing of AI tools under a government or public-private partnership governance structure will enable organizations to adopt and use AI technology with confidence that it will perform as expected. Whenever possible, use cases identified for AI should include relieving administrative burden, such as ambient scribes, charting, scheduling, and reducing providers' prior authorization burden. While most AI research focuses on model performance and less on implementation, there is a need to move beyond innovation and focus more on the real-world impact, economic implications, and implementation science of AI technology. Research on AI should focus on what happens after deployment of the tools, performance of tools in the real world vs. the lab, and the governance structures necessary for sustained benefits.
Published research often focuses on diagnostic accuracy and predictive performance, but adoption also depends on functional factors such as reduced clinician burden, favorable net cost, positive health outcomes, and reduced variation in care, to name a few. The ecosystem needs evidence and a body of knowledge about AI outcomes and results, as well as use case-specific findings of what works and does not work in the real world, and why.
ASTP/ONC can lay the groundwork for demonstrating the trustworthiness and reliability of AI tools, which will in turn make implementation decisions easier for all stakeholders.
Solicitation of Public Comments
- Regulation
As the nation’s principal health regulator, HHS helps shape the environment in which AI for clinical care is developed, evaluated, and deployed. HHS seeks to establish a regulatory posture on AI that is well understood, predictable, and proportionate to any risks presented to enable rapid innovation while protecting patients and the confidentiality of their identifiable health information and maintaining public trust. We seek feedback on how current HHS regulations impact AI adoption and use for clinical care.
Static algorithms are better suited to the current regulatory framework, whereas continuously learning and evolving models are ill-suited to a drawn-out and unpredictable approval schedule. Current frameworks for evaluating AI, such as the FDA's medical device approval processes, are more appropriate for static AI. The challenge is that creating a separate regulatory framework for AI could create redundancies and inefficiencies. One solution would be to align AI oversight with existing frameworks, incorporating foundational aspects of HIPAA and the FDA regulatory process.
Regulations related to AI also require inter-agency, whole-of-government collaboration, including the FDA, the CDC, the Centers for Medicare & Medicaid Services, and others, to fully address medical devices, non-medical devices, clinical decision support, privacy, security, and cybersecurity.
We know that the current regulatory process is drawn out, and this does not always meet the needs of fast-paced technology like AI. As stated, we encourage you to explore the use of sub-regulatory action to provide guidance or standardization in support of appropriate AI adoption.
Further, we recommend that HHS prioritize the leveraging of AI to reduce administrative burden on patients, providers, and payers, in addition to improving clinical care.
- Reimbursement
HHS’s payment policies and programs have massive effects on how health care is delivered in the United States, often with unintended consequences. Hypothetically, if a payer takes financial risk for the long-term health and health costs of an individual, that payer has an inherent incentive to promote access to the highest-value interventions for patients. Under government-designed and -dictated fee-for-service regimes, however, coverage and reimbursement decisions are slow. Rarely does covering new innovations reduce net spending; and waste, fraud, and abuse are difficult to prevent, often leading to massive spending bubbles on concentrated items or services that are not commensurate with the value of such products. Given the inherent flaws in legacy payment systems, we seek to ensure that the potential promise of AI innovations is not diminished through inertia and instead that such payment systems are modernized to meet the needs of a changing health care system. We seek feedback on payment policy changes that ensure payers have the incentive and ability to promote access to high-value AI clinical interventions, foster competition among clinical care AI tool builders, and accelerate access to and affordability of AI tools for clinical care.
Specific to value-based care arrangements, providers need quicker access to better data to determine how they are performing in their value-based contracts and to take actions to improve that performance. Earlier access to better data permits clinicians to make necessary modifications to better meet the measures and incentives and, ultimately, to provide improved care to patients. Use of AI in value-based models is one area where this innovation could improve data collection, data reporting, and clinician performance.
- Research & Development
HHS supports one of the world’s largest health research ecosystems, catalyzing innovation to supplement the market. By enabling applied AI research and development, care delivery research and implementation science, as well as AI entrepreneurship in health care, we can better translate AI technologies from concept to clinical use. We seek input on ways in which HHS may invest in research and development (including public-private partnerships and cooperative research and development agreements (CRADAs)) to integrate AI in care delivery and create new, long-term market opportunities that improve the health and wellbeing of all Americans.
As stated earlier, ASTP/ONC can support the transition of AI technologies from concept to clinical use by providing testing environments and resources for developers and users. Developing methods for surveillance, tracking, and monitoring of AI-based tools to evaluate and demonstrate performance, including metrics related to time, cost, and quality, could result in increased adoption and use. Trust will be built through both testing environments and public access to testing reports.
Conclusion
WEDI applauds ASTP/ONC’s efforts to solicit industry opinions on the potential impact that AI will have on the health care ecosystem and actions the federal government should take to advance its adoption and use. We believe AI currently has a significant impact on the health care sector and that impact will exponentially grow in the coming years, affecting patients, providers, payers, and others. WEDI shares your commitment to understanding the AI environment and how AI technology can augment the care delivery process, improve data exchange, and reduce administrative burden for all stakeholders. We look forward to working with you as the development and implementation of AI-related regulations and policies continue.
We appreciate the opportunity to share our perspective on this RFI. We hope our comments and recommendations will serve to assist ASTP/ONC as it moves forward with this work. Please contact Robert Tennant, WEDI Executive Director, at rtennant@WEDI.org with any questions on these comments and recommendations.
Sincerely,
/s/
Merri-Lee Stine
Chair, WEDI
cc: WEDI Board of Directors
