
What Biden’s AI executive order means for biotech and healthcare

In early November, President Biden issued a historic executive order on artificial intelligence in pursuit of what his administration calls “safe, secure, and trustworthy” AI. The milestone seems imperative given the technology’s rapid evolution and adoption across industries, and a regulatory gauntlet that will ultimately impact millions of Americans. That includes repercussions for the health and biotech industries leveraging AI to innovate as part of a health system that more than 70% of Americans say fails them.

Biden’s AI executive order, which mentions health or healthcare 33 times, is an opening shot by the federal government. It’s no surprise that healthcare plays a spotlight role in the conversation. “To protect consumers while ensuring that AI can make Americans better off, the president directs the following actions: Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs,” says the Biden administration.

This amounts to both a catch-up and a forward-looking call to build a blueprint for the future of drug development and patient care—one where shoddy algorithms are less likely to muck up the work of making new medicines and less likely to harbor biases against certain groups of patients.

So far, industry groups are in a holding pattern on their response. Spokespeople for the Pharmaceutical Research and Manufacturers of America (PhRMA) and the Biotechnology Innovation Organization—the largest biopharma trade organizations in the country—said in emailed statements that they are still parsing the order's full implications and gathering input from members on their recommendations to the government.

“PhRMA looks forward to engaging with the administration and other stakeholders to advance a regulatory framework that supports the safe and appropriate use of AI,” says a PhRMA spokesperson.

It’s an effort that will take time and considerable debate given that AI is already prominent in the healthcare industry. Drugmakers have been using AI and machine learning to discover new biological targets for experimental treatments for years. Doctors and hospitals have similarly relied on algorithms to guide patient care, from diagnosis to sick bed allocation, with 30% of U.S. radiology departments using AI to take a first pass at reading X-ray and MRI images to sniff out telltale abnormalities like cancerous tumors.

In drug development, tech giants from Apple to Google are collaborating with academic institutions and some of the world's largest pharmaceutical companies to gather real-world evidence—such as biometric and other data collected in real time outside of a clinical trial setting—through wearable devices, yielding insights for drug development and patient care. That, in itself, presents medical privacy issues, which will play a central role in regulating AI and how the industry may use people's personal information.

The constellation of stakeholders in biotech and medicine will weigh in on what they consider the correct balance of protecting public safety and privacy versus stifling AI-driven innovation. The federal government has its work cut out for it, too, as the executive order directs the Department of Health and Human Services to establish “a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI,” in addition to a National AI Research Resource meant to align best practices and healthcare data under the Biden administration’s proposed AI Bill of Rights.

How it all shakes out will take decades to determine. But the table has officially been set for the future of AI in biotech and healthcare.


This article was written by Sy Mukherjee from Fast Company and was legally licensed through the DiveMarketplace by Industry Dive. Please direct all licensing questions to legal@industrydive.com.
