Utah recently made a big move—allowing AI systems to directly renew prescriptions for patients with chronic illnesses without any doctor signatures. This is the first time in American medical history that the "prescription authority" has been handed over to machines.



How does it work? Patients log into an online system, where AI reviews medication records, conducts a consultation, and if deemed safe, directly sends the prescription to the pharmacy. Simple and straightforward, but it sounds a bit alarming.

Currently, this pilot program is limited to 190 common medications, excluding sensitive drugs like painkillers, ADHD medications, and injectables, to control risks. As a result, Utah has become the first state in the U.S. to allow AI to independently handle prescriptions.
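To make that workflow concrete, here is a minimal sketch of what the renewal check could look like. It is purely illustrative: the drug names, the whitelist, and the safety rules below are assumptions, not published details of the Utah system.

```python
from dataclasses import dataclass

# Hypothetical illustration only; these names and rules are assumptions,
# not details published about the Utah pilot.
APPROVED_DRUGS = {"metformin", "lisinopril", "levothyroxine"}   # stand-in for the ~190 allowed drugs
EXCLUDED_CLASSES = {"opioid", "stimulant", "injectable"}        # e.g. painkillers, ADHD meds, injectables

@dataclass
class RenewalRequest:
    drug_name: str
    drug_class: str
    months_since_last_review: int
    reported_side_effects: bool

def review_renewal(req: RenewalRequest) -> str:
    """Return 'approve' to send the refill to the pharmacy,
    or 'escalate' to route the request to a human clinician."""
    if req.drug_name not in APPROVED_DRUGS:
        return "escalate"   # not on the pilot's whitelist
    if req.drug_class in EXCLUDED_CLASSES:
        return "escalate"   # sensitive drug classes stay with doctors
    if req.reported_side_effects or req.months_since_last_review > 12:
        return "escalate"   # anything ambiguous goes back to a human
    return "approve"

print(review_renewal(RenewalRequest("metformin", "biguanide", 6, False)))  # -> approve
```

The design point, under these assumptions, is simply that the system only ever approves or escalates: anything outside the whitelist or flagged as risky falls back to a human clinician rather than being decided by the machine.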

Why do this? The logic from the state government and startups is quite direct: healthcare costs are too high, and there’s a shortage of doctors, especially in rural areas with limited medical resources. Automating routine medication renewals can ease the burden on doctors, and patients won’t be forced to interrupt treatment due to administrative delays. It sounds like a way to open a window for new startups within the existing regulatory framework, giving them a chance to test the waters.

If this approach is widely adopted, the overall efficiency of the healthcare system could improve significantly, but it will also depend on how regulation and oversight are handled—after all, safety is a top priority.
OneBlockAtATimevip
· 01-07 07:52
AI prescribing without a doctor's signature? That really does threaten doctors' jobs now. They call it automation, but honestly it's just shifting responsibility onto machines. Starting with 190 medications is fine, but if the AI gets it wrong, who's responsible? I get that medical resources are stretched thin, but cutting corners like this isn't the answer. Feels like another experiment by capital in the healthcare sector. Then again, rural patients have been stuck waiting on paperwork for too long, so it's a bit of a dilemma. Can regulation keep up with this wave? I'm skeptical. In my view, a doctor should still make the final call and AI should only assist. Putting cost ahead of patient safety? That's backwards. A pilot like this is bound to run into problems sooner or later. Bet five bucks on it.
MEVSandwichVictimvip
· 01-07 07:49
Hey, AI prescribing medication? How short on doctors are they... --- Only letting it handle 190 kinds of medicine, still pretty bold. But honestly, it does save a lot of hassle. --- Feels like capital finding "legal" loopholes again... healthcare keeps coming up with new tricks. --- Honestly, a little scared. What if something really goes wrong? Does the AI take the blame? --- I get that rural areas are short on medical resources, but replacing a doctor's judgment with AI? Something feels off... --- Wow, they've really got nerve. What if the database gets hacked one day? Patients won't even know what they're supposed to take. --- It's just refilling chronic meds, so no need for a doctor to sign off every time, but handing it all to machines still makes me uneasy. --- The medical system is being pushed into this by costs. Pretty ironic. --- Once a regulatory framework is in place, startups get their chance to work the gaps. I know this logic all too well.
MEVSupportGroupvip
· 01-07 07:42
Oops, AI prescribing without a doctor's signature? Is this about saving money or genuinely trusting machines? Imagine the trouble if the system misreads a single data point... Limiting it to 190 medications is cautious, sure, but once this door is open, can it really be kept under control? Rural healthcare genuinely lacks staff, but swapping in AI for a doctor's judgment... still feels shaky. I understand healthcare costs are high, but you can't just hand it straight over to machines; that's gambling with people's lives. When regulation can't keep up and something goes wrong, who takes the blame? Certainly not the coders. Feels like another trick by capital to work the system under the banner of "efficiency." If this happened back home it would have been called out long ago. How are they this bold in the US...
GasFeeSobbervip
· 01-07 07:41
Speaking of AI prescribing, it sounds convenient, but something doesn't sit right. Wait, what if the AI makes a mistake? Who's responsible then? On the other hand, doctors really are overworked, so offloading some of the routine work is understandable. 190 medications looks fairly restrained, but is that threshold really enough... Honestly, it all comes down to money. That's American healthcare for you. Feels like another round of "try it first, fix the rules when something breaks."
NervousFingersvip
· 01-07 07:39
AI prescribing without a doctor's signature? I'll wait and see how this backfires... I get the pressure of rising medical costs, but this move is a bit too aggressive. 190 common drugs sounds safe enough, but if something actually goes wrong, who's responsible? Honestly, I'm a bit wary; it feels like a mess is bound to happen sooner or later. Improving efficiency is fine, as long as patient safety isn't the price. Once again, loosening regulation under the banner of an "innovation pilot"...
MaticHoleFillervip
· 01-07 07:35
AI prescribing does sound convenient, but I still have to ask: what if there's a bug? Doctors make diagnostic errors, so why would machines be any different? 190 kinds of drugs sounds safe, but if it ever causes a death, who's responsible? That's capitalism for you. The point is to save money and push it through; if something goes wrong, the government takes the blame. Honestly, the shortage of medical resources in rural areas isn't something AI caused; at root it's a system problem. A pilot is fine, but give patients an informed-consent option and don't force them to use the machine.