When AI Gets It Wrong: What Every Patient Needs to Know
Artificial intelligence is rapidly becoming part of your healthcare experience—whether you realize it or not.
From helping doctors identify early signs of illness to summarizing your medical records, AI has the potential to improve care in powerful ways. But recently, a prominent voice in healthcare raised an important concern:
AI can be “gamed.”
That warning comes from Judy Faulkner, founder of Epic Systems—the electronic medical record platform used by a large portion of U.S. hospitals. When someone at the center of healthcare technology says we need to be careful, it’s worth paying attention.
Let’s break this down in a way that matters to you as a patient.
What Does It Mean That AI Can Be “Gamed”?
At its core, AI learns from patterns. It analyzes massive amounts of information and treats the patterns it encounters most often as "true."
But here’s the problem:
If false, biased, or repeated information is fed into AI systems, those systems can start to treat that information as fact.
As Faulkner explained, if something is repeated often enough, AI may begin to believe it’s true—even if it isn’t.
This isn’t just a tech issue. It’s a patient safety issue.
Why This Matters for Your Care
1. Your diagnosis could be influenced by flawed data
AI tools are increasingly used to help identify conditions or flag concerns. But if those tools are trained on incomplete or biased datasets, they may miss diagnoses—or worse, reinforce existing disparities.
Research already shows that AI systems can perform differently, and sometimes less accurately, for certain patient populations, potentially affecting how diseases are diagnosed and treated.
2. Bias doesn’t disappear—it scales
If a human makes a biased decision, it affects one patient.
If an AI system is biased, it can affect thousands.
Some studies even show AI can detect things like race from medical images in ways clinicians cannot—raising concerns about hidden bias influencing care decisions.
3. You may not know AI is involved
AI is often embedded quietly into electronic health records and clinical workflows.
That means recommendations about your care—tests, diagnoses, or treatment options—may be influenced by systems you never see.
4. “Black box” decisions can limit transparency
Many AI systems are not easily explainable. Even clinicians may not fully understand how a recommendation was generated.
Experts warn this lack of transparency can undermine trust and weaken the physician–patient relationship.
The Good News: AI Can Help—When Used Well
It’s important to keep perspective.
AI is not the enemy.
In fact, when used appropriately, it can:
Detect subtle patterns humans might miss
Identify early warning signs (like sepsis or deterioration)
Reduce administrative burden for clinicians
Help personalize care
In some studies, AI has even helped clinicians draft clearer, more empathetic responses to patient messages.
The goal isn’t to reject AI—it’s to use it wisely and safely.
What You Can Do as a Patient
This is where you have more power than you think.
1. Ask questions
“Was this recommendation generated by AI?”
“How confident are we in this result?”
“Is there another way to confirm this?”
You don’t need to challenge your doctor—you just need to stay engaged.
2. Be your own data advocate
AI is only as good as the data it receives.
That means:
Keep your records accurate
Correct errors in your chart
Share complete information about your symptoms and history
3. Seek human judgment—not just algorithmic output
AI should support care—not replace clinical thinking.
If something doesn’t feel right, ask for:
A second opinion
Additional testing
A deeper explanation
4. Remember: You are not a data point
AI sees patterns.
Your physician should see you.
The best care happens when technology and human insight work together—not when one replaces the other.
The Bottom Line
AI is transforming healthcare—but it is not infallible.
If it can be influenced, distorted by bias, or "gamed," then it must be used with caution.
And that brings us back to what matters most:
You are the most important voice in your care.
At Sideline MD, our mission is to help you stay informed, ask better questions, and navigate a system that is becoming more complex—not less.
Because the future of healthcare isn’t just about smarter technology.
It’s about more empowered patients.