You're already a patient. You just don't know it yet.

Last month, a routine cardiac CT scan. The radiologist looked at the images, found nothing alarming, and sent the patient home. Healthy. Clear. Good news!
What the radiologist couldn't see, a team at the University of Oxford could. Their AI model detected subtle changes in the fat surrounding the heart, invisible to the human eye, that predict heart failure with 86% accuracy, up to five years before any symptoms appear. The same scan. A completely different verdict.
The patient went home reassured. The data said something else.
This is the paradox quietly reshaping medicine. AI is getting extraordinarily good at finding disease before it exists, at least in any form a patient can feel. Oxford's model, validated in 72,000 NHS patients, identifies people at 20 times higher risk of developing heart failure within five years. Mayo Clinic's REDMOD system spots pancreatic cancer, one of the deadliest cancers partly because it's caught so late, up to three years before clinical diagnosis. And the Delphi-2M model, trained on 400,000 UK Biobank participants, aims to predict a person's next health event across 1,000 different diseases over a 20-year window. None of these are theoretical: Clairity Breast, the first FDA-authorised AI tool to predict five-year breast cancer risk from a routine mammogram, was added to the 2026 NCCN guidelines last month and is already being used on patients at Beth Israel Deaconess.
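A quick aside on what "20 times higher risk" actually means: relative risk is only legible against a baseline. The sketch below uses an assumed 1% baseline purely for illustration; it is not a figure from the Oxford study.

```python
# Relative vs. absolute risk: a "20x" flag means nothing without a baseline.
# The 20x multiplier is the reported figure; the baseline is an ASSUMPTION
# chosen for illustration, not a published number.

baseline_5yr_risk = 0.01   # assumed: 1% of scanned patients develop heart failure within 5 years
relative_risk = 20         # reported multiplier for flagged patients

# Absolute risk for a flagged patient, capped at certainty.
flagged_risk = min(baseline_5yr_risk * relative_risk, 1.0)
print(f"Flagged patient's absolute 5-year risk: {flagged_risk:.0%}")  # -> 20%
```

Under that assumed baseline, the flag translates to roughly a one-in-five chance: exactly the kind of number no one has yet worked out how to communicate to a person who feels well.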
We celebrate these results. They deserve it. But we're not asking the harder question.
When a machine can tell you're going to be ill before you have any symptoms, "healthy" stops being a binary state. A new social category is emerging: the pre-diagnosed patient. And insurance, employment, and your own sense of self haven't caught up.
The science is moving faster than the law
A fresh KFF report published this week confirmed something worth sitting with: 84% of US health insurers already use AI for fraud detection, utilisation management, and prior authorisation. These same companies are watching early-detection technology arrive in clinical settings and asking obvious questions about risk pricing.
A recent piece in The Economist by Oxford philosopher Carissa Véliz made the essential point: once disease prediction becomes data, it can flow into decisions about credit, insurance, and employment with no statutory firewall. In most countries no major law prevents this; predictive health data sits in a grey area between medical record and actuarial input.
Employment is equally unguarded. Algorithmic hiring systems already use personal data to determine the lowest salary a candidate will accept. Workday is currently facing a lawsuit alleging that its screening tools discriminate against applicants by race, age, and disability. The distance from "pre-diagnosed with heart failure risk" to "flagged as a hiring liability" is shorter than anyone wants to admit.
The new social category nobody has named yet
Let me map who gains and who loses when pre-diagnosis becomes routine.
Patients with access gain time, and time is everything in diseases where early intervention changes outcomes. Catching pancreatic cancer three years early isn't just a clinical win: it's the difference between a curative surgery and a palliative conversation.
Patients without access may gain a label without the care to act on it. Mayo's REDMOD flagged up to 19% of healthy controls as suspicious in external validation cohorts: people who would face anxiety, follow-up scans, and bills without having cancer. Scale that across a healthcare system and you get a wave of the medically anxious, stranded in the no-man's land between "healthy" and "patient" (a back-of-the-envelope sketch below makes the scale concrete).
Insurers and employers gain predictive data they can use with limited accountability. The regulatory frameworks protecting people from discrimination based on genetic data don't cleanly extend to AI-generated risk scores derived from routine CT scans.
Clinicians gain a powerful tool and inherit a new burden: how do you tell a person who feels perfectly well that they have a 25% chance of developing heart failure within five years? There's no clinical protocol for this conversation. There's no psychological infrastructure for it either.
Regulators are behind. The Trump administration dismantled federal AI transparency requirements in late 2025 and is now challenging state-level bias laws via executive order. The EU has GDPR, but predictive inference from non-sensitive imaging data is a gap every insurer's legal team has already noticed.
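To make that false-positive burden concrete, here is the sketch promised above. A minimal calculation under stated assumptions: only the 19% false-positive rate comes from the validation figure; the prevalence and sensitivity are invented round numbers, not REDMOD's published specifications.

```python
# Back-of-the-envelope: what a 19% false-positive rate means at population scale.
# Only the 19% figure comes from the text above; prevalence and sensitivity
# are ASSUMED round numbers for illustration.

def screening_outcomes(population: int, prevalence: float,
                       sensitivity: float, false_positive_rate: float):
    """Expected true positives, false positives, and precision (PPV)."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity
    false_pos = healthy * false_positive_rate
    ppv = true_pos / (true_pos + false_pos)
    return true_pos, false_pos, ppv

# Pancreatic cancer is rare: assume ~15 cases per 100,000 scanned (hypothetical),
# 1 million scans, and an assumed 80% sensitivity.
tp, fp, ppv = screening_outcomes(
    population=1_000_000,
    prevalence=15 / 100_000,    # assumed
    sensitivity=0.80,           # assumed
    false_positive_rate=0.19,   # the external-validation figure above
)
print(f"Expected true positives:  {tp:,.0f}")    # ~120
print(f"Expected false positives: {fp:,.0f}")    # ~190,000
print(f"Precision (PPV): {ppv:.2%}")             # ~0.06%
```

Under these generous assumptions, false alarms outnumber true detections by more than a thousand to one. That ratio, not the headline detection figure, is what a health system, and the people inside it, actually inherit.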
The question we're not asking
Coverage of Oxford's heart failure model was almost entirely celebratory. "Big step forward." "Gives patients a fighting chance." All true.
What none of it asked: once a hospital system knows your five-year heart failure risk score, who else gets to know? And under what conditions?
Nature reported last month that dozens of AI disease-prediction models were trained on dubious data and that some are already in clinical use. We're building the infrastructure to label people as pre-patients before we've agreed on what rights those people have.
This isn't an argument against early detection. It's an argument for running the social architecture alongside the science, not five years behind it.
Next experiment: Ask your insurer, your employer, and your GP the same question this week: "What predictive health data do you hold about me, and what decisions is it informing?" Most won't have a good answer. That silence is the data point that matters.
💥 May this inspire you to ask the question before someone else answers it for you.