When AI Becomes the First Symptom Checker

In a world where artificial intelligence sits in our pockets and answers questions in seconds, it’s no surprise that many people now turn to AI with their most personal concerns, including their health.

Of the more than 800 million regular users of ChatGPT, 1 in 4 submit a prompt about healthcare every week, according to OpenAI. The company’s new report also notes that more than 40 million people turn to ChatGPT every day with healthcare questions, and that more than 5% of all ChatGPT messages globally are about healthcare, amounting to billions of messages each week.

But when it comes to self-diagnosis, how much trust is too much?

That question sits at the heart of the latest Canadian Institute for Health Information podcast that explores the promise and the risks of AI-assisted self-diagnosis through the lived experience of Alvina Nadeem, an engineer, mother of two, and ovarian cancer survivor, alongside research from University of Waterloo professor Sirisha Rambhatla.

“It connected the dots”

For Nadeem, AI was not a replacement for doctors — but it may have been the reason she went to see one just in time.

In early 2023, Nadeem began to feel “not like herself.” The 36-year-old working mother of two young boys was exhausted, sleeping far more than usual, and, like many women, initially talked herself out of concern.

With no family history of cancer and a generally healthy lifestyle, it was easy to dismiss the warning signs.

“I definitely suppressed the inner voice, and I started to think, oh, you know, if I go to the (ER) waiting room or to my clinic, I'm gonna sit there for three hours just to be told...that it's in my head or it's nothing or ‘welcome to perimenopause.’”

“I don't have time to sit there and be told it's nothing. So let me just see and wait it out,” she said.

But one symptom stood out.

Her menstrual cycle, usually clockwork-regular, suddenly arrived nine to ten days early. That red flag pushed her to take her symptoms more seriously — though not immediately by calling a doctor.

Instead, Nadeem turned to an early version of ChatGPT.

Working in digital transformation, she was already familiar with AI tools. Rather than Googling symptoms and wading through thousands of alarming links, she decided to experiment. She fed the AI vague, open-ended descriptions of symptoms, even creating a fictional persona rather than saying the symptoms were her own.

The results, after two weeks of inputting symptoms and questions, surprised her.

There was no panic, only clarity.

“It didn’t scare me,” she said. “It showed empathy. And as I added more symptoms over time, it started funneling toward the ovaries.”

Eventually, ovarian cancer appeared as one of four possibilities. The AI didn’t diagnose her, but it connected what she calls “the constellation” of symptoms.

That was her first alert. “I don’t think I would’ve gotten seen if it didn’t prompt me to go,” she said.

She did seek medical care, and just in time.

Imaging revealed a tumor on her ovary that had doubled in size in a matter of weeks. Surgery and chemotherapy followed. Her doctors later told her how unusual it was to catch ovarian cancer so early and wondered how she had made the call. Most women in Canada are still diagnosed when the symptoms are more severe, the disease more advanced, and far less treatable.

Asked directly whether AI saved her life, Nadeem didn’t hesitate: “Absolutely.”

It’s why she tells her story.

The Flip Side of Confidence

AI can sound convincing — sometimes dangerously so. That’s where the research of Sirisha Rambhatla, Director of the Critical ML Lab and an Assistant Professor in the Management Science and Engineering Department at the University of Waterloo, offers a sobering counterweight.

Rambhatla led a study titled Medical Misinformation in AI-Assisted Self-Diagnosis (https://formative.jmir.org/2025/1/e66207), which tested ChatGPT’s performance on open-ended, real-world medical questions rather than multiple-choice exam formats. The results were striking: fewer than a third of responses were rated as correct by both medical students and non-experts.

“Thirty percent is not a good number at all,” Rambhatla said.

In one example, the AI attributed the skin rash being described to laundry detergent, missing a crucial detail: the patient wore protective gloves for work, which pointed to an entirely different diagnosis. The error wasn’t obvious, but it was meaningful.

“The differences can be very subtle,” Rambhatla explained, “and they can lead people down the wrong path.”

Her concern isn’t just misdiagnosis, but misdirection. A patient who fixates on the wrong explanation may arrive at a doctor’s office focused on the wrong problem, complicating care rather than improving it.

There’s also the issue of tone. AI systems speak confidently, use “I” statements, and often reinforce users’ beliefs — a phenomenon that can fuel anxiety and, in extreme cases, psychological harm.

Still, Rambhatla doesn’t argue that AI should be dismissed entirely. Like Nadeem, she acknowledges that self-diagnosis — whether through search engines, forums, or AI — already plays a role in how people engage with healthcare, especially for those who feel unheard or who don’t have a regular primary care provider.

The real challenge, she says, is how to use these tools responsibly.

Nadeem agrees. She argues that the missing piece is AI literacy: an understanding that avoids both blind trust and outright fear. She often compares today’s AI moment to the early days of automobiles.

“There were no seat belts, no traffic lights,” she said. “But we learned. We built guardrails.”

Both Nadeem and Rambhatla point to a future where AI could play a safer, more constructive role. That means better training data, more transparency, ongoing human evaluation, and, crucially, patient involvement from the start.

“AI is only as unbiased as the research behind it,” Nadeem noted. For her, AI’s greatest value isn’t diagnosis; it’s preparation. The technology helped her identify the right questions to ask her doctors, making limited appointment time more effective. She also encourages people to ask open-ended questions, such as:

- What could it mean if someone had these symptoms?

- What are 10 questions I could ask my doctor or specialist?

- What are the racial and genetic differences in disease risk and treatment, to help AI become more specific in its information?

Until then, the message from both voices is clear: AI can help raise questions, but it cannot answer them alone.

Used with curiosity, skepticism, and medical follow-up, AI may empower patients. Used as a final authority, it can mislead.

One final disclaimer came from ChatGPT itself. When asked about trusting its answers on questions of health, it put it this way: “trust it as a support tool, not a replacement for clinicians.”
