What’s new in cancer: tackling misinformation in health
In an era when a presidential advisor referred to ‘alternative facts’ when describing proven falsehoods, the need to ‘fact check’ our most powerful leaders is now sadly an accepted norm. In health too, myths and lies cause real problems.
People with cancer may be misled by misinformation (things that are incorrect) or disinformation (falsehoods that are knowingly spread). Misinformation may be ignorant or innocent, but disinformation has a motive, which may be money, mischief or worse.
Misinformation in cancer is not new. There have long been claims, falsehoods and mistaken beliefs about cancer, driven by commercial interest, credulous reporting, low levels of scientific literacy and a hopefulness that something as bad as cancer could have a solution as simple as apricot kernels or mushroom extract.
Scientific enquiry should provide truth, but belief requires trust, which in many areas is lacking. The fraudulent work falsely linking MMR vaccination to autism has been debunked many times, yet the US Centers for Disease Control and Prevention announced plans to run new studies on vaccines just days after the appointment of a vaccine sceptic as US Health Secretary. To this author, that looks like repeatedly asking the same question in the hope of eventually getting an answer you agree with. It should not be necessary. Science in its purest sense should have no bias.
It’s ironic that the sum of human knowledge is now available within a few clicks and yet ‘truth’ is still argued over. Misinformation online and on social media has exponentially increased the problem. Last year Ofcom estimated that 93% of UK adults aged 16+ had home internet access and that more than 47 million people were internet users. In 2020 over 60% of UK citizens searched for health information online, with higher percentages among those under 50. Rates in the USA are likely higher. The internet contains many multiples of ‘alternative facts’. Some would claim that connecting people with fringe beliefs is a powerful antidote to a narrative controlled by vested interest. This may be so, but it also amplifies falsehoods around cancer. ‘Going viral’ is an apt description for the spread of a dangerous agent capable of harm. A new study in Cambridge plans to observe just how these falsehoods spread.
Tracking misinformation is a Sisyphean task. Every year trillions of gigabytes of data are added to the internet. Monitoring it all is impossible, but academics can sample for truth. A review of 200 articles found misinformation in 33%, and 77% of that misinformation was harmful.
Worryingly, engagement was statistically significantly greater for misinformation than for truthful features.
The US HINTS survey (Health Information National Trends Survey) tracks trends in misinformation. In 2024, around 36% of respondents (1740/4959) reported encountering ‘a lot’ of misleading health information on social media. Exposure to misinformation was higher for those in higher-income brackets. People over 75 had difficulty assessing this information, whilst younger adults (18–34) showed increased vulnerability to misinformation. The importance of being able to assess information is reinforced by another US study published this month showing that low levels of health literacy were associated with poorer survival across a range of cancer types.
Artificial Intelligence (AI) is both part of the problem and a potential solution. Most of us will now be familiar with AI-generated summaries of our internet searches, but current AI models can only sort through what is there, not place a value judgement on it. If the internet is full of misinformation, AI may then repeat it. AI is also a source of content itself, with one estimate suggesting 80% of US-based bloggers now use AI in some form (and in case you’re wondering, the answer is ‘no I didn’t’). Some of the major large language models claim to have safeguards in place to prevent them from generating health misinformation, but a 2024 British Medical Journal (BMJ) study showed that these safeguards frequently fail.
This year the FUTURE AI Consortium launched a framework for best practice for trustworthy AI in healthcare and the overview of the framework in the BMJ is well worth reading. The acronym is helpful:
Fairness
Universality
Traceability
Usability
Robustness
Explainability
There have been multiple proposals for dealing with misinformation. Tackling social media disinformation head-on may backfire by simply drawing attention to the falsehoods in the original post. Giving an ‘Information Prescription’ in clinical settings seems an appropriate place to start. At Macmillan we have a wealth of resources that could and should be part of that information prescription. We produce a range of alternative formats, including easy-read information and an increasing amount of translated content. Macmillan also has resources that help with myth busting, and guidance on finding reliable information online. I also recommend the excellent resources from Memorial Sloan Kettering for navigating facts about herbs and supplements.
The ‘New Yorker’ magazine has arguably the most famous fact-checking department in publishing history. Their fastidiousness is sadly less common in many online spaces. Macmillan also has a history and reputation as a trusted source of truthful information. Despite the challenges of information overload and exponential rises in data, we will do all we can to maintain and deserve that reputation. Truth is a precious commodity, too often treated badly; it remains in good hands at Macmillan.