When Dr. Garth Graham thinks about health misinformation on social media platforms, he imagines a garden.
For all the information it holds, it's a garden the head of YouTube's global health division acknowledges often needs tending.
“How do you weed out the bad information?” he asked.
“But also…how do you ensure access to good and high-quality information?”
For social media companies, these are long-standing questions that have only grown more pressing as platforms have multiplied and people spend more of their time online.
Now, it's not uncommon to spot misinformation with almost every scroll.
A 2022 paper published in the Bulletin of the World Health Organization reviewed 31 studies looking at how common misinformation is, finding it in up to 51 per cent of social media posts about vaccines, up to 28.8 per cent of COVID-19 content, and up to 60 per cent of posts related to pandemics.
An estimated 20 to 30 per cent of YouTube videos about emerging infectious diseases were also found to contain inaccurate or misleading information.
The consequences can be harmful, or even deadly.
According to research from the Council of Canadian Academies released in 2023, COVID-19 misinformation alone caused more than 2,800 deaths in Canada and at least $300 million in hospital and ICU costs.
Platforms take the risks seriously, Graham said in an interview. “We are always concerned about anything that may produce harm.”
This concern often leads platforms to remove anything that violates their content policies.
YouTube, for instance, forbids content that denies the existence of certain medical conditions or contradicts health authorities' guidance on prevention and treatment.
Examples embedded in its medical misinformation policy show the company removes posts promoting dangerous substances as treatments because they can be lethal, and it bars content touting ivermectin or hydroxychloroquine as COVID-19 cures.
When it comes to vaccines, YouTube prohibits videos alleging that immunizations cause cancer or paralysis.
Facebook and Instagram parent company Meta Platforms Inc. declined to comment for this story and TikTok did not respond to a request for comment, but generally, these companies have similar policies to YouTube.
Yet Timothy Caulfield, a University of Alberta professor focused on health law and policy, still sees medical misinformation on platforms. He recently asked his students to search for stem cell content, and posts promoting unproven therapies came up easily.
Still, he understands some of the challenges tech companies face because he sees combating health misinformation as a game of “whack-a-mole.”
He says spreaders of misinformation move quickly and are often motivated to keep finding ways around removal policies because their posts can boost profits, build brands or push an ideology.
That ability to work around moderation strategies, Caulfield said, shows the issue cannot be solved with a single tool.
The battle will be an ongoing one.
Meta admits in its misinformation policy that what is considered true can change quickly.
The policy states that people have varying levels of knowledge and may believe something is true when it's not.
Meta relies on independent experts and fact-checking organizations to assess the truthfulness and potential harm of content before removing it.
YouTube uses human reviewers and machine learning systems to monitor posts and news coverage for misinformation trends.
The platforms also count on health-care practitioners and institutions to supply and highlight trustworthy content so users can find reliable information easily.
YouTube has partnered with organizations such as the University Health Network and the Centre for Addiction and Mental Health in Toronto.
CAMH received production funding from YouTube for its YouTube channel, where medical professionals provide information on various conditions.
Graham sees partnering with health-care professionals as a way to combat misinformation effectively.
Making credible information easily accessible, he said, helps people have informed conversations and feel empowered.
But not all organizations and doctors have the capacity to disseminate information effectively, said Heidi Tworek, a professor at the University of British Columbia.
Health-care organizations want to put out credible information, she said, but many are short on time and resources, and tight budgets often push communication below other spending priorities.
Some doctors take on the work of sharing information voluntarily, but doing so leaves them exposed to online attacks and threats, and seeing those consequences makes others reluctant to engage in these spaces at all.
Tworek urges platforms to act more responsibly in combating medical misinformation, noting that their algorithms often promote problematic content.
Still, she and Caulfield agree that addressing health misinformation requires effort from everyone.
“The platforms have a significant responsibility. They are becoming similar to essential services and we are aware of the impact they have on public discussions and on division,” Caulfield stated.
“However, we also need to educate people on how to think critically.”
That could start in schools, where students would be taught how to recognize reliable sources and spot when something might be wrong, an approach Caulfield has heard begins as early as kindergarten in Finland.
Regardless of when or how this education is provided, he emphasized that “we need to equip citizens with the skills to determine what is misinformation.”