
I think a classic example of AI amplifying misinformation on social media in Canada is the “AI slop” mentioned by *The Walrus*. For me, the most troubling thing about this kind of content isn’t just that it’s fake, but that it often looks far too normal. Some pieces resemble news articles, others look like opinion pieces, and some even give the impression that they were written by a real person with genuine care. So the problem now isn’t that fake content is hard to spot; rather, it often looks too real, and people easily believe it when they come across it.
Also, I actually rely heavily on social media for information myself, so I really relate to this. Many posts, short videos, or comments in discussion threads seem well-rounded and sound very convincing, but they aren’t necessarily reliable. Often, people don’t check the source first; instead, they’re swayed by the headline, tone, or emotional appeal. I think this is why media literacy is becoming increasingly important. It’s no longer just an abstract concept taught in classrooms—it’s something we encounter every day as we scroll through our phones.
This kind of AI-generated content can shape public opinion by gradually changing how people evaluate information. In the past, when seeing a piece of news, people might have paused to consider whether the source was legitimate, if there was evidence, or if it was taken out of context. But now, as long as the content looks convincing enough, the headline is attention-grabbing, and the tone is authoritative, many people will believe it—or even share it—without a second thought. Over time, not only does misinformation spread more easily, but even originally reliable information may come under suspicion. I think this is more problematic than the straightforward fake news of the past, because it affects not just a single piece of content, but people’s trust in the entire online information ecosystem.
In addition, according to MediaSmarts’ article “Wait… What?”, when faced with AI-generated misinformation, people need to pause, ask questions, and verify the information first. I think this advice is actually very practical, because the hardest part right now is that a lot of content isn’t obviously fake. The more natural it looks, the easier it is for people to let their guard down.
If I were to design a short media literacy course, I’d aim for a simple yet practical approach—something like the “pause, trace, verify” method. First, pause: don’t immediately like, comment, or share something the moment you see it. Next, trace: check the source, assess the account’s credibility, and look for the original source. Finally, verify by examining the content itself to see if anything about the images, text, tone, or context seems off. I believe this approach is more effective than simply telling students “don’t believe fake news,” because it more closely mirrors the situations they actually encounter while scrolling through social media.
Furthermore, according to a report by The Dais, those small AI labels on platforms are often less useful than people think; they don’t necessarily change whether someone believes or shares the content. So I believe we can’t rely solely on platform alerts—users still need basic judgment. Another report from The Dais also notes that Canada needs more systematic AI literacy education, ideally incorporated into K–12 curricula at an earlier stage. To me, this is no longer just a matter of technological development; it is clearly an educational issue as well.
In summary, I believe AI makes misinformation harder to spot—not because it’s always particularly outrageous, but because it can blend seamlessly into the social media content we encounter every day. Precisely for this reason, media literacy education is now more important than ever. It’s not just about teaching people how to distinguish fact from fiction; it’s also about helping them maintain a basic sense of scepticism, judgment, and the habit of verifying information in an environment where it’s increasingly easy to be swayed by content that “looks real.”
References
Bednar, Vass. "AI Slop Will End Canadian Culture as We Know It." *The Walrus*, 6 Jan. 2026, thewalrus.ca/can-canadian-culture-survive-the-age-of-ai-slop/. Accessed 23 Mar. 2026.
"'Wait… What?' Media Literacy Week Highlights Growing Concern over AI-Driven Misinformation." *MediaSmarts*, n.d., mediasmarts.ca/about-us/press-centre/wait-what-media-literacy-week-highlights-growing-concern-over-ai-driven-misinformation. Accessed 23 Mar. 2026.
*Human or AI? Evaluating Labels on AI-Generated Social Media Content.* The Dais, Toronto Metropolitan University, n.d., dais.ca/reports/human-or-ai/. Accessed 23 Mar. 2026.
*Submission to the Consultation on Canada's Renewed AI Strategy.* The Dais, Toronto Metropolitan University, n.d., dais.ca/reports/submission-to-the-consultation-on-canadas-renewed-ai-strategy/. Accessed 23 Mar. 2026.
A lot of interesting ideas here, and a really strong piece of writing. I like the "pause, trace, verify" idea because we're naturally inclined to sense that something is untrue when there's very blatant, explicit use of AI in news or fake images, especially coming from the generation of Photoshop and curated videos. Nonetheless, we're still at risk of falling prey to these more nuanced applications, and even to fake users and AI-written reviews that often persuade us something was hand-crafted with research and dedication by fellow humans. So I love this idea of taking actual time to investigate whether something is even worth our attention in the first place, starting with authentic source recognition. Great post!
I really like how you explain that AI misinformation looks "too real"; that point is very strong. Your example and personal reflection make it easy to relate to. I also think your "pause, trace, verify" idea is very practical and useful. Overall, a very thoughtful and realistic post. Thanks for sharing.
Also, I think you could make your argument even stronger by adding one specific real example, like a viral post or video you personally saw. Right now your explanation is clear, but a concrete case would make it feel more real and convincing.
You could also include one or two actual tools, like a fact-checking website or reverse image search, so readers know exactly what to do when they try your “pause, trace, verify” method. This makes your strategy more actionable, not just an idea.
Another small suggestion is to connect your points a bit more to Canadian context, since you already mentioned MediaSmarts and The Dais. Maybe one sentence about why this matters specifically in Canada would help meet the course expectation.
Overall, your post is already very strong; these are just small suggestions to make it even more practical and clear.