1. Why this suddenly feels urgent
In recent election cycles across the US and Europe, AI stopped being a future threat and became part of the campaign toolkit:
- Synthetic campaign videos that look real, whether labelled or not.
- AI-written messages personalised for tiny voter segments.
- Deepfake images and audio of politicians saying things they never said.
Researchers found that four in five US adults are already worried about AI-driven misinformation in elections.
You’re not overreacting if your first thought is:
“Will I still be able to trust what I see and hear next time I vote?”
This stack is about getting back to a place where you can answer “yes” with a straight face.
2. What AI is actually doing in campaigns right now
A lot of AI use is boring and invisible:
- drafting emails, tailoring slogans, segmenting voter lists
- generating hundreds of ad variants and testing which ones work best
Then there’s the sharp edge you feel more directly:
- AI-generated political ads and videos that blend real footage with synthetic lines or faces; some are labelled, many are not.
- Synthetic voices mimicking well-known politicians or local figures, used in robocalls or clips.
- Highly targeted content tuned to your fears or anger, pushed into your feed by algorithms that care about engagement more than accuracy.
Campaigns and political actors have always used new tech, from TV ads to micro-targeted Facebook posts. AI just drops the cost of persuasion and fakery close to zero, and makes it easier to run experiments on you without you ever realising.
3. Deepfakes: when your eyes and ears aren’t enough
A deepfake is synthetic media—image, audio or video—created or altered by AI so convincingly that it seems real. In an election context, that can mean:
- a video of a candidate making a racist remark that never happened
- an audio clip of someone “admitting” to corruption
- images placing a politician in an invented scene (a protest, a crime, a party)
Legal and academic analyses are blunt about the risk: deepfakes are now a routine feature of online political discourse, spreading faster than corrections and lingering in people’s memory long after they’ve been debunked.
The danger isn’t just that you might be fooled once. It’s that, over time, you stop trusting anything:
- Real scandals feel “maybe fake”.
- Fake content feels “maybe real”.
That erosion of basic trust is exactly what makes democracies brittle.
4. What governments and platforms are trying to do
No one has a clean, global solution yet. You’re living in an experimental phase.
In the US:
- Multiple states have passed or proposed laws targeting AI-generated deepfakes in election communications, often requiring disclosure labels, mandating takedowns close to Election Day, or giving candidates legal recourse.
- Lawyers warn of a patchwork of state rules that campaigns, platforms and vendors now have to navigate.
In the EU:
- The AI Act includes provisions that touch elections, such as transparency obligations for certain AI systems and rules around high-risk uses tied to democratic processes.
Globally:
- Election bodies and democracy organisations have been running workshops, trying to understand how AI is already being used and what practical safeguards they can add.
Platforms, under pressure, have experimented with:
- labels for AI-generated content
- political ad rules that restrict targeting or synthetic media
- detection tools for manipulated content
But you’ve probably noticed: rules vary by platform, enforcement is inconsistent, and bad actors learn quickly how to route around new policies.
So the honest answer right now is:
There are rules, but they are fragmented and incomplete.
You still need your own filters.
5. Why this hits your brain harder than old propaganda
Propaganda isn’t new. What changes with AI and modern feeds is the speed, scale and personalisation.
A recent experiment on X (Twitter) during the 2024 US election showed that small shifts in how partisan posts are surfaced in the “For you” feed can drive a spike in affective polarisation—negative feelings toward the other side—equivalent to three decades of change compressed into a week.
Translate that to your experience:
- Your feed can quietly become a more hostile, fearful place without you changing anything.
- AI tools let campaigns and influencers test which emotional triggers work best on people like you, then scale those versions.
Deepfakes plug directly into this system:
- They’re built to be shareable—funny, shocking, infuriating.
- They ride on algorithms tuned for engagement, not accuracy.
- They land in a context where you’re already primed to distrust “them” and trust “your side”.
That’s why simply saying “I’ll be careful” isn’t quite enough.
6. Practical ways to keep your political choices your own
You can’t inspect every pixel of every image. But you can change how you interact with political content so you’re much harder to manipulate.
Here are a few moves that genuinely help:
Slow down on anything that spikes your emotion
If a clip makes you instantly furious, disgusted or triumphant, treat that as a warning sign. That’s exactly the state in which you’re most likely to share first and think later.
Ask yourself:
- Who posted this first?
- Who benefits if I believe and share this?
- Has any reputable outlet I trust confirmed it?
Double-source anything important
If something would actually change your vote if it were true, don’t accept it from a single tweet, TikTok or screenshot.
- Check a mainstream outlet you normally disagree with and one you usually agree with.
- Look for evidence that both are at least aware of the claim.
If no serious outlet is touching a “huge scandal”, that’s a signal in itself.
Look for labels—but don’t lean on them completely
Some platforms and campaigns now label AI-generated content. That’s useful when it appears, but:
- not everything is labelled
- labels can be subtle or easy to miss
- malicious actors can simply avoid them
Treat labels as bonus information, not a full safety system.
Be stingy with your shares
You don’t control what others post, but you do control whether you amplify it.
A simple rule:
“If I wouldn’t defend this as true in a real-life conversation tomorrow, I don’t share it today.”
That one filter alone slams the brakes on a lot of low-quality political content.
7. How to stay informed without getting numb
AI and deepfakes aren’t the only problem. They sit on top of a news environment that already overwhelms you.
You can make two parallel decisions:
- Choose a small set of sources on purpose:
  - a couple of outlets known for basic fact-checking
  - one or two explainers or newsletters that go deep on politics and tech
  - maybe one perspective that doesn’t match your own, so you don’t live in a sealed bubble
- Decide when you’ll engage with politics, and when you won’t:
  - specific windows for news, not all-day doomscrolling
  - no political rabbit holes right before bed
  - days where you consciously step back from feeds but still catch up via a summary or long-form piece later
Stepping back doesn’t make you less responsible.
You’re more likely to actually think, instead of just react.
8. What to watch over the next few years
AI in elections is not a 2024–2025 blip; it’s the new baseline. The interesting questions shift from “will this happen?” to:
- How far do disclosure and labelling rules go, and do they converge across countries or stay patchy?
- Do platforms invest in serious detection and friction, or keep optimising mainly for engagement?
- Can independent watchdogs, journalists and civic groups keep up with the speed of synthetic campaigns, or do we gradually accept a more confused information environment as normal?
You don’t control those decisions directly.
What you do control is whether your own political thinking is driven mostly by:
- evidence you’ve checked, or
- content someone’s model discovered is good at yanking your emotions around.
AI will keep getting better at imitation.
Your job is to get better at noticing when you’re being played—and to leave enough quiet, verified space in your media diet that your vote still feels like your decision, not the output of someone else’s model.