By Sunjun Hwang

It’s no surprise how fast AI technology is advancing and becoming a big part of our lives. Whether you have noticed or not, AI is incredibly prevalent, from personalised social media feeds to spell checkers reading through your email drafts.

Donald Trump shocked his supporters when he wrote a Truth Social post saying that he might be arrested. They were flabbergasted when, a few days later, pictures of him being tackled by police and playing basketball in an orange jumpsuit went viral on Twitter.

The Big Don’s fans were relieved when the pictures turned out to be fakes made by journalist Eliot Higgins, but it wasn’t Higgins’s Photoshop skills that made them look so realistic. Instead, he simply used an image generator called ‘Midjourney’, typing in keywords such as ‘Trump’ and ‘arrest’, and the program created the images for him. In other words, they were the work of AI.

As authentic as the pictures seemed, one can easily spot the flies in the ointment on closer inspection, such as Trump having three legs and police officers holding out distorted hands. Higgins admitted that the images were fake and produced with AI, which makes it hard to say they had a serious political impact, but some Twitter users still seemed to have been fooled.

Had the images been more realistic and not credited, chaos would surely have ensued, not to mention serious reputational damage. Deepfakes, another common type of AI-generated content, have the potential to fool even more people.

Amidst the Russo-Ukrainian War (not to be confused with the Russo-directed Infinity War), a video of Ukrainian President Volodymyr Zelenskyy ordering his troops to surrender appeared on a mainstream Ukrainian TV channel. The footage itself was scuffed: the President was suspiciously motionless and had a different voice. Clearly the editor wasn’t getting paid much.

There is no doubt that, had the video been of a little higher quality, it could have had massive consequences on a global scale.

Professor Kathleen Carley, who researches the social impacts of new technology, says: “We are going to see more deepfakes entering the realm of politics, and we’ve already seen it cause huge problems, with deepfakes being made of certain politicians, and actually disrupting governments in various countries for a few days.”

Now you may be wondering: how many people are actually naive enough to instantly believe everything they see on the internet? Well, for obvious propaganda that readily gives itself away, such as the examples above, not many. However, not everything is obvious enough to clock immediately.

Dr Carley says: “Deepfakes don’t even have to be the president or the chairman of a country. Most people don’t know what generals think – what if some general said, ‘Hey we’re going to war with England tomorrow’?”

What’s more, most of the time we do not realise how dominant AI is within our lives and how much influence it has on the choices we make.

For instance, you will notice that the content displayed on your Instagram feed is different from your friends’. This is because Instagram’s AI-powered algorithm recommends content based on your personal interests and search history. The same algorithm might be exploited by political actors, particularly to gain votes and political support.

Former US President Barack Obama’s presidential campaigns are considered the precursors of the data analysis that AI performs today. When they weren’t out playing golf or being quizzed over Barack’s ‘real name’, Obama and his team identified voters and analysed their political preferences, values, and even their likely voting behaviour. With that data, they focused their campaign appeals on people who were likely to resonate with them, micro-targeting them with personalised messages and advertisements.

Their tactics increased resource efficiency and won the favour of potential voters, ultimately resulting in a win.

Dr Carley says, “Whenever there is an election – it doesn’t matter which country – AI is actually used in different social media forms to try to identify voters, build up or tear down candidates, get voters to form coalitions, and emphasise certain issues to make them bigger issues than what they really are.”

In the 2022 presidential election in the Philippines, candidate Ferdinand Marcos Jr. secretly hired mobs of internet trolls to flood TikTok with videos depicting him favourably, helping him gain a huge number of votes from the younger generation.

Now imagine if the same process could be automated with bots. They would save Marcos Jr. a lot of time and money, and he wouldn’t even break a sweat.

“We’ve barely begun to see the use of AI. I think the amount will escalate, but I think it’s going to be used in lots of different ways, just like it’s going to be used in lots of different parts of our everyday lives,” Dr Carley says.

By now, AI might just seem like a tool used by politicians to fulfil their political desires. But, as all things have their pros and cons, AI in politics has its advantages as well.

The first example is chatbots. Bots are the simplest form of AI, and they are used by many governments and constituencies. They answer simple questions on behalf of politicians, allowing the public to get instant responses to enquiries without having to wait a miserable five to ten working days.

Next up is healthcare. The NHS uses AI to analyse medical records and identify which patients are more vulnerable to particular illnesses. Such information can help devise preventive care plans, sparing patients potential hospital bills and cutting medical costs for the government as well.

AI is also used to detect fraud. The Indian government runs the world’s largest biometric system, ‘Aadhaar’, which gives citizens and residents a unique identification number linked to their biometric data. This makes it difficult to impersonate someone else. Just recently, AI-based fingerprint authentication was added, giving spoofers even fewer chances of doing their job.

So maybe AI is not that bad after all. Dr Carley says, “I tend to think people are smart, so in the long run the advantages will outweigh the disadvantages.”

But imposing legislation on AI is crucial to prevent its negative impacts. She adds, “We’re going to have to start putting in place better international legislations.

“Enforceable, universal legislation is needed to ensure that no AI system can be used to send political messages and that no AI system can be used to report on political events. Without that kind of major legislation that’s enforceable, you’re going to see lots of negative uses for it.”

Before that happens, what should we do? Unless we want to be manipulated by politicians or fuel the early stages of SkyNet, we need to stay alert. We always need to be aware that AI is hidden almost everywhere online, and that what we see there might not be a fair account of the truth.

We should always think critically and not take everything we find online at face value, to avoid becoming a community misled by political disinformation and propaganda.