In February, ahead of elections in India, the Indian National Congress, the chief opposition party, shared a parody video of Prime Minister Narendra Modi on Instagram. It used AI tools to superimpose his face onto that of Justh, an Indian singer whose song “Chor” (“Thief”) had gone viral on social media. Producers slightly changed the lyrics, cloned Modi’s voice and paired it with animated visuals of the leader alongside industrialist Gautam Adani. The video was a gibe at their closeness, which had seen Adani acquire several airports, seaports and power plants in the country after Modi’s ascent to power in 2014.
Modi’s Bharatiya Janata Party (BJP) responded with an AI-generated video of its own. In it, senior Congress party leader Rahul Gandhi’s face was superimposed on that of Tejashwi Yadav, an opposition leader in Bihar, and he was made to seem as if he were addressing Mamata Banerjee, a leader who had broken with the Congress-led alliance just before the elections. It was a dig at the fragile relations within the opposition bloc.
This exchange marked the onset of an AI-driven war and ushered in a new style of political campaigning in the country, in which political parties employed AI tools such as voice cloning, conversational bots, personalized video and text messaging, QR codes for clicking selfies, hologram boxes and deepfake technology to reach voters and take jabs at one another.
In a year when there are elections in over 50 countries, there has been keen interest in the use of AI during India’s, the biggest democratic exercise in the world with over 968 million voters. Several international publications weighed in on the topic, covering the different trends and technology involved. Rest of World, the U.S.-based tech publication, also launched an “AI in Elections” tracker to keep tabs on content from different countries. Parties in India reportedly spent an estimated $50 million on authorized AI-generated content this season, prompting people to dub the 2024 polls the “AI election.”
“However, I would say we’re not there yet. It was the year of AI experiments,” Indian journalist Nilesh Christopher, who spearheaded coverage on AI in Indian elections, told New Lines.
During these elections, deepfake technology was used to resurrect dead politicians. For instance, in Tamil Nadu the digital avatar of popular poet-turned-politician M. Karunanidhi, who died in 2018, appeared at several events. His political rival J. Jayalalithaa, who died in 2016, also made an appearance thanks to AI. In West Bengal, an AI-generated video of former Chief Minister Buddhadeb Bhattacharjee, 80, who is ailing and unable to make public appearances, was released by the Communist Party of India (Marxist), urging people to “save the country and the state.”
Parties hired AI specialists to create conversational bots that would speak to voters about their policies. The BJP deployed “Bhashini,” an AI translation tool that enabled people to listen to Modi’s Hindi speeches in regional languages in real time. In Sikkim, workers for Chief Minister Prem Singh Tamang’s party went door to door, asking people to scan a QR code to hear their leader’s message and click a selfie with him. Politicians also used generative AI to make promotional songs and personalized videos.
On social media, creators used AI cloning tools to generate popular Bollywood and Punjabi songs in Modi’s voice, thereby mixing politics with entertainment. Several meme and political satire pages also added to this trend by using generative AI tools to create content on Indian politics.
“It created social acceptance for such technologies being used in the political context, but in a satirical manner,” Christopher said. “But that also sort of created a moral panic around how this could be misused, and the media got interested in understanding how AI would be used during the elections.”
In fact, Modi himself gave this trend his stamp of approval during the election season by retweeting a meme showing him dancing to a popular Bengali number. “Like all of you, I also enjoyed seeing myself dance. Such creativity in peak poll season is truly a delight!” he wrote. This was despite the fact that last year, after a morphed video of a popular actor went viral, he had highlighted the misuse of AI and deepfakes in a meeting with the media and urged reporters to educate people about the phenomenon.
But his public acknowledgment of AI and its potential during a meeting with Microsoft co-founder Bill Gates — where the prime minister highlighted AI’s role in translating his speeches from Hindi into different languages during the G20 summit last year in Delhi — led to further acceptance of the new technology. Christopher said that local politicians and party workers, who otherwise did not know much about AI, became aware of it and started understanding how it could be put to use.
However, a portion of the media and fact-checkers did start raising concerns about ethics and the lack of regulation in a country that continues to deal with information disorder triggered by the use of WhatsApp and other social networking platforms. In the past decade, thousands of Indians have fallen for rumors and false information circulating on social media, which have often led to physical violence. In 2017 and 2018, more than two dozen Indians were killed in mob lynchings after rumors of child kidnappers circulated on WhatsApp.
Anxieties about AI deepfakes have been heightened by the fact that making synthetic images and videos has become cheaper and more accessible. During the recent elections, popular Bollywood actor Ranveer Singh issued a cautionary tweet after a digitally manipulated video of him criticizing Modi surfaced on social media; in the original, he had praised the prime minister. A morphed video of veteran actor Aamir Khan, in which he was made to mock the BJP, also did the rounds. Deepfake videos of popular TV news anchors falsely predicting wins circulated on social media, as did a simulated phone call involving a politician.
Similar concerns were raised in 2020, when AI-generated dubs of BJP parliamentarian and Bhojpuri actor Manoj Tiwari surfaced in English and Haryanvi, languages in which he is not proficient. Last year, when a morphed photo of top women wrestlers who were protesting against Brij Bhushan Sharan Singh, a BJP parliamentarian and the Wrestling Federation of India president, went viral on social media, it too raised alarms.
In the original, the wrestlers, who had been detained and placed in a police van, wore serious expressions. In the morphed photo, they were made to smile. The image was used to discredit them, and several BJP leaders tweeted it, only to delete it after fact-checkers flagged it as a deepfake. During the election campaign in Madhya Pradesh, voice clones of some politicians with doctored remarks surfaced. AI voice clones and videos were also used during recent elections in Pakistan and Bangladesh.
However, despite these instances, there was little opposition to AI within political circles in India, and it seemed parties had embraced the new tech because it suited their political goals. “There has been little opposition because they were pursuing it on a pilot basis, experimenting on a smaller scale, and in some constituencies to figure out how it works. To bring this to scale, they did show some amount of restraint,” said Aroon Deep, a tech journalist at The Hindu. Political parties, he added, wouldn’t want to close the door on any tools that might help them in the future.
There was briefly some outrage, however, when a video of Home Minister Amit Shah making a controversial statement about scrapping quotas for Dalits, tribals and other caste communities in government jobs and public universities surfaced on social media. The BJP leader accused the Congress party of creating a deepfake video, though it turned out the video had merely been edited, not generated or altered with AI. But the incident prompted the Election Commission to release a statement warning political parties against the misuse of AI-based tools “that distort information or propagate misinformation.”
In the absence of laws that directly regulate AI, the Indian government issued advisories ahead of the elections on its lawful use. To assuage the government’s concerns, startup founders also released their own manifestos and statements on the ethical use of AI during elections. One such founder, Jadaoun, who claims that his firm works on “ethical” AI creations such as authorized translations, conversational chatbots and depictions of deceased leaders as though they were alive and speaking to current issues and candidates, said he has turned down some 50 to 70 unsavory requests since last year. One client asked him to transform a damaging but genuine video of their candidate into a convincing deepfake, so that the allegations it contained could be dismissed as fabricated.
It is these “ethical” creations, coupled with the fact that AI did not generate as much misinformation as expected, that have fostered a more positive view of AI as a tool for election canvassing rather than a threat. Of the 258 fact checks that BOOM, a leading fact-checking website in India, performed from March to the end of May, only 12 concerned AI-generated images and videos, said Archis Chowdhury, a senior correspondent at the publication.
“While this is a small portion of the overall disinformation that we saw, one should expect this number to get higher and higher over the next few years,” Chowdhury said. “We also saw widespread use of rudimentary deepfake tools to create satirical content, most of which were shared without labels or disclaimers on the use of AI.”
“There is a high potential for abuse,” he said, as tools like face swapping, voice cloning and lip-synching improve with more data sets that can be used to make political leaders say incriminating things or target them with deepfake pornography. “You can also use deepfakes for voter suppression — by spreading fake videos of leaders or public figures providing misleading information on the voting process.”
However, in The Conversation, Vandinika Shukla and Bruce Schneier of the Harvard Kennedy School wrote that most campaigns, parties and their candidates used the new tech “constructively.”
“They used AI for typical political activities, including mudslinging, but primarily to better connect with voters,” they said, adding that the country’s experiments “serve as an illustration of what the rest of the world can expect in future elections.”
This underscores the importance of fact-checkers, who were on their toes during the election season, kept a keen eye on AI-related misinformation and worked with tech companies to debunk fake news.
“Indian fact-checkers did a phenomenal job. Until last year, none of the Indian vendors had forensic capabilities,” Christopher said. “Coming into 2024, Indian fact-checkers were prepared and were in partnerships with several forensic tool creators. I think no one piece of disputed content took more than like two days for them to pronounce a verdict, which is phenomenal capability building.”
But Christopher is cautious and has his eyes on the next state elections, in October.
“The deployment of AI for electioneering will increase, and in addition to that, more and more people will get comfortable using freely available tools,” he said.