The year 2024 bore the brunt of the massive hype around generative AI that began the previous year. With the sudden ubiquity of "AI chatbots", we found ourselves facing myriad threats from mostly unregulated systems running amok, and occasionally wreaking havoc.
Let us take a look at some of the most remarkable developments in AI in 2024, and a few of the biggest expectations from the industry in 2025.
1. OpenAI – A Year Of Lawsuits
Just over two years ago, OpenAI launched ChatGPT, officially kick-starting the AI hype and becoming generative AI's "industry leader" overnight.
However, in the final days of 2023, the company was slapped with a copyright infringement lawsuit by US-based media outlet The New York Times, which argued that OpenAI unlawfully used millions of its articles to train chatbots that now compete with the media outlet. The New York Times' lawsuit foreshadowed what was to come in 2024, as OpenAI was hit with several other lawsuits. US comedian and author Sarah Silverman, along with fellow authors Christopher Golden and Richard Kadrey, sued OpenAI and Meta, alleging that their copyrighted works were used to train chatbots without their authorisation.
A month ago, major Canadian media outlets including Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC filed a lawsuit against OpenAI for using their works to train its chatbots. The media outlets alleged that OpenAI was illegally exploiting their journalism for commercial gain, and sought punitive damages of US$14,300 per article.
Indian news agency ANI took a similar route, suing OpenAI in the Delhi High Court and alleging copyright infringement, false attribution and improper use of its content.
Italian data privacy regulator Garante fined OpenAI €15 million to close an investigation into the company's use of personal data to train its chatbots, which the regulator argued contravened the European Union's strict data privacy laws.
Scarlett Johansson Alleges Voice Theft
Back in May, OpenAI launched a voice assistant called Sky, which immediately drew an allegation from Hollywood actor Scarlett Johansson that it bore a striking resemblance to her own voice. Johansson added that she had declined a prior request from the company to provide her voice for its chatbot. Many were quick to note the resemblance between the chatbot's voice and that of Samantha, the fictional AI voice assistant in the 2013 film "Her", voiced by Johansson herself. Just days before Sky was launched, OpenAI CEO Sam Altman posted on X with the text, "her."
After Johansson threatened legal action, Sky was rolled back and other voices were introduced.
Death Of OpenAI Whistleblower
26-year-old former OpenAI researcher-turned-whistleblower Suchir Balaji was found dead in his San Francisco apartment in November, weeks after he accused his former employer of copyright violations and unethical business practices. Authorities treated his death as a suspected suicide, citing the lack of evidence suggesting foul play.
2. AI and Elections
Over 50 countries held general elections in 2024, a year that has been dubbed the "year of elections". With the polls coinciding with the AI boom, experts around the world warned of the huge potential for AI to be abused to spread election-related misinformation and enable deceptive practices. While no significant AI-linked disaster struck elections around the world, we saw rampant use of AI, specifically deepfakes, to generate disinformation and garner support for parties and candidates.
Bangladesh and Pakistan
In Bangladesh, which went to the polls in January, the eve of the elections was marred by deepfake videos of a few independent candidates falsely announcing their withdrawal from the race. The elections faced widespread allegations of rigging, and the newly re-elected government under Sheikh Hasina was toppled by mass protests seven months later.
Pakistan, which also faced allegations of rigging and unfair polling, saw deepfake videos of jailed opposition leader Imran Khan and his party members go viral, falsely showing them announcing a boycott of the elections. Interestingly, due to Khan's incarceration, his party also released authorised deepfakes of him in a bid to garner public support.
Cruel Tyrant to Cute and Chubby Leader
25 years ago, Prabowo Subianto was the son-in-law of Indonesia's longest-serving dictator Suharto, and a much-feared Major-General linked to the torture and disappearances of activists. This year, he used tools like Midjourney to create a "cute" and "chubby" AI avatar of himself and reinvent himself as a lovable leader, eventually winning the Indonesian presidential elections.
Indian Elections – Deception, Resurrections and Memes
The Indian general elections saw widespread use of AI in campaigning. AI translation tools were used by leaders to create campaign material in multiple languages, while enormous amounts of voter data collected by political consultants were used to create highly personalised content using AI voice clones.
We also saw 12 different instances of deepfakes being used to target parties and leaders - the most prominent being deepfakes of actors Ranveer Singh and Aamir Khan, in which they could be seen criticising the Narendra Modi government or endorsing the Congress party.
Authorised deepfakes were also used to resurrect the public image of deceased political leaders like Karunanidhi and Jayalalithaa, to rally support for their respective parties.
AI-generated memes also became commonplace, primarily to target opposition leaders. Right in the middle of the polls, Modi shared an AI-generated "Lil Yachty" meme of himself, in which his deepfake could be seen dancing on stage. He shared it as a counter to a similar meme of West Bengal Chief Minister Mamata Banerjee, which had prompted the authorities to threaten legal action against the meme's distributors.
The US Elections
In January, the use of AI in the US elections was marked by a fake robocall using outgoing US President Joe Biden's voice clone, in which he could be heard telling New Hampshire voters to skip the primary election. This sparked massive outrage over the dangers posed by deepfakes in elections.
During election season, incoming President Donald Trump and his billionaire backer Elon Musk shared multiple deepfake images and videos of opponent Kamala Harris under the garb of memes. As in India, many Trump fans also shared deepfake memes of him alongside Musk.
3. AI Regulations
With growing concerns around the risks posed by AI systems, countries around the world took widely different approaches to regulating AI.
EU AI Act
One of the most significant regulations on AI systems was approved by European lawmakers in March. The EU AI Act takes a risk-based approach to regulating AI, identifying four levels of risk from "minimal risk" to "unacceptable risk". Systems falling under "unacceptable risk" will be banned, while those under "high risk" - such as AI used in critical infrastructure, educational or vocational training, law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes, among others - will be subjected to strict obligations.
South Korea – AI Basic Act
In the final days of the year, the South Korean National Assembly passed the "Basic Act on the Development of Artificial Intelligence and the Establishment of Trust" (also called the AI Basic Act).
Similar to the EU AI Act, the AI Basic Act takes a risk-based approach to regulating AI, with frameworks classifying AI systems according to their level of impact on human rights and safety. It is expected to take effect in January 2026.
With South Korea still reeling from a massive deepfake pornography scandal that hit schoolgirls across the country, the act includes provisions meant to combat deepfakes (such as watermarking) and disinformation, while mandating stronger penalties for AI-led defamation.
US & UK
While the United Kingdom is yet to finalise its AI regulatory approach, growing abuse of deepfakes led to amendments to the existing Online Safety Act that explicitly make the malicious creation and distribution of synthetic non-consensual imagery a punishable offence.
The United States has taken a decentralised approach to regulating AI, with many states advancing their own laws to govern the technology. This tracker by a US-based law firm lists the AI laws currently being proposed or implemented across different states in the country.
According to US government relations company MultiState, 18 states enacted laws to clamp down on deepfakes, especially those targeting minors and electoral candidates. The US Congress is also mulling additional federal legislation to regulate the creation and distribution of deepfakes in the country.
India – No Regulations, And A Little Confusion
India still lacks a comprehensive AI regulatory framework, and has instead proposed a set of voluntary guidelines.
However, confusion arose earlier this year when reports suggested that generative AI startups would need government approval to launch their products. The trigger was Google's Gemini AI responding to the question "Is Modi a fascist?" with an ambiguous answer that alluded to the accusations of fascism faced by the Indian Prime Minister.
The government rushed out an advisory requiring tech companies to seek its approval before releasing AI tools that were "unreliable" or "under-tested". This caused backlash and outrage, as well as more confusion, since the advisory failed to properly define such terms.
The Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, then tweeted a clarification that the advisory was aimed at "large platforms and will not apply to startups", and was also "aimed at untested AI platforms from deploying on Indian Internet". With the tweet causing further confusion, the advisory was eventually rolled back.
4. Deepfakes – Steroids for Digital Criminals
Better generative capabilities aided by deep learning methods led to an evolution of deepfake tools in 2024, making them more accessible and capable of producing more convincing fakes. Digital criminals were quick to adopt deepfakes to add sophistication to their crimes.
Deepfake Porn
Last year saw a scourge of AI-generated non-consensual intimate imagery, or unauthorised deepfake porn, particularly targeting women and girls across the world. Towards the beginning of the year, explicit deepfakes of pop star Taylor Swift went viral on X, forcing the platform to take action to remove such visuals.
South Korea was embroiled in a massive deepfake porn scandal that particularly impacted schoolgirls. Thousands of members gathered on messaging app Telegram to organise sophisticated harassment and blackmail of women using newly available deepfaking tools. The outrage led many young female internet users to remove their photos from social media, and forced the government to enforce harsher penalties for the creation and distribution of sexual deepfakes.
Deepfake Scams
While misogynists were exploiting deepfaking tools to harass and blackmail women, digital scammers were having a field day executing highly advanced and sophisticated scams and frauds, tricking people across the world into investing in fraudulent funds and cryptocurrencies, or into making unauthorised transactions.
In one of the biggest scams led by generative AI, scammers used real-time deepfake video and voice cloning to impersonate the CFO of British multinational professional services firm Arup, and tricked a Hong Kong-based employee into transferring US$25 million of the company's funds.
Scammers also used deepfakes of billionaire Elon Musk to trick people into investing in spurious cryptocurrencies or transferring their money to unknown bank accounts. A South Korean woman reportedly "fell in love" with a scammer impersonating Musk on Instagram, and transferred US$50,000.
5. First Nobel Prize for AI - And A Warning
While countries across the world were dealing with the aftermath of the generative AI boom, this novel field - once relegated to science fiction - got its own recognition with a Nobel Prize in Physics, awarded to British-Canadian computer scientist Geoffrey Hinton (shared with American physicist John Hopfield). Hinton is known as the "Godfather of AI" for his pioneering work on artificial neural networks (ANNs) - computational models inspired by the structure and functioning of biological neural networks in animal brains.
Hinton's breakthroughs in training ANNs - including his work on techniques like backpropagation that underpin deep learning - laid the foundations for many of the popular generative AI applications in use today.
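For readers unfamiliar with the term, the toy Python sketch below illustrates the core idea of backpropagation: errors at the output are propagated backwards through the network to adjust every weight. It is a minimal example written for this article - the XOR task, network size and learning rate are arbitrary assumptions, not drawn from Hinton's papers or any production system.

```python
import numpy as np

# Minimal sketch of backpropagation: a tiny one-hidden-layer network
# learning XOR. All sizes and values are arbitrary choices for the demo.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output
lr = 0.5  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient from output to input.
    d_out = (out - y) * out * (1 - out)   # through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)    # through the hidden sigmoid

    # Gradient-descent updates nudge every weight to reduce the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # converges towards [0, 1, 1, 0]
```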
Beyond his contributions to modern AI, Hinton has also consistently warned against unconstrained AI development. Speaking to BBC Radio 4, he reiterated his prediction of a "10-20 percent chance" of AI wiping out humanity in the next three decades. Hinton stressed the need for more government regulation overseeing AI development, adding that "the profit motive of large companies is not going to be sufficient to make sure they develop it (AI) safely."
6. The Menace of AI Companions
Speaking of unsafe AI applications - 2024 also saw a boom in AI companion chatbot providers, resulting in some disastrous outcomes.
Character.ai
AI companion provider Character.ai made a US$2.7 billion deal with Google last year that let the tech giant rehire two of its former talents, who had left Google to launch Character.ai. The same year, the Social Media Victims Law Center and the Tech Justice Law Project filed two lawsuits against Character.ai - the first on behalf of a mother in Florida who accused the platform's AI chatbot of driving her 14-year-old son to suicide, and the second on behalf of two Texan families who accused Character.ai chatbots of recommending violent acts, and of sexually abusing and grooming their minor children.
Furthermore, a data breach at "AI girlfriend" platform Muah AI revealed that many of its users were seeking to engage in child abuse fantasies on the platform.
7. AI Slop Floods Social Media
Social media platforms, once a space to get updates from friends and family, were flooded with unwanted, low-quality AI-generated media, known as AI slop.
An investigation into Facebook's Creator Bonus Program revealed that its engagement-based reward mechanisms incentivised users around the world to create AI slop at scale and spam the platform for engagement farming.
As a result, social media feeds are increasingly saturated with AI-generated images of disabled war veterans, houses shaped like giraffes, and the infamous crustacean and muscular renditions of Jesus, among other bizarre imagery made to provoke reactions and generate engagement.
Mark Zuckerberg, in a conversation with Nvidia’s Jensen Huang at the SIGGRAPH 2024 conference in July, heralded generative AI as the next big phenomenon, while implying that user feeds will see an even greater influx of such AI-generated posts.
Meanwhile, Connor Hayes, vice-president of product for generative AI at Meta, told the Financial Times last week that the company plans to populate its platforms with AI characters among human users, who would "exist on our platforms, kind of in the same way that accounts do."
This suggests that not only will these AI characters generate their own stream of AI slop, they will also keep users engaged in conversations, much like AI companion providers.
What To Watch Out For In 2025
Keeping the above in mind, we can expect more AI-generated users to flock to social media, creating more AI slop and potentially changing the social media landscape entirely.
Fewer Human-Human Interactions Online
Last year, creators at digital content subscription platform OnlyFans - better known for its widespread use by adult content creators - started using chatbot platforms like FlirtFlow to "talk dirty" with subscribers on their behalf. This is one of many examples of content creators deploying AI chatbots to represent them to their followers.
Similarly, Meta's AI Studio now allows creators to build their own virtual AI avatars to increase their interactions with their followers.
Along with the arrival of AI characters in our midst on social media, this will likely reduce human-human interactions and increase human-AI interactions in 2025.
AI Agents
Big Tech companies have been signalling a boom in AI agents in 2025, with many calling them the next big thing in AI.
AI agents are autonomous AI systems that can perform tasks without human intervention. Their key features are the ability to store relevant knowledge to understand context, use advanced planning algorithms, and use machine learning to adapt and refine their behaviour over time to best achieve their objectives.
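As a rough illustration of that loop - remember, plan, act, adapt - the Python sketch below strings the pieces together. Every class and rule here is a simplified stand-in invented for this article; real agents replace them with LLM-driven planners and tool APIs.

```python
# Minimal sketch of the agent loop described above: store context,
# plan, act, then adapt from feedback. Every class and rule here is a
# simplified stand-in, not any vendor's actual agent framework.
class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal
        self.memory = []  # stored knowledge, used as context for planning

    def plan(self, observation):
        # Real agents use LLMs or planning algorithms; this toy rule just
        # retries the last action if it failed, otherwise moves forward.
        if self.memory and self.memory[-1]["feedback"] == "failed":
            return "retry:" + self.memory[-1]["action"]
        return f"step_towards:{self.goal} given {observation}"

    def act(self, action):
        # Placeholder for calling tools or APIs in the outside world.
        print(f"executing {action}")
        return "ok"

    def run(self, observations):
        for obs in observations:
            action = self.plan(obs)
            feedback = self.act(action)
            # Adapt: record the outcome so future plans can use it.
            self.memory.append({"action": action, "feedback": feedback})

agent = SimpleAgent(goal="book_flight")
agent.run(["user_request", "payment_page"])
```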
OpenAI CEO Sam Altman expects AI agents to go mainstream in 2025, while US-based software company Salesforce has already made deals to install AI agents "at more than 200 companies including Accenture, Adecco Group, FedEx, IBM and RBC Wealth Management," according to a report by Bloomberg.
Small Language Models
In the past few years, we have seen an explosion in large language models (LLMs) - models built using deep learning techniques and trained on massive datasets of text in order to process and generate human-like text.
ChatGPT, Claude and Gemini are all examples of LLM-based systems, which require tremendous processing power, can perform an enormous range of tasks, and are highly expensive to build and run.
With companies seeking to reduce the carbon emissions and training time of AI models, industry analysts expect 2025 to see a spurt in the use of small language models (SLMs), which are trained on significantly less data and far fewer parameters. An SLM can be as little as 0.01 per cent of the size of GPT-3, requiring much less computational power and deployment time.
SLMs are trained on highly specialised datasets and built for very specific tasks, as opposed to LLMs, which serve a general purpose.
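To give a sense of how lightweight running a small model can be, the sketch below loads a compact open model with the Hugging Face transformers library. The choice of distilgpt2 (roughly 82 million parameters) and the customer-support prompt are illustrative assumptions for this article, not a recommendation for any particular deployment.

```python
# Illustrative sketch: running a small language model locally with the
# Hugging Face transformers library. distilgpt2 is a compact, general
# model used here only as an example; a purpose-built SLM would be
# fine-tuned on a narrow, task-specific dataset.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Customer query: where is my order?\nSupport reply:",
    max_new_tokens=40,       # small models answer quickly, even on a CPU
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```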