At the AI Action Summit in Paris, Prime Minister Narendra Modi shared an intriguing observation: Artificial Intelligence, for all its remarkable capabilities, can effortlessly interpret complex medical reports, yet it struggles to depict something as simple as a person writing with their left hand.
Decode decided to test AI models like Meta AI and ChatGPT, and the tests confirmed Modi's claim: despite prompts asking for an image of a person writing with their left hand, none of the tools could generate one.
Despite the clear prompt, AI failed to imagine a left-handed person
This shortcoming is striking, considering that AI can generate hyper-realistic images of war zones, recreate historical figures, and conjure up people who never existed. AI can dream up dystopian cityscapes, design futuristic weapons, and produce entirely fictional and disturbingly hate-filled propaganda.
However, when it comes to showing someone holding a pen in their left hand, it falters. It is a curious irony that AI finds it challenging to portray left-handedness, while simultaneously being capable of generating harmful and biased content with ease.
This stark contrast between AI’s difficulty with harmless details and its ability to amplify harmful stereotypes points to deeper flaws in its architecture. While AI fails at representing something as simple as handedness, it excels at reproducing the biases inherent in its training data.
These biases are not just technical anomalies; they reveal much about how AI models are shaped by the world around them, and the problematic data they are fed.
For instance, during the run-up to the 2024 US election, AI-generated images circulating on X falsely depicted Haitian immigrants eating pets, a racist fabrication pushed by Trump supporters.
One image showed a person of colour chasing a cat, while another portrayed a food truck run by Haitians offering "dogs, cats, and ducks" for breakfast. These deeply racist images spread quickly, illustrating how AI, without proper safeguards, can become a tool for generating and amplifying hatred.
Similarly, Decode found how AI was being used to fuel the baseless "rail jihad" conspiracy, which falsely accused Muslim communities of sabotaging railway infrastructure. The AI-generated images depicted Muslims as perpetrators of heinous acts—child sexual abuse, drugging women, committing rape, sabotaging railways, stone-pelting, adulterating food, and forcing religious conversions.
These images, drawn from social media platforms like X, Facebook, Instagram, and Threads, reflected a troubling pattern of bias and misinformation.
The irony here is glaring: AI cannot accurately depict a left-handed writer, but it effortlessly produces politically charged, prejudiced, and harmful content. These failures are not simply glitches; they expose how AI inherits the biases of the internet, history, and the people who create and train these systems.
To understand why this happens, Decode spoke with AI researchers.
AI as a Product of Its Training
AI has become an integral part of our everyday lives, especially with the rise of platforms like OpenAI’s ChatGPT. Researcher Kate Crawford once famously said, “AI is neither artificial nor intelligent.” She elaborated that AI is made from natural resources, and it is humans who perform the labour behind the scenes to make these systems appear autonomous.
AI learns by being trained on massive datasets of images, text, or both, analysing them to identify patterns and relationships.
For example, in image recognition, the AI looks at thousands, or even millions, of images paired with descriptions to understand what objects, people, or scenes are present. Over time, it adjusts its internal parameters to improve accuracy. However, the quality of training data is critical—if the data is biased or incomplete, the AI inherits these flaws. This is why AI struggles with tasks like depicting a left-handed person.
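The mechanics of that loop can be shown with a deliberately simplified sketch. The code below is purely illustrative and not any real image model: the dataset, features, and labels are all invented. A tiny model with two parameters repeatedly predicts, compares its prediction with the paired description, and adjusts itself to shrink the error, which is essentially what "training" means at this scale.

```python
import math
import random

random.seed(42)

# Invented "image-description" pairs: the feature is a crude measure of how
# round an object looks; the label is 1 if the paired description says "ball"
# and 0 if it says "box".
pairs = [(random.uniform(0.7, 1.0), 1) for _ in range(500)]
pairs += [(random.uniform(0.0, 0.3), 0) for _ in range(500)]
random.shuffle(pairs)

weight, bias, lr = 0.0, 0.0, 0.1

def predict(roundness):
    """Logistic prediction: how likely the description is 'ball'."""
    return 1 / (1 + math.exp(-(weight * roundness + bias)))

for epoch in range(20):
    for feature, label in pairs:
        p = predict(feature)
        # Nudge the internal parameters so the next prediction sits closer to the label.
        weight += lr * (label - p) * feature
        bias += lr * (label - p)

print(round(predict(0.9), 2))  # close to 1.0: round things get described as "ball"
print(round(predict(0.1), 2))  # close to 0.0: square things get described as "box"
```

Whatever pattern dominates those pairs is the pattern the parameters end up encoding; nothing in the loop checks whether that pattern is complete or fair.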
Himanshu Pandey, a digital anthropologist and researcher, explained that AI models rely on training datasets that pair images with text descriptions, helping them recognise objects, styles, and concepts.
However, these datasets often overlook details like whether a person is left- or right-handed. "As a result, the models learn from a pattern that predominantly features right-handed people," Pandey noted. He added that similar issues arise with other visual details, such as showing the correct time on a clock or accurate calendar dates.
Aarushi Gupta, a researcher at the Digital Futures Lab, emphasised that this bias stems from the statistical distribution of the data AI is trained on. "Given that only 10-15% of the world’s population is left-handed, images of left-handed individuals are underrepresented in many datasets," she said.
Without deliberate efforts to include more left-handed examples, AI models struggle to represent them.
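A small, hypothetical simulation makes the statistical point concrete. The left-handed share below follows Gupta's 10-15% estimate; everything else is invented. A naive generator that simply reproduces the frequencies it saw during training will rarely output a left-handed figure unless the data is deliberately rebalanced.

```python
import random

random.seed(7)

def build_training_set(left_share, size=10_000):
    """Invented dataset: each item records which hand the depicted person writes with."""
    return ["left" if random.random() < left_share else "right" for _ in range(size)]

def generate(training_set, n=1_000):
    """Naive 'generator': sample outputs in proportion to what it was trained on."""
    return [random.choice(training_set) for _ in range(n)]

for left_share in (0.10, 0.50):  # skewed data vs. deliberately rebalanced data
    outputs = generate(build_training_set(left_share))
    print(f"left-handed share in training: {left_share:.0%} -> "
          f"in generated outputs: {outputs.count('left') / len(outputs):.0%}")
```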
How AI Generates Hateful and Prejudiced Content
AI models often reflect societal biases, sometimes producing content that is harmful or prejudiced, even when not explicitly trained on such data.
Pandey pointed out that AI models are trained to recognise patterns based on the data they receive. “Without debiasing mechanisms, they cannot distinguish between a benign commonality and a harmful stereotype. With systems devoid of safety layers looking at inputs, the generative model can't understand the difference," he explained.
Gupta further elaborated on the two types of bias present in AI: explicit and implicit. "Explicit bias is when training data deliberately excludes or overrepresents certain groups," she explained.
An AI hiring tool that prioritises candidates from specific universities, for instance, exhibits explicit bias.
Implicit bias, on the other hand, is more subtle. "It occurs due to underlying patterns in the training data," she noted. For instance, a model trained on images of US presidents from the 1700s to the present would learn that male presidents are the "norm," as the US has yet to have a female president.
Even though the model isn't explicitly trained on gender bias, it would likely produce gender-biased results.
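That effect can be sketched with a toy, entirely hypothetical model. The record counts below are invented for illustration. Gender never appears as a rule, only as a pattern in the historical records, yet a model that simply predicts the most frequent value it has seen treats the pattern as if it were a rule.

```python
from collections import Counter

# Invented training records: (role, gender) pairs drawn from a skewed "history".
training_records = (
    [("president", "male")] * 46
    + [("senator", "male")] * 60
    + [("senator", "female")] * 25
)

def predict_gender(role):
    """Return the most common gender seen for this role in the training data."""
    counts = Counter(gender for r, gender in training_records if r == role)
    return counts.most_common(1)[0][0]

print(predict_gender("president"))  # "male" -- the only pattern it has ever seen
print(predict_gender("senator"))    # "male" -- the majority pattern, despite counter-examples
```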
"AI learns patterns from real-world data, including stereotypes, and may inadvertently amplify these biases in its outputs," Gupta said.
In essence, since humans and history are inherently biased, AI, as a product of human labour, inevitably inherits these biases. AI amplifies societal patterns like the underrepresentation of women in certain fields, the overrepresentation of certain ethnic groups in criminal records, and even something as banal as the predominance of right-handed individuals.
The more biased the data, the more AI perpetuates these inequalities.
Can AI Ever Truly "Understand" Context, or Just Remix Patterns?
AI struggles to "understand" context in the way humans do. It doesn't possess a true understanding of concepts like leadership, historical accuracy, or social issues. As Gupta puts it, "It parrots the data it is fed."
Although modifying prompts can help generate more accurate or diverse outputs, Gupta believes this is merely a short-term fix. "It addresses the symptoms, not the root cause," she said.
To address these issues at their core, AI must be trained on more representative datasets, and bias-detection mechanisms need to be improved. "Bias detection cannot happen in a vacuum," Gupta explained, stressing that biases vary across countries, cultures, languages, and sectors. "This requires experts from multiple disciplines, including social scientists, gender experts, and anthropologists, not just engineers and computer scientists."
Despite advancements, she believes that future models may better mimic human cognition, but they will never truly "understand" context as the human brain does.
However, Pandey poses a striking question in this context: "Aren't we all prone to logical fallacies, cognitive errors, hallucinations, and toxicity? Do we want AI models to be utopian while we remain far from it?"
Ultimately, AI's struggle to depict a left-handed writer is not just an amusing quirk—it is a reflection of a broader problem. AI does not learn reality; it learns what is most common in the data it is trained on. And it is this same limitation that allows it to reinforce and replicate harmful biases with ease.