Explainers

Did AI Write The Exam? Jindal Law Student’s Fight May Set Academic Rules

The student argues that AI tools are not plagiarism unless they infringe on copyright and claims the university lacked clear AI guidelines. His case highlights the need for clearer academic policies as AI use among students grows.

By Hera Rizwan

8 Nov 2024 10:26 AM IST

When Kaustubh Shakkarwar, an LLM student at Jindal Global Law School, took the end-term exam for the subject "Law and Justice in the Globalising World," he expected his work to be evaluated like that of any other student. Instead, on June 25, he received an unexpected response from the university’s Unfair Means Committee (UMC): they alleged that 88% of his exam responses were “AI-generated” and promptly marked him as having failed the subject.

Shakkarwar, who is pursuing a Master's in Intellectual Property and Technology Laws, contested the accusation and filed a lawsuit against the university.

In the lawsuit against OP Jindal Global University, he argued that his exam answers were entirely his own and not generated by any artificial intelligence (AI) tool. The exam in question was administered online, and students were required to submit their responses electronically.

Shakkarwar argued that the university did not provide any clear guidelines regarding the use of AI in academic submissions, and therefore, he should not be penalised. He emphasised that AI itself does not inherently equate to plagiarism unless its use infringes upon copyright laws.

The Punjab and Haryana High Court has since directed the university to respond to Shakkarwar’s petition, setting up a high-stakes legal debate about the role of AI in academic work.

Interestingly, the stakes go beyond a single exam for Shakkarwar. The law student is developing an AI-based platform aimed at supporting litigation services. His platform 'fidy.ai' is touted to help users navigate legal processes by translating vernacular legal documents into English, sending timely notifications about their court cases (such as reminders shortly before hearings), filing trademarks, and monitoring for any similar trademarks that could affect their claims.

Ironically, the same technology that powers his platform has now cast a shadow over his own academic integrity.

What does the petition say?

The petition states that the university accused Shakkarwar of plagiarism, claiming that 88% of his exam responses resembled AI-generated text.

Shakkarwar asserts that his original, human-generated responses were unfairly labelled as AI-generated, with no opportunity to appeal the decision. He claims he requested a formal review, but the university’s Controller of Examinations denied it, asserting that there was no reason to establish a review committee.

Shakkarwar’s legal counsel has also argued that despite numerous requests, the university had failed to provide the "First Ordinance"—a key regulatory document that outlines student policies "under Section 26" of the university's rules. This ordinance, which details university procedures for exams, governance, and discipline, is typically published in the Haryana Government Gazette as a public document, as per the Haryana Private Universities Act.

Shakkarwar’s counsel claims that the ordinance has not been shared with him or made available on the university’s website, raising questions about transparency.

The big question of AI use in academia

Shakkarwar’s case highlights the need for universities to develop clear policies on AI use in academic settings. His argument that Jindal Global Law School had no explicit policy on AI usage adds to an ongoing debate: should educational institutions establish guidelines on how students can and cannot use AI? 

Speaking to BOOM, lawyer Nandita Saikia emphasised that the solution lies in a comprehensive approach to understanding and regulating AI, rather than solely focusing on AI-specific policies. "What we need is a comprehensive approach to understanding what AI is and how it should be used," she said.

She further suggested that universities should update their existing plagiarism and copyright policies to account for the growing influence of AI tools in student work.

To illustrate this point, Saikia provided an example: "AI tools that generate citations in the correct format are widely used in academia. It’s difficult to argue that such tools are unethical. However, using Generative AI to write an essay and passing it off as your own is clearly unethical. Additionally, if the AI-generated essay copies or paraphrases existing scholarly work, it could result in plagiarism or copyright infringement."

Dominic Karunesudas, an AI-ML and cybersecurity expert, echoed these concerns, arguing that universities need clear, practical AI policies that can guide both students and faculty. He pointed towards leading US institutions that make it clear to students that they are fully responsible for the output generated using AI tools.

According to Karunesudas, “The universities must mandate that faculty familiarise themselves with guidelines on AI and academic integrity,” so they can provide clear instructions to students on acceptable academic behavior in an AI-driven world.

Reliability of AI Detection Tools

The limitations of AI-detection tools are also a factor in this case.

AI tools can misidentify human-written text as AI-generated, leading to what experts describe as "false positives." This risk is not hypothetical: Turnitin, a widely used plagiarism detection service, recently reported an increase in false positives soon after launching its AI detection tool. The issue is particularly prevalent when the tool detects less than 20% AI-generated writing in a document, casting doubt on these systems' reliability.

According to Karunesudas, international organizations such as UNESCO and the Institute of Electrical and Electronics Engineers (IEEE) have issued ethical frameworks for AI, yet these guidelines are voluntary and lack enforceable accountability.

“AI detection tools should be used carefully as they may not always be reliable,” Saikia advised, suggesting that universities need to be cautious when implementing these systems in academic settings.

Highlighting the limitations of AI detection systems, Karunesudas noted that these tools have "incorrectly accused students of cheating" in the past and are likely to do so in the future as well.

Shakkarwar’s petition brings an important issue to the forefront: as AI technology evolves, academic institutions need transparent, accessible guidelines that address ethical considerations and AI’s potential impact on learning. This case may, in fact, shape the future standards for academic integrity in the age of AI in India and beyond. 
