This November marks three years since OpenAI released ChatGPT, the generative artificial intelligence (AI) tool that has reshaped the workflow and livelihoods of students and educators worldwide, including at Emory University. Now, students and faculty at the University are grappling with how best to approach AI usage for academic purposes. Emory lacks an institutional policy on AI, which has made it challenging for professors and students to navigate its usage in the classroom.
In response, Emory University’s Honor Council has created an information sheet to help professors navigate the use of generative AI. The information sheet warns that there is not a definitive consensus among educators “about the accuracy of detection programs,” and that “detection programs may produce false negatives or false positives.” The sheet also lists some common indicators of AI-generated work.
Office for Undergraduate Education Associate Dean Jason Ciejka, who leads the Honor Council, said the boom in AI use in classrooms over the years has not drastically increased the number of violations the body reviews. Ciejka said AI-related Honor Code violations generally fall into pre-existing violation categories, including plagiarism, unauthorized assistance and data fabrication.
“[AI-related violations] are now a proportion of the total cases that we see,” Ciejka said. “Within our own reports, we haven’t seen a drastic shift in the types of violations that we usually encounter within a given year.”
Ciejka said that despite the absence of a dramatic rise in AI cases, professors still come to the Honor Council with questions and concerns. He reasoned that AI’s accessibility might incentivize students to cheat.
“Many tools being free and accessible to students, that makes it easier, and in some ways more tempting, for students to misuse artificial intelligence,” Ciejka said.
Even in light of this potential risk, some professors, such as Professor of Biology and Computer Science Yana Bromberg, have decided not to ban AI outright. Bromberg compared it to using other classroom tools.
“It’s like a calculator versus doing it by hand,” Bromberg said. “You should know how to use a calculator. Does it mean you shouldn’t know how to add by hand? No, those two go hand in hand.”
Bromberg said it is not as simple as telling students not to use AI; instead, students and faculty need to educate themselves on how to use AI effectively and ethically.
A 2025 Reuters poll found that 71% of Americans are concerned about AI technology taking away jobs from humans. Bromberg said the problem is not AI replacing employees, but rather that people who understand how to use AI will have advantages over their peers.
“They’re not going to lose their jobs to AI,” Bromberg said. “They’re gonna lose their jobs to the people who know how to use AI if they don’t know how to do it, and just plugging in questions in ChatGPT is not knowing how to use AI.”
Associate Professor of Political Science J. Judd Owen has taken a hard stance against AI use since what he calls the “ChatGPT outbreak,” which he said he first observed in 2023. Owen said he has adapted how he leads his classes to combat the rising use of AI by students.
“I’ve been teaching at Emory for probably 25 years now, and I would say that I’ve prosecuted about the same number of plagiarism cases in my whole career as I had instances of unauthorized AI use on this first exam in this one class, and that really took me aback,” Owen said.
Owen said that as of now, he does not have a problem with the Honor Code as a beginning point for AI regulation. He said the use of AI should not automatically be considered plagiarism, as the faculty members should be educating students on the appropriate uses of AI. However, Owen said the rise in AI has made him rethink how he approaches his classes.
“I’ve gone very old school,” Owen said. “I require paper editions of the book, students take notes by hand, unless there are accommodations otherwise.”
Jay and Leslie Cohen Assistant Professor of Religion and Jewish Studies Kate Rosenblatt said the rise in AI will likely alter the future of education.
“We have a responsibility to help them become responsible AI users, but I am also deeply suspect of the people who imagine that AI is going to become ascendant in ways that disregard or devalue or render antiquated everything that we currently value,” Rosenblatt said.
Rosenblatt argued that using AI in humanities classes is just “shortchanging” students’ learning.
Still, students are just as divided on AI use as faculty members. Brigid May (26C) said AI harms students’ critical thinking skills. He said using shortcuts like AI makes it hard for him to fully understand the content, leading to worse exam results.
“I’ve always felt really guilty if I cheated in any tiny sense,” May said. “I’m just averse to taking shortcuts like that.”
May also said using AI feels contradictory to Emory’s core mission and values.
“In an institution where you’re supposed to learn how to process information that might not be part of your worldview and open your mind to new ideas, it’s not conducive to that,” May said.
Branson Adams (29B) took a different stance, saying he uses AI as an “aid” for assignments and calling it a convenient tool for classwork. He said professors are often too harsh when banning AI and that proper AI use is an important skill for students to learn.
“I feel like professors should be more understanding,” Adams said. “It just seems like AI is going to be used more and more, and it’s going to have real-world applications, so I feel like they should start being more lenient.”
AI’s presence on campus is undeniable, leaving students and faculty to make decisions about its integration into, or exclusion from, college education.
“This is, in some ways, the biggest crisis in education that I've seen at Emory,” Owen said. “A lot more attention has to be paid to this issue.”