When students in Associate Professor in the Practice of Information Systems and Operations Management Wen Gu’s courses have a question about an assignment, they have a choice: wait for Gu or one of his teaching assistants (TAs) to be available to help them, or consult another aid, an artificial intelligence-powered (AI) TA chatbot.
Trained on Gu’s course material, this AI tool is just one example of how Emory University professors and students are adapting to the fast-changing landscape of AI in higher education.
Three years ago, OpenAI publicly released ChatGPT, which has become the most popular AI chatbot in the world. Now, 90% of undergraduate college students use generative AI tools in their academic work, according to Forbes.
Amid this AI boom in academia, Emory Office for Undergraduate Education Associate Dean Jason Ciejka, who leads Emory’s Honor Council, said that as the technology improves, the tools may lead to more students cheating.
“Many tools being free and accessible to students, that makes it easier, and in some ways, more tempting, for students to misuse artificial intelligence,” Ciejka said.
Some humanities professors have shared their fears of a “ChatGPT outbreak,” including Associate Professor of Political Science J. Judd Owen. However, with recent advances in AI, other professors are adopting and incorporating the technology into their classrooms.
At Emory, individual organizations, schools and departments have “autonomy” over how to use AI in their respective domains, according to Assistant Vice President of University Communications and University Spokesperson Laura Diamond. In fact, the University plans to curate a set of resources for organizations to reference while developing their own AI guidelines on a “University-wide Responsible AI website.” Still, the University is working to make sure community members use AI in a “responsible” manner, according to Diamond.
“Emory values the safe, ethical, and secure use of AI across all of our environments, and at the same time we equally value academic freedom, faculty governance, and the autonomy of schools, departments, and student leaders to set policies that align with their needs,” Diamond wrote.
Emory faculty members have the flexibility and freedom to implement the technology on their own terms, and in different corners of campus, professors are doing so in unique ways.
Psychology Professor Ditches Textbooks for ChatGPT
Department of Psychology Director of Undergraduate Research and Associate Teaching Professor Andrew Kazama (06G, 10G) began experimenting with ChatGPT when it first launched. Kazama explored ChatGPT with the intention of helping students learn more effectively. Now, he has “basically tossed” the standard Psychology (PSYC) 110 textbook.
Instead, Kazama instructs students to use three prompts to have ChatGPT retrieve the necessary information. Rather than reading pages of a psychology textbook, students first ask ChatGPT for information on a subject, then ask how the subject relates to their lives and finally request an “adaptive quiz” on the topic.
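Students run these prompts in the ChatGPT interface itself, but the sequence is simple enough to sketch in code. Below is a minimal illustration using the OpenAI Python SDK; the topic, prompt wording and model name are assumptions for demonstration, not Kazama’s exact instructions.

```python
# A minimal sketch of the three-prompt sequence, assuming the OpenAI
# Python SDK; in Kazama's course, students type these prompts into the
# ChatGPT interface directly. Topic and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
topic = "classical conditioning"
prompts = [
    f"Explain {topic} at an introductory psychology level.",
    f"How does {topic} relate to my everyday life as a student?",
    f"Give me an adaptive quiz on {topic}: ask one question at a time "
    "and adjust the difficulty based on my answers.",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep context
    print(answer, "\n---")
```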
“The guiding principle for how I use it is to make sure that students are using it as a ladder, and not a crutch,” Kazama said. “The primary goal for all of my courses is: I want to give my students the tools to build themselves into better human beings. That’s the thing I care about the most.”
Kazama noted that he still keeps physical textbooks on course reserves for students who prefer “more traditional learning.” However, he emphasized that no students have expressed discomfort with using ChatGPT.
“I haven’t had a student to come up and talk to me and say, ‘OK, I’m really uncomfortable using this in my class,’” Kazama said. “I hope that I’m not intimidating students … I don’t think students should be forced to use AI, but it’s an important skill to learn.”
Kazama said that since he has brought AI into his courses, he has seen students’ grades improve, although he has not conducted a full study.
“It’s going very well,” Kazama said. “It’s gotten good reviews, and my test scores have honestly gone up, so they actually are learning it better.”
Psychology major Siobhan Mullins (26C) said the first time she was ever exposed to AI was in Kazama’s PSYC 110 course. Mullins found Kazama’s approach to AI in the classroom to be beneficial for her and other students.
“It creates an opportunity to engage with the material and ask questions if you’re confused about a certain area,” Mullins said.
Now, as a learning assistant in that same course, Mullins said some students find switching between classrooms where AI is banned and classrooms where the professor embraces AI to be difficult.
“They were just confused like, ‘Oh, is it really just we input a prompt to ChatGPT and interact with it, and then we answer a question?’” Mullins said. “It just felt like, ‘Oh, we’ve never done an AI homework before.’”
Mullins said Kazama’s incorporation of AI into the classroom improves students’ learning experiences by removing the temptation to rely on AI to complete assignments for them.
“Having an assignment where you’re directly engaging with AI is a lot more of an opportunity to do active learning, it eliminates that temptation,” Mullins said. “No one’s going to use AI to cheat on an AI assignment, in my opinion.”
According to Kazama, a key benefit of using ChatGPT in his classroom is financial accessibility for students. The PSYC 110 textbook can cost students over $100, making it unaffordable for some.
“I came into college as a low-income student, so I’ve always been very sensitive to pricing, and a lot of these AI systems are free or nearly free,” Kazama said.
While Kazama sees numerous benefits to using AI in the classroom, he also stressed the need for students to learn how to use the tool properly, noting the potential pitfalls of relying on AI.
“Students are under a lot of pressure to perform, and a lot of the misuse of AI comes from pressure on students to be perfect, to get the perfect score,” Kazama said. “If we could do things to reduce that need for perfection in all things and all domains, we would actually solve a lot of these issues.”
Kazama said he appreciates the freedom the University gives professors to implement AI into their classrooms. He envisions a future in which Emory students and professors learn and grow from AI use in education together.
“You’re here at Emory to learn and grow and become the next best version of yourself,” Kazama said. “Wouldn’t it be nice if we taught you what AI is, how AI is contributing to that and what are some of the deficits that it might be creating that you’re not even aware of?”
AI Teaching Assistants Enter Business School Classrooms
Across campus, at Emory’s Goizueta Business School, Wen Gu is using AI to improve classroom resource accessibility. In his courses, students not only learn about AI, but they can also learn from an AI chatbot.
Gu noticed an ongoing issue where students wanted to meet him or his TAs for office hours but could not find a time that worked for both parties.
“A lot of times, no matter when I scheduled my office hours and how many TAs that I have, there were always people who said, ‘Oh, when I wanted to get help, I couldn’t get help,’” Gu said.
Gu turned to AI to combat office hour inaccessibility for students. Last fall, after months of experimentation, he introduced an AI chatbot, trained on the content of his courses, to serve as a TA available 24/7 to his students.
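The article does not detail how the chatbot was built, but a common architecture for a course-specific assistant is retrieval-augmented generation (RAG): the system first retrieves the most relevant passages of course material, then instructs the model to answer only from them. The sketch below uses simple keyword overlap for retrieval and the OpenAI Python SDK; the course passages and model name are hypothetical.

```python
# One plausible design (an assumption; the article does not say how the
# chatbot was built): retrieval-augmented generation over course notes.
# Real systems typically use vector embeddings; keyword overlap keeps
# this sketch dependency-free.
from openai import OpenAI

client = OpenAI()

course_notes = [  # hypothetical course material, one passage per entry
    "Linear regression minimizes the sum of squared residuals.",
    "A p-value below 0.05 is conventionally called significant.",
    "Pivot tables summarize spreadsheet data by grouping rows.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by how many words they share with the question."""
    words = set(question.lower().split())
    ranked = sorted(course_notes,
                    key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def ask_ta(question: str) -> str:
    """Answer a student question using only retrieved course material."""
    context = "\n".join(retrieve(question))
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a course TA. Answer only from this "
                        f"course material:\n{context}"},
            {"role": "user", "content": question},
        ])
    return reply.choices[0].message.content

print(ask_ta("What does a pivot table do?"))
```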
When designing his courses, Gu said he experimented with different staffing models: only human TAs, only AI TAs and a combination of both. He found that combining human and AI TAs has been most successful for students.
After Gu implemented the TA chatbot, other professors in the business school began adopting Gu’s model and training their own TA chatbots. He estimated that over 15 other professors, including a computer science professor, are using the same technology for their courses. Last year, Gu, along with other business school professors, formed an AI task force to tweak the technology to suit individual courses.
“We’re trying to work with individual faculty members to see what is the best way for us to provide different tools that we develop to fit their individual need,” Gu said.
Gu said adjusting the technology for different subject areas required “a lot of customization.” As the technology spreads to other departments, he anticipates needing to further modify the AI TA.
Raphael Nelson (25Ox, 27B) helped develop an AI TA similar to the one used in Gu’s classroom for ISOM 351 with Professor in the Practice of Information Systems and Operations Management Steve Walton. He mentioned that students appreciated these TA chatbots because they improve access to classroom resources.
“It can be a nice, good way to let students ask questions without needing a TA to be ready at any moment of the night,” Nelson said. “Generally, AI is pretty well-suited to that task.”
Nelson also shared that AI TAs are specialized in the course content, allowing students to receive more effective answers to their questions.
“The benefit is any time of day you can ask it a question,” Nelson said. “You don’t have to be worried about being judged or looking stupid, if that’s a concern for someone, and you can be confident that it’ll reliably give you a good answer based in the course material.”
Nelson noted that while AI has been implemented in some of his courses, not all business school professors have incorporated the technology into their classrooms. However, he said many business professors are open to using AI in their classes.
“How can [AI tools] save you time, or make you more efficient, and, where can they not?” Nelson said. “School is a good place to learn that lesson, as long as you’re following class rules.”
Nursing Professor Uses AI Start-Up as Teaching Tool
Similar to Gu, Assistant Professor of Nursing Patti Landerfelt (96N, 19N) has harnessed AI to expand educational resources for students. Landerfelt partnered with Qvio, an interactive video platform that enables authors to embed questions for viewers directly into videos, to develop video lectures for students.
Landerfelt shared that Qvio inputs her class content into ChatGPT and generates 50 questions for students to answer. From there, Landerfelt matches each question with the corresponding part of the video lecture. When she publishes the video, students have access to both the chatbot and the questions. Landerfelt highlighted the “positive” student response to the chatbot.
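The generation step Landerfelt describes can be sketched briefly. The code below asks a model to draft multiple-choice questions from a lecture transcript as structured JSON; the model name, prompt and output shape are assumptions for illustration, not Qvio’s actual pipeline, and matching questions to video timestamps remains a manual step.

```python
# A sketch of the question-generation step (model name, prompt and JSON
# shape are assumptions, not Qvio's actual pipeline). Matching questions
# to video timestamps remains a manual instructor step.
import json
from openai import OpenAI

client = OpenAI()
lecture_transcript = "..."  # full text of one recorded lecture

reply = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": 'From this nursing lecture, write 50 multiple-choice '
                   'questions as JSON in the form {"questions": [{"q": '
                   '..., "choices": [...], "answer": ...}]}.\n\n'
                   + lecture_transcript,
    }])

questions = json.loads(reply.choices[0].message.content)["questions"]
for q in questions[:3]:
    print(q["q"])  # the instructor then pins each question to a timestamp
```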
“The students like it,” Landerfelt said. “They’ve given us some great feedback on what they do like, what they don’t like, what’s cumbersome, but for the most part, they wanted it.”
Landerfelt noted that students who interacted with the Qvio videos were more likely to answer related questions correctly.
“If they interacted with [Qvio] … then they had a statistically significant increase in their exam scores on these questions that directly related to the Qvio,” Landerfelt said.
However, Landerfelt emphasized that even though Qvio is an AI tool, integrating the questions still took a lot of time.
“It took quite a while to do it, especially considering that students want a video for every single lecture, and it’s taking me forever to do a 30-minute lecture,” Landerfelt said.
Landerfelt sees AI as a tool to enhance the classroom experience, but one that requires significant instructor input and oversight.
“When you’re an early adopter, you’re going to have a lot of hurdles,” Landerfelt said. “That’s kind of what I’ve experienced, but I don’t regret implementing this.”
AI, Not Your Professor: Business Professor Automates Grading
Like Landerfelt, Gu also uses AI to assist with the behind-the-scenes work of teaching. About a year ago, Gu introduced an AI-powered grading system designed to reduce wait time for grades and provide AI-generated feedback on student work.
Gu said many students liked the automated AI grading system.
“About 84% of the students prefer the type of feedback generated by autograder and its focusing on, not the negatives, but the part that can be improved,” Gu said.
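Gu does not publish the autograder’s internals, but an LLM-assisted grader in the spirit he describes, with improvement-focused feedback and the instructor reviewing every result, can be sketched as follows; the rubric, prompt and model name are hypothetical.

```python
# A hypothetical LLM-assisted autograder in the spirit Gu describes:
# feedback framed around what can be improved, with every grade still
# reviewed by the instructor. Rubric, prompt and model are assumptions.
from openai import OpenAI

client = OpenAI()

def autograde(submission: str, rubric: str) -> str:
    """Return improvement-focused feedback ending with a score line."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Grade the submission against the rubric. Frame "
                        "feedback around what can be improved rather "
                        "than what is wrong. End with 'Score: N/10'."},
            {"role": "user",
             "content": f"Rubric:\n{rubric}\n\nSubmission:\n{submission}"},
        ])
    return reply.choices[0].message.content

print(autograde("My regression analysis shows...",
                "Clear hypothesis; correct interpretation of results."))
```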
While the autograder is a helpful tool, the technology still makes errors, and Gu said he always goes through and checks the AI’s work. But Gu framed what some could see as a negative as another teaching moment: If students can catch grading errors in their assignments, they are still learning.
“If the grading has errors, I’m happy to double the points that has been mistaken back to you — not just to give you the correct grade, but also award you with double the points,” Gu said. “The evaluation of the actual grading is actually an extremely valuable learning opportunity.”
Gu emphasized that AI usage will only increase as students enter the workforce, especially for those entering the business world.
“We want to have business students as principled, world leaders,” Gu said. “How can you be a leader without understanding how technology will change or shape the future?”
Center for AI Learning Keeps ‘A Human in the Loop’ with AI.Data Lab
The Emory community is also engaged in AI research beyond the classroom. Launched in 2023, the Center for AI Learning offers several programs, including the AI.Humanity Seminar Series, public workshops and its AI.Data Lab.
Any Emory student can apply to join the AI.Data Lab, which runs each semester. Assistant Director of Programs at the Center for AI Learning Tyler Cook said that this fall, about 200 students applied for the 120-person program. Cook highlighted the strengths an interdisciplinary group of students brings to AI research. Students in the AI.Data Lab learn to implement the technology in projects ranging from assisting the nursing school to analyzing sports statistics.
“It’s really cool to see that kind of collaborative environment where it’s not just one viewpoint that’s dominating, but it’s a number of different viewpoints that are coming together and working together on these projects,” Cook said.
Cook emphasized the importance of teaching students to use AI effectively before they enter the workforce, which the center helps achieve through its programming.
“We want to help humans and people and Emory students figure out how to partner with these tools or how to collaborate with these AI tools in order to make sure that the AI isn’t doing all the work … that you still have a human in the loop,” Cook said.
Coming from an ethics and philosophy background, Cook highlighted fears that teaching students about AI use might increase their dependence on it.
“I do worry about a scenario where we’re just offloading a lot of our cognitive judgement or exercise or capabilities or activities onto these systems,” Cook said. “If we get to a scenario where we’re doing too much of de-skilling, critical thinking, all that diminishing, that’s the kind of scenario we want to avoid.”
Cook said he hopes that participating in the AI.Data Lab and the center’s other programs helps Emory community members understand the importance of using AI ethically and responsibly.
“You can either do that thinking for yourself and you can be a human, or you can just automate it and let an AI do all that thinking for you,” Cook said. “Which one of these do you want to be? Do you want to be just sitting around letting AI do all the thinking for you or do you actually want to be a critical thinking human?”
‘A Human-Centered Approach to AI’: The Future of AI Innovation in Higher Education
While all of their implementations of AI are distinct, there is one point on which Kazama, Gu, Landerfelt and Cook agree: People need to strike a balance between using AI and not letting it replace their work, thoughts and creativity.
“AI can be kind of a crutch, right?” Kazama said. “It can replace critical thinking. It can be used to take shortcuts. But every time you don’t use it in that way, you exercise that integrity muscle, and it gets stronger.”
An additional danger of improperly using AI in the classroom is information “hallucinations,” according to Kazama. AI hallucinations occur when a large language model, like ChatGPT, presents false information as fact. These malfunctions are common even in newer AI tools, with reported hallucination rates as high as 79% on some systems.
“If we’re going to use AI in the classroom, you need to understand where it’s strong, but also, even more importantly, where it’s weak, where the hallucinations going to be issues with citing peer-reviewed papers, sometimes it will make them up,” Kazama said.
Landerfelt also shared the dangers of students receiving incorrect information from outside AI models.
“When students go to ChatGPT, there’s no telling what’s going to come back out,” Landerfelt said. “They can put in a prompt, which is very minimal words, but the output is not necessarily congruent with what we’re teaching in the class.”
Gu also recognized the potential downsides of neglecting critical thinking in pursuit of complete automation, but added that he intentionally designed his virtual TA to limit students’ reliance on the technology.
“The future work is shaped by humans and AI working together in a lot of different areas,” Gu said.
Even amid concerns that AI may replace student-led thought, Kazama emphasized that it is important for students to know how to use AI and for professors to implement it effectively in Emory’s classrooms.
“If you’re going to allow AI, you should understand what are the benefits and what are the impacts of it on student learning, on many different domains,” Kazama said.
Likewise, Gu said using AI in the classroom is vital for students and for preserving the long-term relevance of higher education.
“I would strongly recommend my colleagues all start to think about how, or what exactly, is the core value that we’re providing to students?” Gu said. “We should think about, not just for our own survival, but really, for the point of education.”
Gu said he is interested in how AI will change both the classroom and Emory students’ futures.
“I believe there is incentive for us to actually change the pedagogy,” Gu said. “Maybe we can work with AI during your learning, so that when you start working, then you will be creating more value in your future workplace.”
Also looking to the future, Cook is optimistic about the Center for AI Learning’s and Emory’s roles in AI innovation and development. However, he warned that AI users must engage with the technology and not rely solely on pure automation.
“Right now we’re in this critical phase where it’s new and it’s cool and it’s advancing,” Cook said. “[I am a] big advocate for thinking about not just full speed ahead on innovation and automation, but, humans first, and a human-centered approach to AI.”

Ellie Fivas (she/her) (26C) is from Cleveland, Tenn., and is majoring in political science and history on the pre-law track. When she is not working for the Wheel, she works in prison education, leads a human rights club and works at the Emory Writing Center. In her free time, you can find her reading trashy romances and The New York Times, basking on the Quadrangle and doing crossword puzzles.
Siya Kumar (she/her) (28C) is a news editor at The Emory Wheel. She is from New Orleans, La., majoring in economics and creative writing on the pre-law track. Outside of the Wheel, Kumar is a market news analyst for the Emory Economics Investment Forum and a writer for the Emory Economics Review. She loves baking, reading and drinking coffee.