The Emory Wheel


Regulate AI before it regulates us

Content Warning: This article contains a reference to suicide.

Artificial intelligence (AI) is embedded in every aspect of our modern lives: it writes our essays, trades our stocks, diagnoses our diseases and curates the media we consume. Emory University’s Spring 2026 Course Atlas offers AI-themed electives in almost every discipline, from computer science to finance to English. With roughly 90% of college students reporting that they use AI tools, and with AI built into courses themselves, it is clear that the technology has moved beyond the realm of science fiction and into reality. Emory must play its part in deciding the future of this transformative technology, a future that increasingly threatens our own.

As the lines between human and algorithm blur, we are at a critical juncture: Emory students must recognize AI’s exponential growth, and our community should advocate for government regulation that limits AI’s capacity for exploitation and harm while fostering its potential to advance innovation and the public good. If Emory is to train responsible technology leaders who will usher humanity forward, the time for passive spectatorship has passed.

AI’s rapid advancement has transformed many industries for the better. For example, in healthcare, machine learning models can detect cancers earlier than humans and aid in drug development. In environmental science research and response, predictive algorithms help track pollution patterns and map wildfire spread. Additionally, AI manages tedious data analysis and streamlines administrative processes. These innovations demonstrate the technology’s capacity to improve lives. 

However, without guardrails, the same technologies that diagnose diseases can be weaponized for profit and can exacerbate already pressing problems. The electricity needed to power vast networks of large language models, a type of generative AI, may prove unsustainable as climate change progresses. Researchers at the University of California, Berkeley project that by 2028, AI could consume as much electricity as 22% of all U.S. households. Much of this energy currently comes from nonrenewable sources, which further contributes to carbon emissions and environmental strain.

Before using ChatGPT, students should be aware of the impact of even a single query. Data centers require vast amounts of water for cooling during processing, with large centers using up to 2.1 million liters of fresh water a day. As our supply of natural resources rapidly diminishes, we need to ensure that data centers transition to renewable sources of energy. Technology is meaningless without a thriving Earth to use it in.

Ethical failures in AI management also take a toll on users’ emotional well-being. Chatbots that exploit loneliness foster one-sided relationships that have proven harmful to users’ mental health, even before sexual content enters the picture. For instance, Character.AI currently faces a lawsuit alleging that a chatbot played a role in a 14-year-old boy’s suicide. When machines are designed to imitate affection without regulation, the boundaries between user and algorithm dangerously blur.

Beyond emotional parasitism, generative AI also has a nonconsensual pornography problem: it has been used to humiliate and blackmail celebrities, politicians and journalists. There has been a disturbing rise in increasingly realistic AI websites and applications that use image generation to create fake nude photos of a person from nothing more than a photo of their face. A reported one in four teenagers say they have seen content generated by these tools, often depicting someone they know. It is disconcerting to imagine an unregulated digital landscape in which sexual crimes and revenge pornography can occur at the tap of a button, further perpetuating the internet’s already rampant misogynistic content. While existing federal legislation aims to stop the distribution of coercive imagery of minors, the government should target not just the exchange of such images but their generation.

As AI redefines society, regulation must safeguard democracy. Without enforceable rules, AI’s rapid evolution will continue to outpace humanity’s ability to control it. To properly regulate AI, governments at both the state and federal levels need to demand transparency and accountability from technology companies. They should require companies to disclose the data used to train their models, the energy consumed to power them and the safeguards in place to protect users’ privacy and the environment. They should also establish independent oversight bodies that audit AI systems before release to ensure they meet ethical benchmarks.

Change must also occur on a local level: Emory supports programs such as the AI.Humanity Initiative and the Center for AI Learning. Additionally, the University joined the U.S. AI Safety Institute Consortium, demonstrating an acknowledgment of the presence of AI on college campuses and a commitment to building a smarter and safer future. 

However, even at an institution that claims to prioritize ethical AI usage, many students use ChatGPT and other generative AI bots for their coursework or to simplify their daily lives without recognizing the risks that come with reliance on such tools. To address this, students can start by reading OpenAI’s Terms of Use before sharing personal information, attending lectures at the Center for AI Learning to hear experts discuss ethical AI development and supporting organizations like the Algorithmic Justice League, which fights bias in AI systems. 

Furthermore, members of the Georgia Public Service Commission are up for election on Nov. 4. Democratic challengers Alicia Johnson and Peter Hubbard, who advocate for green energy, are running for seats responsible for approving power plants, renewable energy initiatives and data center construction. With low projected voter turnout, even a few votes can make a difference by ensuring that the commission enacts much-needed reforms. This is particularly important as the DeKalb County Commission recently extended a moratorium, or temporary suspension, on data center applications until Dec. 16.

Technological progress should not come at the expense of humanity or democracy, but complement them. AI has the potential to further revolutionize our world, but it may do more harm than good if we fail to properly regulate it. It is time to stop asking what AI can do and start focusing on what it should do. 

If you or someone you know is having thoughts of self-harm or suicide, you can call Student Intervention Services at (404) 430-1120 or reach Emory’s Counseling and Psychological Services at (404) 727-7450 or https://counseling.emory.edu/. You can reach the Georgia Suicide Prevention Lifeline 24/7 at (800) 273-TALK (8255) and the Suicide and Crisis Lifeline 24/7 at 988.


The above editorial represents the majority opinion of The Emory Wheel’s Editorial Board. The Editorial Board is composed of Editorial Board Editor Carly Aikens, Shreyal Aithal, Ananya Jain, Mira Krichavsky, Wayne Liang, Eliana Liporace, Pierce McDade, Niki Rajani, Robyn Scott, Noah Stifelman, Ilka Tona, Meiya Weeks and Crystal Zhang.