Casual A.I. use is not victimless

By Mikayla Keniry

Unavoidable A.I. use is commonplace online, forcing internet users to be complicit in the environmental and democratic degradation A.I. contributes to.

Warnings about the dangers of artificial intelligence (A.I.) are nothing new. Since OpenAI's chatbot ChatGPT sprang to popularity in late 2022, warnings about A.I.'s dangers to our environment, political landscape and social realities have continued to pour in.

However, internet users are no longer divided into those who religiously rely on A.I. to provide solutions to any task they encounter and those who strictly steer clear of it. With recent implementations of A.I. services across search engines and social media, A.I. is no longer a choice.

If you're an internet user, you are probably using A.I. every day without realizing it.

With Google's automatic deployment of "A.I. overviews" -- which users do not have the option to fully disable -- Meta's use of "Meta A.I." on Instagram and Facebook and Grok's ever-present role on X, A.I. has bled into our daily internet habits.

Whether you're using Google to find academic papers for your next essay, looking up the Instagram handle of one of your peers or simply scrolling online, you're bound to interact with some form of A.I. as long as you're using the internet.

Though we might not see it for ourselves, these seemingly minuscule interactions have real consequences.

According to Harvard Business Review, the environmental impacts of A.I. go beyond its initial production process and supply chain. Popular generative A.I. software like large language models (LLMs) consumes enormous amounts of electricity during training, subsequently expelling "hundreds of tons" of carbon emissions.

Importantly, the environmental effects of A.I. bleed into political concerns because A.I.-related environmental degradation remains localized to specific areas across the globe. This localization puts certain populations at risk of experiencing first-hand the environmental consequences of A.I.

The political consequences of A.I. are vast and cannot be condensed into a paragraph. Notably, A.I. has contributed to surges of political misinformation -- whether that be through fake audio clips, generated images or false statements -- rapidly spreading online.

Online misinformation has clear consequences for our democracy, as it leads citizens to cast their votes from uninformed perspectives, thereby compromising the agency behind their voting decisions.

Beyond undermining citizens' democratic agency, online misinformation also contributes to political polarization and the cultivation of extremist spaces online -- both of which A.I. remains complicit in as long as it contributes to the spread of online misinformation.

Evidently, using A.I. is a politically charged action: it reveals the privilege we have in not yet having to experience first-hand the localized environmental consequences of its ever-expanding carbon footprint while also revealing our complicity in driving up profits for a platform that actively fuels the degradation of democracy.

In Carol J. Adams' book on misogyny and meat consumption, The Sexual Politics of Meat: A Feminist-Vegetarian Critical Theory, she alleges that most people only eat meat because they do not have to witness the horrific production processes industrialized factory farming requires -- and if they did, they wouldn't be able to stomach it.

Though this interjection seems unrelated to A.I. usage, I think the two bear quite the resemblance. If internet users had to witness the deplorable environmental destruction and the real-world consequences of A.I. misinformation undermining democracy before using this software, would they still choose to use it so often? Would they be more upset that tech companies are forcing A.I. into our daily online activities?

After asking these questions, it is concerning to me that there is no clear path forward for internet users who want to free themselves from being complicit in A.I.-led destruction.

This leads me to wonder, with all of its political consequences, shouldn't internet users have the right to opt out of the forced A.I. software that has begun to appear on every platform they regularly use?

Computer scientist and cognitive psychologist Geoffrey Hinton's sentiments on A.I., expressed in his speech after winning the 2024 Nobel Prize in physics, remain relevant to this discussion.

As Hinton states, the population's safety and wellbeing cannot be prioritized "by companies motivated by short-term profits." When tech companies force their users to be complicit in the vast consequences of A.I., that choice is rooted not in our best interest. Instead, profit remains their top priority, and A.I. seems to be every company's newest money-making endeavour.

Engaging with A.I. is a choice with hefty political consequences -- and tech companies are skillfully leaving their users ignorant of these consequences and of the fact that their A.I. use can even be a choice at all.
