Views on "Artificial Intelligence"
Important prefaces
My opposition is on philosophical, ethical, and moral grounds. Utility is tangential, questionable, and ultimately not important. I consider myself a conscientious objector to these systems. They are certainly not useful enough to offset all other concerns, and often they (and their owners) feign usefulness, while those convinced through experience turn out to have been fooled. In other words, one of the most insidious parts of LLMs is their convincing appearance of usefulness, which is an illusion, and a delusion of many of their users, creating more work for, and less trust from, those around them.
My views are likely considered maximalist and adversarial in their opposition to generative AI, questioning of other forms and implementations of AI, and lamenting of the mess of misinformation and miscommunication wrought by marketing and terminology surrounding related technologies. Consider me a hater if you must. However, it is my wish that this position, currently considered maximalist, becomes the moderate one, and I'll leave the idea of what I wish were the maximalist position to the reader. Maybe an image of a Luddite swinging a hammer would convey the idea.
On being specific: I believe it is necessary that we be nuanced and specific about what types of technologies we are talking about. Simply using the term "AI" colloquially is not enough, and this is a massive misstep (both intentional and not) in public discourse, journalism, and the industry's marketing itself. There is so much that falls under this umbrella term that any conversation that does not specify the particular technology at hand is futile and destined to be rife with misunderstandings. Generative AI/LLMs are different from Machine Learning, which is different from Artificial General Intelligence. These things are often subsets of one another and vary widely in their impact, output, and so on.
These systems do not exist in a vacuum. I believe it is necessary to understand that everything discussed here is further complicated, catalyzed, or exacerbated from multiple angles, including capitalism and structures of power. Money and power play a large role, as do perverse incentives, lack of regulation, and so on. Even in an ideal society, I believe some of these criticisms would still survive; it's just that the dynamics of power, capital, and survival would be entirely different, and power in the hands of the people would steer the entire trajectory and priority of these technologies in other directions.
Criticism must be careful not to advocate other human-hostile structures. While it is important to recognize and oppose the unethical vacuuming of data that LLM companies are involved in, including ingestion without consent of content to which they do not have the rights, we absolutely must not advocate in turn for tightening of copyright law, which in its current form is already overly strict and draconian. If one does take the angle of critique that considers the licensing and rights of works, it must be with respect to protecting small creators, not corporations who are already overly favored by existing law. The power must tip towards the people. We absolutely must not get in bed with one form of corporate misbehavior to critique another form. To do so is to simply trade one evil for another.
Generative AI / LLMs
I am in full opposition, without reservation. It is a new paradigm, able to generate a fully complete "work", whether that be text, art, code, music, or otherwise. All else aside, I find this new paradigm inherently harmful on its own.
It is not a tool. It absolutely is not akin to a digital paintbrush or a digital illustration program, as those still require an actual artist to perform the work. It is not simply a tool to be used or misused. It is mimicry of human works, expression, and skill- not a novel form of them, nor a tool to enhance them.
It does not think. It is not intelligent. It does not matter that it exists under an umbrella of technology that uses the term intelligence; that does not make it so. Misinformation is inherent to these systems. If one states something correct, it is not because reason got it to that point, nor because it performed a rigid, repeatable calculation like a math equation. This ties in with a later point about anthropomorphization: giving these systems a human-mimicking speech/dialogue structure, human-mimicking behavior (emojis and such), and other similar facets subversively builds trust in users that what they're "talking to" is a system that thinks. LLMs are decidedly not this; they just fake it, and their owners have designed them to do so intentionally.
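To make the contrast concrete, here is a minimal, hypothetical Python sketch (the toy token distribution is invented for illustration, not taken from any real model): a calculation is repeatable and grounded in its inputs, while this kind of generation only samples from a distribution of statistically plausible continuations, with no step anywhere that checks truth.

```python
import random

# A rigid, repeatable calculation: same inputs, same output, every time.
def area(width: float, height: float) -> float:
    return width * height

# A toy stand-in for LLM generation: the system only yields a probability
# distribution over possible next tokens and samples one. Nothing here
# verifies whether the continuation is true. (Distribution is invented.)
next_token_probs = {
    "Paris": 0.80,   # statistically likely, and happens to be correct
    "Lyon": 0.15,    # statistically plausible, and wrong
    "Narnia": 0.05,  # unlikely, but still possible to sample
}

def sample_next_token(probs: dict[str, float]) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(area(3.0, 4.0))                       # always 12.0
print(sample_next_token(next_token_probs))  # usually "Paris", sometimes not
```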
The fact that it is not intelligent or thinking in any way helps explain why it fails the way it does, why it has some predictable patterns, why it's never good at humor, and so on. It's also why I believe that those who are convinced it is useful are delusional. It's good at mimicry, and that's enough to fool many. Those who think they are improving their writing, or emails, or code, or whatever else by using these systems are just bouncing things off of an average-to-bad idea generator. It's actively making what they do worse, while convincing them it is better. What a fresh hell we're living in.
It does not democratize. This is an industry lie. It commodifies under the guise of democratization. The entire process of learning, developing, and growing is replaced and so is the motivation to even attempt it. It short-circuits and flattens the entire process. It saps the will to express, it does not provide a new conduit for expression. It is expression, or work, dehumanized and productized in its most extreme form with no regard to quality. Its use subtly indoctrinates people into viewing themselves as lacking worth and worthiness. It devalues expression. It normalizes the notion that time is wasted on making things and that people do not deserve well-made, human-made things. The "Democratizing software development" section of tante's "On Vibe Coding" inspired this bullet point and articulates this and much more.
Time and time again these systems are trained unethically, without the consent of the owners of the data vacuumed up for that purpose, and can then turn around and mimic that which they ingest. This has gotten to the point where AI crawlers are unethically slamming websites with extra traffic, being made to subversively pretend not to be crawlers, while companies download obscene amounts of copyrighted material and treat everything accessible on the web as if it were public domain. While this is not an invitation to advocate for draconian copyright law, it is an invitation to advocate for extreme regulation, if not outlawing, of systems and companies that do this.
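As an aside for site operators: below is a minimal sketch of one common countermeasure, refusing requests from self-identified AI crawlers. The user-agent substrings are assumptions based on publicly documented crawler names, and, as noted above, a crawler that disguises its user agent slips right past this check- which is exactly the subversive behavior being criticized.

```python
# A minimal WSGI middleware sketch that refuses requests from
# self-identified AI crawlers. The substrings below are assumptions
# based on publicly documented crawler names; disguised crawlers
# will not be caught by this check.
AI_CRAWLER_SUBSTRINGS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def block_ai_crawlers(app):
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if any(name in user_agent for name in AI_CRAWLER_SUBSTRINGS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawlers are not welcome here.\n"]
        return app(environ, start_response)  # pass everything else through
    return middleware
```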
Arguably one of my most important points here: prompting is not expressing, it is commissioning. People prompting generative AI are not artists, coders, writers, etc. Prompting for a work amounts to commissioning a computer rather than taking the time to learn skills, ethically source other human-made materials (such as choosing particular public domain art for a project), or properly commission a real artist. It is an outsourcing of expression or labor, but without a human on the other side. It devalues humanity and its pursuit of learning, and provides a substandard form of "learning" in its place. When I frame AI as something someone commissions, the point is that the AI attempts to replace/become the artist or worker itself. The user of AI does not become the artist or worker; the AI does, and that is a massive problem.
It is an architecture of subsumption and a threat to privacy. In addition to all of its ingested data, the data obtained from its use by millions of humans cannot reasonably be expected to be kept private, given an industry that lives for making money through the brokering of data. Now the industry has convinced millions of people to volunteer all of their requests, sometimes their deepest thoughts, questions, and fears, to a harvester ready to ingest it all. This is an absolute dream of amassed data for structures of power (governments, corporations, etc.) to abuse and provides a convenient and novel cover for doing so. If you do not think a system creating a "fun novelty" that converts portraits into mediocre art, that asks people to upload photos of their face, will someday be used for facial recognition surveillance- you're not thinking deeply enough about this. The fact that there are search engines and browsers built on this tech now is another way anonymity on the net is being eroded- by getting people used to browsing in trackable ways.
It props up a broken tech industry and the vultures becoming rich off of its pieces. One look at how the industry passes billions around in circles between companies to appear to be growing, investing, and innovating makes it clear this is a farce to continue carving up what little is left for the rich.
Its use signifies a lack of care. If you're not willing to have a carefully crafted visual expression on your album cover, why would I ever remotely consider you cared deeply about your music either?
Its use creates dependency. All of the large trained models are made by large companies that want to keep you in their ecosystem. All of the "best" systems are the ones that have been thoroughly trained. Attaching such a heavy dependency to coding and other skills is a massive mistake.
It is a counter and an intentional threat to worker power. Its use creates a burden of work for others. Workers using these systems to create things for their jobs are found to submit low-quality work that they are either not skilled enough or not willing to double-check. Down the line, other workers receiving this work often have to re-perform much of it to fix all of the problems generated by involving generative AI in the first place. It creates an environment of further mistrust and isolation in the workplace. This is one of the many ways generative AI is worker-hostile.
It is a grift/fraud enabler. It will enable a generation of pretenders. It saturates every field it touches with low-quality work. People get to pretend they are something they are not. They get to show off work that they had no hand in building and present it as their own. It enables those "group project" fools who let everyone else do all of the work and then take part of the credit. When I describe what I want an artist to make for me (the commission process), I don't turn around and pretend I did the work!
It is notable to me that the defenders and owners of generative AI are often the same people who do not value art or the process of making it. This becomes clear whenever you hear executives talk about how artists should be happy that the difficult/tiresome process of "creating" is "solved" by generative AI. There could not be a more fundamental misunderstanding of creation itself.
If you absolutely won't let go of the weak "AI is another art advancement like photography" comparison- then even on those terms (and this holds for every kind of its output), generative AI is the "extreme end-stage automation" version that goes too far. It is the threshold we should never cross- there's a finality to the withdrawal of the human from the expression, a dead end, a road past which there is nothing else. Unlike any other 'tool', generative AI is an occupying force in the land of human expression that should be resisted. Just because similar arguments and statements were made in the past about new technologies doesn't mean those arguments aren't valid for generative AI now; in some historical cases, those arguments were valid even at the time. The parallel of "criticism of something new and worry about its destruction of something that already exists" is fine, I think, but defending generative AI on the grounds that it receives similar criticisms is not. It must be considered on its own merits or lack thereof.
There's a particular intent behind generative AI, too, which furthers the point- artists who painted or illustrated could see photography as another way to express, whereas generative AI isn't that. Its owners instead look at human expression and say "we'll chase down whatever is left that hasn't been automated and automate that too". Instead of a new way to express or capture, it's "aren't you thankful you don't have to do the expressing yourself anymore, just leave it to us".
There's an inherently dishonest nature to its creative mimicry. While mimicking another style isn't inherently bad (say, digital paint mimicking physical paint), generative AI is a twisted, corrupted form of mimicry that mimics the expression itself. It mimics the skill, the effort, the artist themselves, in addition to the medium. All of the life experience, emotion, and everything else distilled into real art is flattened into nothing. Even when it comes to mediums, it mimics in an underhanded way: plenty of its output pretends to be a real photo, and so on. It inherently misinforms. You won't find many real artists painting something photo-realistic and then trying to pass it off as a photo. There's a nature of dishonesty to generative AI output and its users that real artists don't have.
Inevitability isn't an argument in favor of something. It may be a distraught reality (so, inevitability in its bleakest terms), but it is a defeatist "comply-in-advance" attitude. If we assume all technological advancements, or more abstractly, social and political changes, are inevitable, we have abdicated our agency in advance and are advocating for our own obsolescence, saying that we should just lie down and accept things. This is a ridiculous line of thinking. Would one say the same of the perceived inevitability of other things, like fascism? Certainly not. If the reality is that, given a particular society and power structure, a particular technology will inevitably be released- that still is no argument to use it, learn it, or comply with its owners' desires. To think otherwise is to consider all resistance to anything that ever feels inevitable futile. It is worthwhile alone to save yourself by not using these systems, even if you can't prevent others from doing so. Resistance is never useless, no matter how overwhelming the tide coming against you feels.
Make it shameful to use. Make it socially costly to use, and especially to bring into spaces meant for humans. Shun bands, fake visual "artists", and all sorts of skill-less frauds who use it. Make using it a shameful act, especially for anything creative. Depending on the situation, the only exception to be made here is for workers who are forced to use it because they need to eat; solidarity with workers includes them. However, I call on these workers to do what they can to make its use undesirable within their companies, through whatever subtle means they can find, even if it's as simple as always giving Copilot a thumbs down when it's right and a thumbs up when it's wrong. Erode their power from the inside out if you must.
Other criticisms
Against the anthropomorphism of any form of Artificial Intelligence, particularly LLMs. They are not and will never be humans, and they should never act like them. This is a danger to the public, and it has already proven to be one in how many people interact with these systems in ways that endanger themselves and others: relationships, advice that begets violence, "yes man" confirmation of toxic feelings and ideas. The danger is not the false narrative of big scary sentience, but actual sentient beings developing feelings for, or misplacing trust in, these systems.
Against agentic AI. These systems are not human and do not have the understanding of one; they should never act on our behalf or make decisions on our behalf as agents.
The "big scary sentient" AGI or Super Intelligence is a pro-industry distraction masquerading as criticism. This worry about sentient AI and apocalyptic events wastes time pointing in the wrong direction. It nearly does the industry's marketing for them by touting these systems as highly intelligent. The truth- that they do not think at all- is more genuinely frightening while they are simultaneously being marketed as intelligent. It all ignores the real, immediate, tangible harms that we already see from plenty of other angles and saps valuable resources, time, and energy fighting the wrong battle.
It's environmentally destructive and furthers environmental racism/classism. The debate about whether a single prompt uses a particular amount of energy or water is a distraction. Without question, water and energy use are massive for the data centers being built for these purposes. At the scale of these systems, their training, hosting, operation, and so on are a massively harmful force, whose waste and other byproducts often affect poor and marginalized communities the most. There are large energy contracts happening specifically for AI data centers. It's no secret.
Against algorithmic discrimination and computers as substitutes for responsibility, and the dissolution of human checks, balances, and watchdogs that comes along with that. Computers cannot be held accountable and thus should not be given the direct power to:
- Generate information out of thin air that is ascribed freedom of speech or press rights
- Make decisions, especially those involved in politics, welfare, management, medicine, or other life-and-death situations
- Be used in any kind of authoritative decision structure, such as those involved with warfare, surveillance, or other apparatuses of destruction, power, and policing
Machine Learning, other AI aspects... do you just hate it in every form?
If you have a simple Markov-chain-based bot for humor purposes- one trained only on a very small set of output local to its location, very obviously a bot (labeled and unconvincing in its output), using minimal energy, and so on- it's really not a threat in any of the ways given above. While it's pointless other than as a source of absurd humor and novelty, its harms are minimized and it's not going to take the job of a comedian.
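For a sense of how small and transparent such a bot is, here's a minimal sketch of a bigram Markov-chain generator (the tiny corpus is invented for illustration): it stores nothing but word-to-word transitions from its local corpus, and its output is obviously machine babble.

```python
import random
from collections import defaultdict

# Build a bigram table from a tiny, local corpus: each word maps to the
# list of words that have followed it. (Corpus invented for illustration.)
corpus = "the channel bot says the weather says hello the bot laughs".split()
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def babble(start: str, length: int = 8) -> str:
    # Walk the chain, picking each next word at random from the words that
    # followed the current one in the corpus. No model, no "thinking",
    # just lookups in a small table.
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(babble("the"))  # e.g. "the weather says hello the bot laughs"
```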
Narrowly-tailored deep learning trained only on ethically-sourced data and built for a specific positive purpose, like folding proteins for therapeutic discovery in medicine, can be a valid approach that still falls under the "AI" umbrella. There are a number of uses that fall into this kind of category, though some are arguable in their usefulness, marketing, and otherwise.
Even given what seem to be good use cases for machine learning / deep learning and similar systems, any situation involving them must still follow some strict guidelines to be considered ethical in my view:
- See the section above on algorithmic discrimination for considerations. Warfare, surveillance, and similar are not benevolent uses of this technology.
- It should be human-guided and checked by individuals with enough knowledge and skill to do so
- The biases in the input/output should be considered and intently looked for
- More abstractly, the humanity of those involved and those affected by the eventual output should be a top consideration
- If the work it performs is later used to make important decisions, other parallel work should be performed to ensure the results are actually useful and not misleading
- The human and their livelihood should be prioritized above all else, and this may sometimes mean making decisions that contradict the conclusions gleaned from a particular output. The output of the system must not be the arbiter or be given the ultimate authority (see the sketch after this list).
- There should be multiple layers of human checking and intervention, with company cultures and environments that explicitly welcome complaints when bias, misuse, and the like are seen
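As one concrete shape the arbiter guideline can take in software, here is a minimal, hypothetical sketch (all names are invented): the system's output is recorded only as a suggestion, and nothing is finalized without a named, accountable human whose decision may contradict that suggestion.

```python
from dataclasses import dataclass

# A hypothetical human-in-the-loop gate: the model's output is stored as a
# suggestion only, and the action taken is whatever the named human reviewer
# decides, including decisions that contradict the suggestion outright.
@dataclass
class ReviewedDecision:
    model_suggestion: str
    reviewer: str       # a named, accountable human, never "the system"
    human_decision: str
    rationale: str

def finalize(suggestion: str, reviewer: str, decision: str, rationale: str) -> ReviewedDecision:
    if not reviewer:
        raise ValueError("No decision without an accountable human reviewer.")
    return ReviewedDecision(suggestion, reviewer, decision, rationale)

# The human may overrule the model entirely:
record = finalize(
    suggestion="deny claim",
    reviewer="j.doe",
    decision="approve claim",
    rationale="The flag is an artifact of missing data, not actual risk.",
)
print(record)
```

The point of the structure is that the model's suggestion is just one field in a record owned by a person, never the trigger for an action by itself.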
Exceptional posts on AI worth reading
- Tackling the supposed use of AI for disability and disability-washing of AI
- Other topics
This page was last edited 11/22/2025.