Views on "Artificial Intelligence"
Overview: My views are likely to be considered maximalist and adversarial: opposed to generative AI, questioning of other forms and implementations of AI, and lamenting the mess of misinformation and miscommunication wrought by the marketing and terminology surrounding related technologies. Consider me a hater if you must.
Preface: First of all, I believe it is necessary that we be nuanced and specific about which types of technologies we are talking about. Simply using the term "AI" colloquially is not enough. So much falls under this umbrella term that any conversation that does not specify the particular technology at hand is futile and destined to be rife with misunderstandings. Generative AI/LLMs are far different from Machine Learning, which is different from Artificial General Intelligence. These things can often be subsets of one another and can vary widely in their impact, output, and so on.
Preface II: I believe it is necessary to understand that everything discussed below is further complicated, catalyzed, or exacerbated from multiple angles, including capitalist society and power-hungry governments. Money and power play a large role, as do perverse incentives, lack of regulations, and so on.
Generative AI / LLMs
I am in full opposition, without reservation. It is a new paradigm, able to generate a fully complete "work", whether that be text, art, code, music, or otherwise. All else aside, I find this new paradigm inherently harmful on its own.
It is not a tool. It absolutely is not akin to a digital paintbrush or a digital illustration program as these still require an actual artist to perform the work. It is not simply a tool to be used or misused. It is mimicry of human works, expression, and skill, not a novel form of it, nor a tool to enhance it.
It does not democratize. It commodifies. The entire process of learning, developing, and growing is replaced, and so is the motivation to even attempt it. It short-circuits and flattens the entire process. It saps the will to express; it does not provide a new conduit for expression. It is expression dehumanized and productized in its most extreme form. Its use subtly indoctrinates people into viewing themselves as lacking worth and worthiness. It devalues expression. It normalizes the notion that time spent making things is wasted and that people do not deserve well-made, human-made things. The Democratizing software development section of tante's On Vibe Coding inspired this bullet point and articulates this, and much more, perfectly.
Time and time again it is trained unethically, without the consent of the owners of the data vacuumed up for that purpose, and then it turns around and mimics that which it ingests. This has gotten to the point where AI crawlers are unethically slamming websites with extra traffic and are built to subversively pretend not to be crawlers, while companies download obscene amounts of copyrighted material and treat everything accessible on the web as if it were public domain.
People prompting generative AI are not artists, coders, writers, etc. Put another way- prompting is not expressing, it is commissioning. Prompting for a work is commissioning a computer rather than taking the time to learn skills, ethically source other human-made materials (such as choosing particular public domain art for a project), or properly commission a real artist. It is an outsourcing of expression or labor, but without a human on the other side. It devalues humanity and its pursuit of learning, and provides only a substandard form of "learning". When I frame generative AI as something being commissioned, the point is that the AI attempts to replace and become the artist or worker itself. The user of the AI is not the one who becomes the artist or worker, the AI does, and that is a massive problem.
It signifies a lack of care. If you're not willing to have a carefully crafted visual expression on your album cover, why would I ever remotely consider you cared deeply about your music either?
It creates dependency. All of the large trained models are made by large companies that want to keep you in their ecosystem. All of the "best" systems are the ones that have been thoroughly trained. Attaching such a heavy dependency to coding and other skills is a massive mistake.
It creates a burden of work for others. Workers using these systems for their jobs have been found to submit low-quality work that they are either not skilled enough or not willing to double-check. Down the line, other workers receiving that work often have to redo much of it to fix all of the problems created by involving generative AI in the first place. It creates an environment of further mistrust and isolation in the workplace. This is one of the many ways generative AI is worker-hostile.
It is a grift/fraud enabler. It will enable a generation of pretenders. It saturates every field it touches with low-quality work. People get to pretend they are something they are not. They get to show off work that they had no hand in building and present it as their own. It enables those "group project" fools that let everyone else do all of the work and then take part of the credit. When I describe what I want an artist to make for me (commission process), I don't turn around and pretend I did the work!
It is notable to me that the defenders and owners of generative AI are often the same people who do not value art or the process of making it. This becomes clear whenever you hear executives talk about how artists should be happy that the difficult, tiresome process of "creating" has been "solved" by generative AI. There could not be a more fundamental misunderstanding.
If you absolutely won't let go of the weak "AI is another art advancement like photography" comparison- then even on those terms, and for any kind of its output, generative AI is the "extreme end-stage automation" version that goes too far. It is the threshold we should never cross- a finality of "withdrawal of the human" from expression, a dead end, a road past which there is nothing else. Unlike any other 'tool', generative AI is an occupying force in the land of human expression that should be resisted. Just because similar arguments and statements were made in the past about new technologies doesn't mean those arguments aren't valid for generative AI now. In some historical cases, those arguments were valid even at the time. The parallel of "criticism of something new and worry over its destruction of something that already exists" is fine, I think, but defending generative AI on the grounds that it receives similar criticisms is not. It must be considered on its own merits or lack thereof.
There's a particular intent behind generative AI that furthers the point- artists who painted or illustrated could see photography as another way to express, whereas generative AI isn't that. Its owners instead look at human expression and say "we'll chase down whatever is left that hasn't been automated and automate that too". Instead of a new way to express or capture, it's "aren't you thankful you don't have to do the expressing yourself anymore, just leave it to us".
There's an inherently dishonest nature to its creative mimicry. While mimicking another style isn't inherently bad (say, digital paint mimicking physical paint), generative AI is a twisted, corrupted form of mimicry that mimics the expression itself. It mimics the skill, the effort, the artist themselves, in addition to the medium. All of the life experience, emotion, and everything else distilled into real art is flattened into nothing. Even when it comes to mediums, it mimics in an underhanded way: plenty of its output pretends to be a real photo, and so on. It inherently misinforms. You won't find many real artists painting something photo-realistic and then pretending it's a photo. There's a nature of dishonesty to generative AI output and its users that real artists don't have.
Inevitability isn't an argument in favor of something. It may be a dispiriting reality, but accepting it is a defeatist "comply in advance" attitude. If we assume all technological advancements- or, more abstractly, all social and political changes- are inevitable, we have abdicated our agency in advance, advocating for our own obsolescence and for simply lying down and accepting things. This is a ridiculous line of thinking.
This might be the hottest of the takes: it doesn't matter if it's useful. Even if there are ways one discovers it to be helpful and uses it very narrowly as a piece of some of their own work or process, this will never outweigh every other criticism. It's ultimately the least important factor here. It will never be worth it. The ownership, the deskilling, the burdening of work on others, the environmental costs, the incentives, the non-consented training, the underhanded behavior are far more important, and by ignoring these you're tacitly approving them.
Make it shameful to use. Make it socially costly to use, and especially to bring into spaces meant for humans. Shun bands, fake visual "artists", and all sorts of skill-less frauds that use it. Make using it a shameful act, especially for anything creative. Depending on the situation, the only exception to be made here is for workers who are forced to use it because they need to eat; solidarity with workers includes them. However, I call on these workers to do what they can to make its use undesirable within their companies, through whatever subtle means they can find, even if it's as simple as always giving Copilot a thumbs down when it's right and a thumbs up when it's wrong.
Other criticisms
Against the anthropomorphism of any form of Artificial Intelligence, particularly LLMs. They are not and will never be humans, and they should never act like them. This is a danger to the public, and it has already been proven so in how many people interact with these systems in ways that endanger themselves and others: relationships, advice that begets violence, "yes man" confirmation of toxic feelings and ideas. The danger is not the false narrative of big scary sentience, but actual sentient beings developing feelings or misplaced trust toward these systems.
Against agentic AI. As these systems are not human and do not have the understanding of one, they should never act or make decisions on our behalf as agents.
The 'big scary sentient' AGI or Super Intelligence is a pro-industry distraction masquerading as criticism. Worry about sentient AI and apocalyptic events wastes time pointing in the wrong direction. It nearly does the industry's marketing for it, touting these systems as highly intelligent rather than the truth- they do not think at all, which is the more genuinely frightening fact given that they are simultaneously touted as intelligent. It ignores the real, immediate, tangible harms we already see from plenty of other angles, and it saps valuable resources, time, and energy fighting the wrong battle.
It's environmentally destructive and furthers environmental racism and classism. Debating whether a single prompt uses a particular amount of energy or water is a distraction. Without question, the water and energy use of the data centers being built for these purposes is massive. At the scale of these systems, their training, hosting, operation, and so on are a massively harmful force, the waste and other byproducts of which often affect poor and marginalized communities the most. Large energy contracts are being signed specifically for AI data centers. It's no secret.
Against algorithmic discrimination and computers as substitutes for responsibility, and the dissolution of human checks, balances, and watchdogs that comes along with that. Computers cannot be held accountable and thus should not be used to:
- Generate information out of thin air that is ascribed freedom of speech or press rights
- Make decisions, especially those involved in politics, welfare, management, medicine, or other life-and-death situations
- Take part in any kind of authoritative decision structure, such as those involved in warfare, surveillance, or other apparatuses of destruction, power, and policing
Machine Learning, other AI aspects... do you just hate it in every form?
If you have a simple markov-chain-based bot for humor purposes- one trained only on a very small set of output local to where it operates, that is very obviously a bot (labeled and unconvincing in its output), and that uses minimal energy- it's really not a threat in any of the ways given above. While it's pointless other than as a source of absurd humor and novelty, its harms are minimized and it's not going to take the job of a comedian.
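To make concrete just how trivial the kind of bot described above is, here is a minimal sketch of a word-level markov chain in Python. The corpus, function names, and in-jokes are all hypothetical placeholders; the point is only that the entire "model" is a small dictionary built from local text, with none of the properties criticized above.

```python
import random

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def babble(chain, start, length=10, seed=None):
    """Walk the chain from `start`, picking each next word at random."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no word ever followed this one
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# A tiny, obviously-local "training set": a handful of channel in-jokes.
corpus = "the printer is haunted the printer is fine the coffee is haunted"
chain = build_chain(corpus)
print(babble(chain, "the", length=6, seed=1))
```

The output is absurd recombinations of whatever the bot has seen, which is the whole joke: the entire system fits in a few dozen lines, is transparently a bot, and consumes effectively no energy.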
Narrowly-tailored deep learning trained on only ethically-sourced data, built for a specific positive purpose, like folding proteins for therapeutic discovery in medicine can be a valid approach that still falls under the "AI" umbrella. There are a number of uses that fall in this kind of category, though some may be arguable regarding their usefulness, marketing, and otherwise.
Even given what seem to be good use cases for machine learning / deep learning and similar systems, it must be noted that situations that involve these still must follow some strict guidelines to be considered ethical in my view:
- See the section above on algorithmic discrimination for considerations. Warfare, surveillance, and similar are not benevolent uses of this technology.
- It should be human-guided and checked by individuals with enough knowledge and skill to do so
- The biases in the input/output should be considered and intently looked for
- More abstractly, the humanity of those involved and those affected by the eventual output should be a top consideration
- If the work it performs is later used to make important decisions, other parallel work should be performed to ensure the results are actually useful and not misleading
- There should be multiple layers of human checking and intervention, with company cultures and environments that explicitly welcome complaints when bias, misuse, etc. are seen
Exceptional posts on AI worth reading
- Tackling the supposed use of AI for disability and disability-washing of AI
- Other topics
This page was last edited 10/07/2025.