Machine utterance: what does it mean for humanity to say something?
If words are written, if sounds are uttered, has something still been *said*? How do large language models change who we are and what it means to *do* anything? And how does the overall economy of riskless LLM use shift honesty, truth, our climate, and the future of the world?
Alberto Cairo posted a provocation on LinkedIn: “The rise of GenAI is the corollary of an era that values talking a lot without saying much.”
In response, someone argued that AI may have “jagged edges” but is “certainly changing how things are done (for the better).”
Perhaps. I’m not sure that “for the better” here can be easily unpacked. I’m skeptical (to put it lightly). So instead, I want to interrogate Alberto’s words a bit more: what does it mean to “talk a lot without saying much”? And why does the shift from what “saying” something used to mean to what it means now matter so much to unpack?
Generative AI is changing who we are: it is an ontological event
Now, I’ve already written (or should I say “said” here?) about how I think generative AI is shifting the roles we play as humans: from having thoughts to managing them. But I think it could be argued that we do still learn something when we leverage AI tools. We still “have thoughts,” so to speak. Human intellect is impressive, and I’d argue that we aren’t not learning when we use AI. But what we learn likely changes. So perhaps that framing isn’t quite right yet; I’m still working through my thoughts on this all the time.
So instead, perhaps what irks me is the impact of the change from doing to managing. What do we get to claim is ours anymore?
My major provocation is this: if agent-based AIs do the talking, who is saying anything at all? And more importantly: what does that make us? Who are we now if we cease to say things of our own?
Consider: if I ask someone to give a speech at my wedding, I’m not the one who says something; they are. I haven’t really said anything, save for asking them to speak. If I have an assistant who works for me and I ask them to ghostwrite a book for me, again, I haven’t said anything. The ghostwriter did.
But when we use AI agents to say, work, express, and do things - people readily take credit. Why? Why do we get to take credit? Is it simply because there isn’t another human we are taking credit from? Why does asking, “write this email for me” mean we get credit for the email, if another entity (AI or person) wrote it? (And of course: we are taking credit from other humans when we use generative AI because those models do not exist outside of an ecosystem of theft.)
So I’d argue that users of AI agents aren’t truly saying much of anything anymore. To have a thing written isn’t the same as saying something. Machine utterance isn’t a human speaking.
To my older point: users of AI agents are managers and requesters then, not doers. They’re all petit executives of their own little enterprises.
One might learn things through the use of an AI agent, but claiming to have written a paper or code or whatever is dishonest and opaque. You may have, to some degree, contributed to the production of an artifact; your management and validation may have been involved. But supervisors, CEOs, and QA testers aren’t the same as engineers, designers, authors, and creators.
Yet despite my protests and provocations, I firmly believe that we are witnessing a definitional shift at a philosophical level. Generative AI and agent-based AI are changing what it means to “say” and “do” things. That, in turn, shifts who we are.
And I, quite realistically, don’t think that I can defeat this new shift in language. One thing that humans still have left to do, and are still doing, is redefining what it means to speak and who we are (perhaps unfortunately so). I, of course, deeply oppose the trend where generative AI gets to shape who we are becoming. But again, I am sure that I won’t win this battle. We already have “authors” who have never written whatever is in their own books. And this trend will likely continue so long as people profit from it.
Eytan Adar posted a provocation on Bluesky in response to a post of mine, based on that old blog post “Real programmers don’t eat quiche.” The idea is that policing the tools and methods people use, taken to its logical extreme, is really just culturally gatekeeping a field of practice. His post was meant to challenge my critiques of LLM usage by researchers, where I asked, “what is the point?”
I have two things I’d like to say against the “Real programmers don’t eat quiche” critique in the context of LLMs and our identities as creators. First, there does come a point where words matter. A “writer” should perform the act of writing, and a “programmer” should perform the act of programming. What it means to program changes based on the conditions, our tools, our outputs, etc. So does someone still program if they ask someone else, or something else, to program for them? Maybe. I’d argue that there comes a point where you aren’t a programmer anymore but a manager-to-the-programmer instead.

My second point is that I, on socio-cultural grounds, actually do oppose the idea that a “writer” can claim the title if they use an LLM. I do want to gatekeep some identity a bit. Even aside from definitions and the importance of words (or whatever), I cannot have a reasonable conversation about “writing” with a human “writer” who hasn’t written anything other than prompts to a machine. We are socially and culturally divided. What “writer” means to them has little connection to what it means to me. We don’t share techniques, thoughts, motivations, influences, histories, and experiences. Perhaps they can claim “writer” and I am some older thing now (a prehistoric writer, a dinosaur of sorts). But whatever I am isn’t what they are.
As the immortal Ursula K. Le Guin said, the thieves, posers, and villains of writing are the ones who cannot “appreciate their predecessors and fellow-workers in the saltmines of literature.” Asking me to acknowledge a writer’s manager as a writer themselves is a social and cultural offense to the craft of writing. The driver with the whip and the slave are not the same.
I actually left industry once it became clear that my career path forward would likely involve management. “Individual contributors” are expensive, and the role seemed high-risk to maintain as a specialist, whereas management has the flexibility to move into other roles. This is because the identity of a practitioner with deep expertise is much different from that of one who dictates and delegates their agendas to others.
So yes, as anti-social as this stance is: I cannot accept that my creation and someone else’s (who uses LLMs) are even remotely related and worth speaking about using the same terminology.
In this definitional shift, “saying” used to mean that the thoughts, ideation, framing, motivations, editing, validation, expression, construction of language, and execution are performed by a human person who can be held responsible for all of the above and may take credit for all of the above. Things are different now.
So perhaps my next thoughts rest on the ever-relevant critical questions: who benefits from these new identities? And how? What are the material conditions, realities, and systems that support and incentivize the new ways of being that generative AI and large language models have given birth to?
What is the economy of modern artificial intelligences?
In a sense, there used to be risk in writing. Writing and saying used to have an economy of give and take: you said things if it was worth it, if you could get away with it, to bring something into the world for your own benefit (or the benefit of others). We would speak to enact change, even if it carried risk. This meant that we would sift and sort; the economy of writing, much like the economy of creation and craft, demanded that things be done well. The cost of poor creation was wasted time, wasted effort, and potential responsibility for damages or losses caused. Time, in particular, has always been one of the greatest filters of human production: we had to believe that creation would be worth our time. Responsibility, in tandem with time, was our primary method of refinement.
But now “saying” has drifted to mean that any one of the things that comprise the collective act of “saying” may or may not involve any human responsibility, time, or labor at all. Thoughts, ideation, framing, motivations, editing, validation, expression, construction of language, and execution may all be performed by machine agents. And because of that, we have lost responsibility. We have no more economy of creation. Writing something no longer has risk because it no longer requires anything of us.
The great selling point of generative AI, which is built on mountains and mountains of existing human creation, is that you no longer need to pay any price, save for $9.99 per month for a premium subscription, in order to create. Enough “hard” creation has already been done; now human labor can be re-created with ease. Quite a tempting selling point!
Interestingly, we have still managed to maintain an ethos where we can take credit for “saying” things, despite possibly having said nothing of our own at all and not being responsible for what we’ve said, either.
And this leads us to the reason I believe generative AI may be the tool that fully erodes our honesty and trust in digital spaces. Crystal Lee’s research comes to mind: she has an excellent piece on how viral visualizations were used to mislead and create disinformation. What has happened in our modern age is that when we have economies without risk (again, a fantastical proposition) and tools that enable those economies (tools that divorce humans from responsibility), things like lying and destroying become worth it.
Generative AI is, therefore, an enabler of this new fantasy economy, where machines “say” on our behalf and yet are capable of massive destruction. We pay nothing up front but a measly subscription fee. We have virtually no laws to regulate horribly evil acts: generative AI pornography of people without their consent, stealing the words and style of writers and artists, producing books, research, data, and visualizations full of lies and falsehoods, eroding public and private trust in our existing infrastructures of knowledge, and so on.
The real cost
And the worst “cost” of all that generative AI hides from us? The price of extraction being paid by our planet.
The cost to us right now seems low, but the price being paid is very high. It is an existential threat, in fact.
In my fantasy world, Braven, the big twist in the meta-narrative is that magic, which is accomplished by creating “portals” between Braven and infinity, is actually just creating a portal to Braven’s own future. And eventually, the day that all past magic has called upon arrives: immediately, the surface of Braven is scorched to a crisp and the veins of the earth are necrotized. Almost all of humanity dies instantly, all magic ceases to function, and the great demons who plotted patiently from under the earth finally emerge to consume the flesh of every burnt corpse that remains.
And I wrote this over 20 years ago as the cornerstone event of my whole world. Now I watch as my own prophecy comes true: we are rapidly bringing an apocalypse (from an entirely avoidable future) closer and closer to the present day, all because we have the convenience of “magic” at our hands.
What’s left? Do we dismantle these systems?
Again, I’m realistic. People won’t change their behavior unless we write laws and outline reasonable policies. Even Joe Rogan, that infamous creature of disinformation-enablement, believes that contractors, for example, still need to be bound by laws or else all of our homes will fall apart.
People will cut corners over this next decade or two, politically, digitally, physically, and intellectually. And houses, metaphorical or literal, will fall apart. And maybe we will have enough forensic threads and powerful enough arms of justice to respond when our calamities come knocking.
But I’m also a pessimist about technology. I think that both the oligarchs and the political right will rise to power because of disinformation (and, well, will remain in power, as they already are). They’ve been willing to leverage economies and infrastructures that are dishonest and even illegal because they know they can get away with it. Generative AI only incentivizes this existing trend.
Perhaps my own solution will be to create more physical things, by hand, for a while. That seems to be a haven, for now, where I can create without encroaching flies buzzing in my ear about how much faster and bigger I could make things if I were only willing to leverage an agent on my behalf.
And I’ll continue to write my own words, in the old way, and take responsibility for the things I’ve chosen to say because, for now, I still love myself enough to say and make my own things. We will see where this all goes in the coming years.
And maybe all of this is a motivation to leave academia, which is steeped in marriage to our new magics, and instead write my fiction, which is sadly becoming less and less fiction every day. If I’m lucky, I can get away with a stable job and still find the time to write. I feel like it is becoming more important than anything else now: to really say what should be said - about today, tomorrow, and worlds that we can only ever imagine existing, even if it costs me something to do so.