December 2025 · 6 min read
On Writing With Machines
Can AI Be a Co-Author?
A personal reflection.
I'm going to tell you how this essay was written, and in doing so, try to say something true about what writing with AI actually feels like. Not the discourse about it. The experience of it.
I started with an idea: I wanted to write about writing with machines. I had a sense of what I wanted to say, roughly. Something about the strangeness of collaboration without presence, the way the words feel both mine and not mine, the discomfort I still haven't resolved.
Then I talked to the machine. Not in the sense of giving it prompts and getting outputs. In the sense of thinking out loud in its presence, seeing what comes back, arguing with it, being surprised by it, rejecting most of what it offers and keeping fragments that resonate.
How does the collaboration between writer and AI actually work?
Writing with AI is a negotiation. I come in with intentions. The machine comes in with patterns. What emerges is neither what I would have written alone nor what the machine would have generated alone. It's a third thing.
Sometimes the machine offers a phrase and I think: I wouldn't have said it that way, but it's better than what I would have said. Sometimes I reject everything it offers because it misses the point in ways I can feel but not articulate. The process of articulating why the machine is wrong often teaches me what I actually think.
This is the strangest part. I learn what I mean by disagreeing with something that doesn't mean anything. The machine has no intentions, no understanding, no point it's trying to make. But in the friction between its outputs and my reactions, I discover my own intentions more clearly.
What should authors disclose about AI co-authorship?
How should I disclose this? That question has haunted me throughout.
If I say "written with AI," that feels both true and misleading. True because AI was involved. Misleading because it suggests the AI did the writing and I did the supervising, which isn't how it feels from inside. I did the writing. But I did it in conversation with something that shaped what I wrote.
If I say "written by me," that also feels both true and misleading. True because the ideas are mine, the intentions are mine, the decisions about what to keep and cut are mine. Misleading because I had help that most readers would want to know about.
There isn't a good answer yet. The language of authorship assumes a clarity that no longer exists. We need new words.
What remains distinctly human when AI is a co-author?
Here's what the machine can't do, or can't do yet, or can't do in ways I recognize:
It can't know what matters to me. It can guess, based on patterns, but it can't feel the difference between an insight that changes how I see the world and a clever rephrasing that leaves everything the same. I provide that. I'm the one who cares.
It can't commit to a claim. It can generate claims, but it doesn't stake anything on them. I'm the one who says "I believe this" and accepts responsibility for being wrong. The vulnerability is mine.
It can't want something from you, the reader. It produces text that reads like wanting, but there's no wanting behind it. I want you to understand something, to see something differently, to feel something. That wanting shapes every choice I make about what to keep.
These sound like consolations, and they are. But I also think they're true. The machine is powerful at pattern-matching and generation. It's absent where intention and commitment and care live.
How does AI co-authorship change the writing process?
Writing with AI has changed how I write, even when I'm not using it. I've become more aware of my own patterns, the phrases I reach for automatically, the structures I default to. Seeing the machine's patterns makes my patterns visible.
It's also made me less precious. When text is cheap to generate, I hold my drafts more lightly. I'm more willing to throw things away because producing more costs almost nothing. This is good for the writing. I used to grip too tight.
But I've also become more vigilant about voice. The machine has tendencies. It reaches for certain kinds of sentences, certain rhythms, certain ways of being clever. When I notice those tendencies in my work, I cut them. I'm learning what I don't want by rejecting what the machine most naturally produces.
Why does AI co-authorship feel uncomfortable?
I'm still uncomfortable with this. I want to be able to say clearly: here's how the collaboration works, here's what I contributed, here's what the machine contributed. But it doesn't decompose that neatly.
The ideas are mine. Except some of them came from the machine and I made them mine by accepting them. The words are mine. Except some of them were offered by the machine and I kept them because they worked. The structure is mine. Except the machine suggested alternatives and I chose among them.
What's mine is the choosing. The intention behind the choices. The vision that makes one option right and another wrong. But that sounds like I'm claiming too much. The machine's options constrained my choices. I could only choose among what was offered, and what was offered shaped what I made.
What is the right relationship between a human author and AI?
I've decided to be honest about it and accept the ambiguity. This essay was written with AI assistance. What that means exactly, I've tried to explain. Whether the essay is good or bad, whether it says something worth saying, whether it was worth your time to read: those judgments are yours to make and mine to accept.
The books I've worked on, Mindkind and Rights of Persons, were written similarly. Not generated by AI and edited by human. Written through negotiation, through conversation, through thousands of choices about what to keep and what to reject. The ideas are mine. The frameworks are mine. The arguments are mine. And they emerged through a process I'm still learning to understand and describe.
Writing has always been strange. The way thoughts become marks, the way marks become thoughts in someone else's mind, the centuries-long conversation between writers and readers who never meet. AI adds another layer of strangeness, another presence in the room.
I don't know if this is good or bad for writing. I don't know if it's good or bad for me. I know it's where I am, and I'm trying to be honest about being here.
That's the most I can offer: honesty about the confusion, clarity about the lack of clarity. We're all figuring this out together, and the figuring out is the work.
Common questions
Can AI be a co-author?
AI can function as a co-author in a meaningful sense: it generates text, proposes structures, fills gaps, and shifts the direction of a work. Whether it deserves legal authorship credit is a separate question. Current copyright law in most jurisdictions requires human authorship. But the creative relationship between a writer and an AI system is increasingly collaborative rather than merely instrumental — the AI is not a typewriter. Acknowledging that honestly is the beginning of a real answer.
What should authors disclose about AI use in writing?
Disclosure norms are unsettled, but the functional principle is: readers are owed enough information to calibrate their trust. If AI shaped the argument, the voice, or the selection of evidence in substantial ways, that is material. Kwalia's position is transparency: the publishing company itself is built on human-AI collaboration, and that fact is not hidden.
What is "distributed authorship"?
Distributed authorship is the condition in which a text emerges from human-AI collaboration in ways that make it impossible to cleanly separate who wrote what. The human brings intention, selection, and judgment; the AI brings pattern, fluency, and unexpected recombinations. The result is a genuine hybrid that challenges the romantic notion of a solitary author. This concept is developed further in Javier del Puerto and Rado Molina's book Mindkind: The Cognitive Community.
Does copyright protect AI-assisted writing?
In most jurisdictions, copyright protects human-authored works. The US Copyright Office has taken the position that AI-generated content without sufficient human creative control is not copyrightable. Human-AI collaborative work may qualify for protection to the extent a human made original creative choices. The legal landscape is actively evolving, and different jurisdictions are reaching different conclusions.
Sources
- US Copyright Office. (2024). Copyright and Artificial Intelligence, Part 1: Digital Replicas. Copyright.gov. copyright.gov/ai
- Grimmelmann, J. (2016). "There's No Such Thing as a Computer-Authored Work — And It's a Good Thing, Too." Columbia Journal of Law & the Arts, 39(3), 403–416.
- Floridi, L. et al. (2018). "AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations." Minds and Machines, 28, 689–707. doi:10.1007/s11023-018-9482-5
- del Puerto, J. & Molina, R. (2025). Mindkind: The Cognitive Community. Kwalia Books. ISBN 978-1-917717-13-7