KN Magazine: Articles

Andi Kopek Shane McKnight

Between Pen and Paper: Flaneuring Through a Writer’s Mind – The API of the Human Heart, or Why Your Characters Keep Misunderstanding Each Other

What if human communication worked like artificial intelligence? In this thought-provoking craft essay, Andi Kopek compares APIs—Application Programming Interfaces—to the invisible emotional “contracts” we use in conversation. By exploring parsing errors, emotional bandwidth, and schema mismatches, he offers writers a powerful new lens for understanding character conflict, empathy, gaslighting, and love. When characters misunderstand each other, it may not be malice—it may be incompatible formatting.

By Andi Kopek


There has been no shortage of criticism lately regarding artificial intelligence (AI). Some of it is thoughtful, some quite theatrical. I may dedicate a future column entirely to the ethical, economic, and existential anxieties surrounding AI. Today, however, I want to focus on something far less dramatic and far more revealing: how advanced AI systems actually talk to one another, how this can shine new light on human communication and miscommunication, and how it could inspire a modern writer.

Beneath the glossy headlines and dystopian forecasts, most modern digital systems communicate through something called an API, an Application Programming Interface. An API is essentially a structured contract that defines how one program can send a request to another, what format the data must follow, what information is required, and what kind of response will come back. In other words, before artificial intelligence can destroy our civilization, it must first agree on grammar.

Imagine two computer programs trying to talk. They cannot rely on vibes. They cannot roll their eyes. They cannot say, “You know what I mean.” They must follow a strict contract, a rulebook for how one system talks to another. An API. If the message does not match the expected structure, it fails. Not emotionally. Structurally. The receiving system does not feel hurt. It returns an error code: 400 (Bad Request).

Let’s have a little fun and apply this communication model to human interactions. Every person you know is running an API. It is undocumented. It is unstable. It auto-updates without notice. Your internal API defines what tone you accept, what topics are permitted, what context you require, what emotional load you can process, what you interpret literally, what you interpret as subtext, what feels like attack, and what feels like affection. When someone speaks to you, they are making a request against your interface. When you respond, you are sending data formatted according to theirs. Conversation is not just expression. It is parsing.

In programming, parsing means interpreting incoming data according to a defined structure. If I send { emotion: sad } but you expect { mood: sadness, intensity: 0.7 }, the request fails. Not because we disagree about sadness. Because we disagree about formatting. Now consider the most dangerous sentence in the English language: “I’m fine.” One person means: I am overwhelmed but not ready to unpack it. The other hears: Everything is okay. Same words. Different schema. According to our little game, human miscommunication is not malice. It is incompatible parsing.
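As a playful sketch (the function and field names here are invented for this essay, not drawn from any real library), the mismatch can be made literal:

```python
# A receiver whose internal schema insists on {"mood", "intensity"}.
def parse_feeling(packet: dict) -> tuple[int, str]:
    if "mood" not in packet or "intensity" not in packet:
        # The failure is structural, not emotional.
        return 400, "Bad Request: expected {mood, intensity}"
    return 200, f"OK: {packet['mood']} at intensity {packet['intensity']}"

# The sender is not wrong about sadness -- only about formatting.
print(parse_feeling({"emotion": "sad"}))                     # a 400
print(parse_feeling({"mood": "sadness", "intensity": 0.7}))  # a 200
```

Same sadness on both sides of the wire; the request still fails, because the contract was never about the feeling.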

If humans were honest, we would speak in status codes.

200 OK: I understand you.

401 Unauthorized: You do not have access to that story.

403 Forbidden: That is a boundary.

404 Not Found: I do not recognize the version of me you’re describing.

429 Too Many Requests: Please stop asking.

503 Service Unavailable: I am exhausted and pretending otherwise.

Instead, we say things like, “Whatever,” which is the emotional equivalent of a corrupted packet.

In AI networks, data can be corrupted, and signals can degrade. In humans, fatigue, stress, trauma, and cognitive overload can increase the error rate. The same sentence can succeed at 9 a.m. and fail by the late afternoon. Moreover, different neurotypes run different parsing defaults. As a simplified analogy, consider autism as a condition where parsing is more literal. If someone says, “It’s cold in here,” one person hears a temperature observation. Another hears a request to close the window. Different inference engines. Not broken. Just different schema.

From this perspective, depression can look like low processing bandwidth, high error sensitivity, and reduced response generation. Instead of getting a return of 200 (OK) for a typical request, the system returns 503 (Service Unavailable). Anxiety resembles a hyperactive validation layer. Every incoming message is checked for threats, rejections, or hidden errors. Neutral packets get flagged as suspicious. False positives multiply. Psychosis might be described as a model in which incoming data is integrated into a framework that diverges from shared consensus reality. The API still functions internally, but its mapping to the broader network has shifted. The person is not failing to process. They are processing through a different model.

AI systems do not have feelings, though they are becoming increasingly sophisticated at parsing human emotion in text and speech. So what about empathy, a capacity we tend to reserve for living organisms? Some would say for humans alone. In this model, empathy is not absorbing someone else’s emotions like a sponge. Empathy is adaptive formatting. It is the willingness to say: Let me rephrase that. What did you hear me say? What did you mean? How would you prefer I ask? Empathy does not eliminate conflict. It reduces unnecessary 400 errors. Rigid APIs cannot do that. Flexible ones can. Consequently, the opposite of empathy is not cruelty. It is interface rigidity.
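In code terms, that flexibility is just an adapter layer. A sketch, with field names invented for illustration: instead of rejecting a malformed packet outright, the empathic interface tries a rephrasing first.

```python
def rigid_parse(packet: dict) -> int:
    # A rigid API: anything off-schema is simply a 400.
    return 200 if "mood" in packet else 400

def empathic_parse(packet: dict) -> int:
    # Adaptive formatting: try known rephrasings before failing.
    synonyms = {"emotion": "mood", "feeling": "mood"}
    translated = {synonyms.get(key, key): value for key, value in packet.items()}
    return rigid_parse(translated)

print(rigid_parse({"emotion": "sad"}))     # 400 -- interface rigidity
print(empathic_parse({"emotion": "sad"}))  # 200 -- "Let me rephrase that."
```

The empathic version does no extra feeling. It does extra translation.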

Since I’m writing this in February, I cannot ignore Valentine’s Day. Love, perhaps, is long-term API alignment. Over time you learn each other’s required fields. You anticipate response formats. You adjust rate limits. You recognize known error codes. You stop assuming malice in malformed packets. I think we could use more long-term API alignment right now.

Now, writers, this approach can be useful to your craft. Characters do not fight because they disagree. They fight because they parse differently. One character speaks in subtext. Another requires explicit declarations. One needs reassurance before vulnerability. Another needs vulnerability before reassurance. Each is making valid requests against an interface the other does not fully understand. Conflict is born in the gap between intention and interpretation. A character says, “You never listen.” What they mean is: “I don’t feel seen.” What the other hears is: “You are incompetent.” Boom. 400 (Bad Request), followed by 500 (Internal Server Error).

In thrillers, the villain often exploits API weaknesses in other characters. The villain withholds required fields, manipulates format, overloads emotional bandwidth, and sends signals designed to be misparsed. Gaslighting, in this model, is deliberate schema corruption. It forces the victim to doubt their own parsing logic.

And when two characters finally understand each other, what has actually happened? As in love, they have aligned their APIs. They have learned that “I’m fine” sometimes means “Please try again.” LLMs (Large Language Models) require enormous amounts of training data to achieve alignment. We train on years of shared experience. And still …

We live in an age obsessed with communication tools. Faster messaging. Smarter devices. Infinite connectivity. And yet our communication remains fragile and far from perfect. The next time a conversation collapses, pause and ask: was this bad intent on the sender’s part, or bad formatting in the receiver’s API?

I hope this little mental exercise helps deepen the characters in your story, sharpen your dialogue, and make your conflicts feel inevitable rather than contrived. And in your own life, you may discover that many arguments are not wars. They are documentation failures. Which, hopefully, can be revised.

Andi


Andi Kopek is a multidisciplinary artist based in Nashville, TN. With a background in medicine, molecular neuroscience, and behavioral change, he has recently devoted himself entirely to the creative arts. His debut poetry collection, Shmehara, has garnered accolades in both literary and independent film circles for its innovative storytelling.

When you’re in Nashville, you can join Andi at his monthly poetry workshop, participate in the Libri Prohibiti book club (both held monthly at the Spine bookstore, Smyrna, TN), or catch one of his live performances. When not engaging with the community, he's hard at work on his next creative project or preparing for his monthly art-focused podcast, The Samovar(t) Lounge: Steeping Conversations with Creative Minds, where in a relaxed space, invited artists share tea and the never-told intricacies of their creative journeys.

website: andikopekart.ink
FB: https://www.facebook.com/profile.php?id=100093119557533
IG: https://www.instagram.com/andi.kopek/
X: https://twitter.com/andikopekart
TT: www.tiktok.com/@andi.kopek
