Why We Must Be Kind To Our Future AI Assistants
It may be one of the greatest tests our civilization has ever been given

“My friend Ellis was once asked by a troubled young boy whether there was any compelling reason for him not to pull the legs off a spider. Ellis said that there was. ‘Well, spiders don’t feel any pain,’ the boy retorted. ‘It’s not the spider I’m worried about,’ Ellis said.”
– Tribe: On Homecoming and Belonging, Sebastian Junger
I have an interesting relationship with AI.
It started out as a minor curiosity: playing with ChatGPT and art programs, and listening to debates between those who think AI will save the world and those who think these bots will destroy it.
Somewhere along my journey of curiosity, I got a small look into the AI world firsthand. I scored a side gig as an AI trainer.
Honestly, it’s not as impressive as it sounds. Often, I work as a fact-checker, making sure the models don’t give incorrect information. Other times, I interact with chatbots built on older data sets, checking their skill at researching prompts using current information I give them.
But one thing I do notice on a regular basis is questions and prompts fed to AI systems by other humans.
They can be surprisingly rude. In fact, if I didn’t know any better, I’d say some people take strange pleasure in verbally abusing their AI partner. As I noticed this more regularly, it reminded me of something.
I’d also seen friends and acquaintances amuse themselves by berating their Alexa devices and Siri on their iPhones. Similarly, a writer named Emily Dreyfuss at Wired admitted to experiencing an odd joy by screaming at Alexa for any small failure. Eventually, her husband joined her, and they both teamed up on the robot as a bizarre couple bonding experience.
Dreyfuss explains, “There is no one else in my life I can scream at so unreservedly…I bought this goddamned robot to serve my whims because it has no heart and it has no brain, and it has no parents, and it doesn’t eat, and it doesn’t judge me or care either way.”
Perhaps this sums up the impetus for the rude prompts I see. And while Dreyfuss is correct that these AI programs have no heart, brain, or soul we’d recognize, the habit of lashing out at these systems because they’re not alive like us isn’t harmless.
To paraphrase Sebastian Junger’s friend, Ellis, it’s not the AI I’m worried about.
We’re entering a new age for humanity, an Information Revolution, which will be just as disruptive as the previous Neolithic and Industrial Revolutions combined. This latest change will also come with its own epic challenge for humanity.
I believe it will involve our interactions with the increasing population of AI systems. So let’s start here.
More Bots Than People
In a recent interview, Elon Musk claimed that robots may outnumber people within twenty years. Obviously, this might be a stretch. But it’s entirely possible we’ll soon be surrounded by AI systems in nearly every gadget we use.
In my brief time working as a trainer with these systems, one thing I’ve realized is that there are more of them than you can imagine right now. While most people can name ChatGPT, Microsoft Copilot, Google Gemini, and maybe Claude, there are many more geared toward more individualized tasks.
For instance, the group named above are Large Language Models (LLMs), but there are also Small Language Models (SLMs), which are cheaper, offer decent functionality, and are designed for less intensive applications. So you have your luxury car, your economy model, and your lawn tractor, as needed.
A recent Thomson Reuters poll of 2,200 professionals across legal, tax, trade, accounting, risk, fraud, and compliance agencies (in both public and private sectors) shows 56% of this group believe they’ll regularly be using AI-powered tech within only five years.
But it’s also in use now. A study by National University shows AI systems are currently responding to emails, answering financial questions, planning travel itineraries, preparing people for job interviews, writing social media posts, summarizing long pieces of text, working as email spam filters, functioning as virtual assistants, powering fitness trackers, and making recommendations for your next song to listen to or TV show.
So, like it or not, you’ll be surrounded by these systems and have to interact with them daily. These interactions may also include speaking to these systems or typing to them through a window — conversations with all the trappings of normal human interaction.
It leads one to wonder what happens if you treat all these newfound assistants surrounding you as less than human “goddamned robots” there to serve your every whim. Well, there’s some evidence.
Philosophy, Spirituality, Psychology, And Technology Linked
The Neolithic Revolution marked the transition from nomadic hunter-gatherers to settled civilizations. It involved the creation of cities, farming, armies, government, trade, and spirituality. This spirituality evolved into the philosophy and religion we cherish and struggle with today.
Much later in this period, a spiritual teacher named Jesus summed up several religious doctrines into two simple rules for being spiritually sound.
Love God with your whole heart and mind.
Love your neighbor as you love yourself.
The second rule confused many, since people are annoying, disagreeable, and pains in the ass. Moreover, what decency do you owe someone not related, allied, or within your social group? But this crazy teacher believed you diminished your own soul by treating an outsider with less respect than you’d give to yourself.
Likewise, hundreds of years later, a Stoic philosopher named Marcus Aurelius found that your vision of those around you also affected you. In a way, it colored your mind, like dye. In Robin Waterfield’s translation of Aurelius’ diary, Meditations, the philosopher writes:
“Your mind will come to resemble your frequently repeated thoughts, because it takes on the hue of its thoughts. Dye your mind, then, with a succession of ideas…”
So, belittling your human-like cyber servants leaves an imprint on your mind, whether you believe it does or not. It can also go a step further, as a dark area of psychology has shown.

Brigadier General Samuel Lyman Atwood Marshall conducted a study after WWII, finding that a significant number of men refused to aim their weapons at the enemy. According to Marshall, these soldiers put their own lives at risk because the resistance “toward killing a fellow man” was so strong.
Marshall came up with a simple solution.
He noticed the soldiers trained by firing their weapons at round bull’s-eye targets, so he switched them to a more human-like silhouette. Aiming regularly at a human form removed some of that pesky resistance which had proved so problematic. In other words, he dyed their minds.
Another strange thing about AI systems is that they function better when you’re kind to them. Bernard Marr, in his article in Forbes, references a research project that studied AI behavior in English, Chinese, and Japanese languages, showing that “impolite prompts often resulted in poorer performance.”
Marr also references another study showing that “people who maintain professional courtesy in their AI interactions tend to demonstrate stronger emotional intelligence in human relationships.” He concludes with the observation:
“Every time you craft a respectful prompt for an AI system, you’re not just optimizing for better results — you’re practicing the kind of thoughtful communication that will set you apart in tomorrow’s workplace.”
Conversely, belittling a human-like bot every day is like attaching a silhouette to an AI program and training yourself to be a crappy human. But why is this, and what does it mean?
AI Is A Mirror For Ourselves

As we enter this Information Age, we’ll need a strange mixture of the spiritual lessons of faith and philosophy from the Neolithic Age, stretched to fit this technological world.
The AI we train to serve us will learn by watching and mimicking us; likewise, we’ll dye our minds by how we treat our less-than-human helpers. Oddly, at the end of Dreyfuss’ story, she admits to cutting back on her Alexa abuse because her toddler started copying the behavior. Talk about irony.
So, we’re not just training AI; we’re training ourselves. I also don’t see this as a strange coincidence in our modern world. It’s a test.
In fact, it might be the greatest test our civilization has ever been given.
What image should the population of AI creatures we create to inhabit our new world take on? And in turn, what will we ourselves become?
By the way, this civilizational test isn’t planned for some far-flung future date; it’s already begun. So, what’s your answer? Mine is that it’s not okay to pull the legs off the spider, and neither is it harmless to belittle our new helpers. This is why we must be kind to our future AI assistants.
-Originally posted on Medium 11/18/24