I have been thinking about obsolescence of late. Which shouldn’t come as a surprise considering that I have reached that age when being obsolete is a developing reality. I only need to look into the mirror, past the scraggly whiskers, and note the fine wrinkles that line my face. Or attempt to focus on the ever blurring icons on my iPhone. How can I feed my scrolling addiction when I can’t see them?
If I had any ambition, I would purchase one of those Facebook-advertised blood tests that claim to reveal the true age of your organs and whatnot. Or better yet, I could explore whether my deep brain stimulation (DBS) system can integrate the newest technology, known as adaptive DBS (aDBS).
Approved in 2025, aDBS uses artificial intelligence to predict your brain stimulation needs in real time. In theory, real-time brain adjustments are better than the manual tune-ups performed by a neurologist every six months. Real time just seems too real to me. I cannot speak for most Parkinson’s patients, but predictability is high on my symptom wish list, and I am not sure AI can outguess old Mr. Parkinson’s.
Then there is the technology obsolescence problem. How will the tech companies who control the AI that controls our brains update their operating systems? Just imagine the worst Microsoft update you have ever experienced and multiply it to the nth degree. Do you reckon that real-time technology glitches in your brain will become a new type of mental disorder? That’s the wall that transhumanism hits. Having experienced firsthand that technology doesn’t age well, I am surprised that aDBS was approved so early in the product cycle of AI. (Note that I almost wrote “life” instead of “product” cycle. How quickly we want to humanize our potential adversaries.)
Americans are divided over their view of AI, but not along partisan lines . . . as of yet. A recent study conducted by CBS shows that concerns about AI are largely non-partisan and non-gender-based. Simply put, we have a brief moment in time to form a consensus and build guardrails to guide the ethical and political challenges inherent in AI, much like we did when cloning became a reality.
I don’t worry about AI because AI can’t worry about me. AI does not have emotional intelligence. It cannot understand how humanity improves when individual people pursue emotional acts such as love, joy, peace, patience, kindness, goodness, faithfulness, gentleness, and self-control. AI knows nothing about the fear or reality of suffering, disease, and death. It doesn’t even care if its code degrades.
Our cuspy situation reminds me of the final scene of Mel Gibson’s Apocalypto, when the indigenous Yucatan family, having escaped slavery and death at the hands of the raiding Mayans, catches first sight of Spanish ships in the harbor bearing the first wave of conquistadores. That was the defining start of obsolescence for the Mayan civilization.
The Mayans had more comprehension of what was coming for them than we do of what AI will bring. The Spanish brought an advanced form of brutality already practiced by the Mayans, while AI comes at us like a fog on little AI-generated kitten feet.1
Thank you, Carl Sandburg.