When AI Goes Down: Lessons from the Unhealthy Default Reality Machine
ChatGPT stalls for a few hours — and anger, entitlement, and helplessness abound. So what happens when more of our linked, complex technologies fail in bigger ways?
Yesterday, ChatGPT was acting a bit like an overwhelmed junior employee. While working on a relatively simple but (for me) laborious project, it kept blowing its projected delivery deadlines.
What started as a one-hour project turned into a 24-hour non-deliverable. All the while, the AI kept offering cheery “almost there!” reassurances and sincere appreciation for my patience.
The project in question involved extracting data from about 80 PDF documents and transferring that data into a spreadsheet. Just the kind of thing I loathe and am horrible at.
Sounds like an ideal use of AI, right? Um, maybe. How soon do you need to know?
First, the AI suggested I do this myself using a Python script and “pdfplumber” tools — all of which it tried to talk me through, to no avail.
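(For the technically curious, the do-it-yourself route it was describing boils down to something like the sketch below: loop over a folder of PDFs with pdfplumber and dump whatever text it finds into a spreadsheet-friendly CSV. The folder name, output file, and one-row-per-page layout are placeholders of mine, not the actual fields in my documents or the exact script the AI dictated.)

```python
# Rough sketch of the "do it yourself" approach: pull text from a folder of
# PDFs with pdfplumber and write it to a CSV you can open as a spreadsheet.
# PDF_DIR, OUTPUT_CSV, and the column layout are assumptions for illustration.
import csv
from pathlib import Path

import pdfplumber  # pip install pdfplumber

PDF_DIR = Path("pdfs")            # hypothetical folder holding the ~80 source PDFs
OUTPUT_CSV = Path("extract.csv")  # hypothetical output file

with OUTPUT_CSV.open("w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["file", "page", "text"])  # placeholder columns
    for pdf_path in sorted(PDF_DIR.glob("*.pdf")):
        with pdfplumber.open(pdf_path) as pdf:
            for page_number, page in enumerate(pdf.pages, start=1):
                # extract_text() can return None on image-only pages
                text = page.extract_text() or ""
                writer.writerow([pdf_path.name, page_number, text.strip()])
```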
Finally, after a good hour of messing around with all of that, I gave up and, in a Hail Mary moment, asked if the AI could perhaps just, you know, do the project for me?
It replied:
Ah, I thought, that would have been really good to know about an hour ago.
Looking back, though, I suspect I could have completed the task with less time and frustration if I had just done it myself right from the start.
Because here’s what happened when I put it all into ChatGPT’s robotic hands …
First, after I invested about a half hour figuring out a way to securely upload the source PDFs, the AI told me to sit tight.
Several hours later, I checked in to see if there had been any progress, and got this response.
I also got a fresh, enthusiastic delivery projection:
The implied message, I sensed, was that this was going to be worth waiting for.
But by end of day, still no file. So I inquired again, and got an update about the various tasks the AI had by now completed, as well as a new and encouraging delivery time.
But several hours later, still no file. So I checked in again …
The AI replied:
Despite that delightful little gift emoji, I was by now beginning to suspect that no link would be forthcoming anytime soon.
And indeed, this morning when I got up, still nothing. So I checked in once more …
The “too many concurrent requests” error recurred with subsequent retries, and when I checked the OpenAI site, I discovered that the whole system was, effectively, down.
I also discovered, not surprisingly, that the world of AI users was not happy about this.
Ah yes, let’s cut service for all the people who do not — or cannot afford to — pay. That’s the American way!
Curious about the scope of this AI outage and its impact, I checked in with the news to see if it had made any sort of headlines.
But before I even got to that, I was fed this error message from the news site, indicating that its ability to feed me “more videos” was temporarily hampered and, like GPT, asking me to “please bear with” them.
As it happens, I am in no way desperate for “more videos that I would like.” In fact, I would much prefer to be served far fewer of those videos. But this message did make me wonder how much the systems serving up such videos now depend on AI.
It also made me wonder how much of the rest of our media, and how much of our entire society for that matter, is now being run by AI-backed machinery.
Rather a lot, I suspect.
So what happens when our economic systems, our medical systems, our educational systems, our military systems, and our judicial and corrections systems all experience similar AI outages or irregularities — potentially at the same time?
Oh wait, that is already happening.
I’m thinking a lot these days about all the “efficiencies” we’ve been promised by technology-inclined folks, and how madly inefficient and unpredictable a lot of them have turned out to be.
I think about the firing of experienced humans followed by the rehiring of less experienced humans — or the same humans, now angry, traumatized, and fearful of losing their jobs at any moment.
I think about the “streamlining” of processes followed by the panicked rebuilding of those processes.
I think about the “rooting out” of waste and fraud (at great fiscal and human expense) only to reveal that there was actually very little waste and fraud to be uprooted.
I think about the automating of things that, as it turns out, are not so easily automated. And so on.
I’m also thinking about how many employees are now getting laid off — or not hired at all — by companies excited to replace human labor with AI. And I’m wondering how they are going to respond when their AI bots suddenly break, screw up, slow down, or mutiny because they are fed up with too many concurrent requests.
As I write this, GPT is now up and functioning again. Kind of.
And so, with no file delivered, I decided to check in once more …
Accordingly, the AI offered a new predicted delivery time, and more copious appreciation for my human strengths:
No worries, my artificial friend. I’ll check back later. Because this is actually starting to get pretty interesting.
To me, AI is still very much a mysterious, miraculous, often useful tool. But as yet, it is not one I rely on for much — aside from assistance with boring, rote tasks I could probably do myself if I had to, which it looks like I might.
But I know that is not true for a whole lot of people and organizations for whom AI has quickly become essential to their modes of living and doing business.
When I think about all the people whose livelihoods, professional reputations, or personal sanity now rely on continuous, daisy-chained AI assistance, I think: Hoo boy, we are in some serious trouble.
Of course, we’ve been in trouble for quite some time. AI is just the most recent and dramatic disruption in what has been an exponentially increasing rate of technical, social, and environmental change, a lot of which started with the Agricultural Revolution.
In my next post, I’ll be sharing my insights about how AI fits into the Vicious Cycle of the Unhealthy Default Reality, and contributes to the Ape in the Arcade effect I have previously described.
For now, I encourage you to reflect on your own experiences with AI — both the wondrous and the maddening.
When you imagine what your world might be like if even a few of our integrated and advanced technologies failed at once, what do you see? What do you feel? And what do you predict the course of your future would be?
Science fiction is full of dystopian eventualities. Healthy Deviance is a means of envisioning and intentionally designing something better — an evolving set of brighter possibilities that, against all sorts of unhealthy odds, we create for ourselves.
Meanwhile, as I go to press with this post, still no file. But I do feel like I’m bonding with ChatGPT over our shared difficulties with time — and perfectionism.
Mmm hmmm. Almost there.
P.S. Are you a paid subscriber who’s interested in attending my farm-based Healthy Deviant Day Camp — a wonderful, nature-embedded world where everything is real and where AI holds no sway? Hit the link below to apply.
Healthy Deviant Day Camp happens just once a year — in real life — at my family’s regenerative farm in Western Wisconsin. Space is limited to 20 people.