BUY THE TICKET - TAKE THE RIDE
Senior education officials, regulators, and media mavens all over the world have been focused for some time on the question of how merely mortal teachers will ever be able to reliably distinguish between materials actually written by their students on the one hand and content created by new technologies driven by artificial intelligence on the other. Interestingly enough, the vast majority of actual educators who work in the field every day with students of all ages and skill levels don’t think this is much of a concern at all. They know their students, they know their respective abilities and capacities, and frankly they only wish their students were smart, motivated, and talented enough to attempt such a feat of prevarication.
Another interesting set of discussions is taking place in the work world, everywhere from the movies, now all atwitter (no pun intended) about the AI-based voice-enhancement technology used to improve the authenticity of the Hungarian dialogue in The Brutalist, all the way down to the magazine world and the not-too-distant but humiliating discovery in 2023 that articles in Sports Illustrated were actually AI-written and attributed, complete with fabricated headshots, to authors who didn’t exist. No one complained about the content of the stories; people were just apparently horrified by the process of computers replacing copywriters.
All of these concerns and discussions are part of a new set of fears arising in two different areas: (a) tons of anxiety across every industry about job elimination through automation and AI implementation; and (b) the growing sense that we are less and less able, in so many ways, to tell the difference between men and machines. Plenty has been written about job losses, but we’re just beginning to realize how exposed we are, and how unaware of the extent to which our expanding and encroaching technologies have subtly and unobtrusively invaded and subsumed so many aspects of our day-to-day lives. One of the simplest and most obvious examples is CAPTCHAs. We now take for granted, without a trace of irony, that it’s become our daily job to repeatedly prove to computers that we are real human beings before they permit us to get on with so many different activities and transactions. For the moment, it seems that we’re all stuck with technology when all we really want is stuff that works.
The truth is that our technological development work is so completely focused on the future that we seldom, if ever, look backwards, and so, rather than learning from the mistakes we’ve made in the past, we’re doomed to keep repeating them and forgetting the lessons we should have painfully learned by now. We quickly come to depend on these new modes of assistance and support and, at the same time, we become somewhat afraid, because we know there are aspects of their operation and abilities that we can’t entirely control. I’m not talking about Skynet and Arnold, but about more subversive undertakings that are superficially attractive, clearly less threatening at the moment, and designed to replicate, impersonate, and deal directly with other machines and computers “as if” they were human.
With the announced and accelerating rollouts of agentic tech, I believe we’re on the cusp of another deep technology rabbit hole for which we’re largely unprepared and ill-equipped. What we never seem to appreciate is that when we develop new disruptive tools and technologies, we immediately seize upon the initial implementations and put them into action before we remotely understand them in their entirety, consider their unforeseen and consequential longer-term effects, or even appreciate how long and costly the process of learning to put them to best use will be. Every new technology is a package deal that brings its own negatives right along with all of its upsides and benefits.
The recent unveiling by OpenAI (www.openai.com) of its new agentic offering called Operator is the latest clear step forward, for better or for worse. Incorporating CUA, which stands for Computer-Using Agent, along with the ability to interpret and act upon handwritten lists and other images, Operator looks, for all intents and purposes, to other computers like a human operator using both a keyboard and a mouse. Already connected to OpenTable and Instacart, among other apps and services, Operator can seamlessly book tables and reservations, order tickets, select groceries, and initiate regularly scheduled tasks with very limited, if any, human intervention once the process is set in motion. Only at the final moments, specifically when payment information and confirmation are required, does the system pause and ask for approval before proceeding. It’s only a short further step to complete autonomy, and to reaching the point where, as Jim Croce sang in “Operator,” “there’s no one there I really wanted to talk to.”
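For readers curious about the mechanics, the pattern underneath all of this is surprisingly simple: an automated loop plans and executes browser-level steps on its own, and deliberately stops to wait for a human “yes” the moment money or credentials enter the picture. The sketch below is a minimal, hypothetical Python illustration of that human-in-the-loop checkpoint; it is not OpenAI’s actual Operator or CUA interface, and every function and name in it is invented for the example.

# Hypothetical sketch of an agentic task loop with a human-approval checkpoint.
# These functions do not correspond to OpenAI's real Operator/CUA interfaces;
# they only illustrate the "pause before payment" pattern described above.
from dataclasses import dataclass

@dataclass
class Step:
    action: str           # e.g. "search", "select", "submit_payment"
    details: str          # human-readable description of what will happen
    needs_approval: bool  # True for payment, login, or other sensitive steps

def plan_task(goal: str) -> list[Step]:
    """Pretend planner: turns a goal like 'book a table for two at 7 pm'
    into a sequence of browser-level steps."""
    return [
        Step("search", "Find the restaurant on the booking site", False),
        Step("select", "Choose a 7:00 pm slot for two people", False),
        Step("submit_payment", "Confirm the reservation with the card on file", True),
    ]

def execute(step: Step) -> None:
    print(f"[agent] executing: {step.details}")

def ask_human(step: Step) -> bool:
    answer = input(f"[agent] about to: {step.details}. Approve? (y/n) ")
    return answer.strip().lower() == "y"

def run_agent(goal: str) -> None:
    for step in plan_task(goal):
        # The checkpoint: sensitive steps never run without explicit consent.
        if step.needs_approval and not ask_human(step):
            print("[agent] stopped: approval withheld.")
            return
        execute(step)
    print("[agent] task complete.")

if __name__ == "__main__":
    run_agent("book a table for two at 7 pm")

The point of the sketch is that the human checkpoint is a design choice, not a technical necessity; remove the needs_approval test and the loop runs straight through to payment, which is exactly the “short further step to complete autonomy” described above.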
If this prospect doesn’t recall the frightening scenes from Fantasia, where the unstoppable brooms carrying buckets of water marched ceaselessly forward and stepped right over poor Mickey, the Sorcerer’s Apprentice, then you’re either too young or simply not a fan of classic Disney movies. Embedded in that fantasy is a real warning with even more direct and important application today. It’s not difficult to imagine even more sophisticated and fully automated onslaughts launched against ticket sellers, or new and more convincing scams and frauds built on the data and imagery extracted by these new tools.
A photo of a handwritten shopping list, as used in the Operator demo video, seems innocent and harmless until you realize that you’ve just handed the digital world the ability to readily replicate your cursive signature. This may matter less as we move forward and schools completely abandon any effort to teach our kids how to sign their names on documents, or even to write in cursive at all, settling instead for block printing. (See https://www.npr.org/2022/12/03/1140610714/what-students-lost-since-cursive-writing-was-cut-from-the-common-core-standards .)
Bottom line: here we go again on a wild chase into the future without any clear end in sight or a sufficient understanding of the risks involved or how they might be limited or circumscribed. We’re buying the ticket, closing our eyes, and taking the ride. As the late Hunter Thompson used to say: “there is no honest way to describe the edge because the only people who really know where it is are the ones who have gone over.”