I’ve noticed a lot of breathless headlines lately about the coming AI takeover, and I’d like my friends to know that much of the problem is one of perception.
The danger is not that AI is going to seize cultural and military control of the earth. I know this is the premise of several dystopian sci-fi franchises, but consciousness in a computer isn’t something anyone has developed (yet). It’s not outside the realm of possibility, but no one yet understands why we have consciousness, and it’s hard to reproduce what we don’t understand.
The major problem with the latest crop of AI technology is that too many end users (that is, people) overestimate the tech and expect it to do things far beyond what anyone could adequately train a system to handle. The obvious example is the Tesla driver who turns on a feature called “Autopilot” and takes a nap. The less obvious, and probably more dangerous, example is the C-suite executive who decides they can save a lot of money by firing all their staff and replacing them with AI to handle advertising, quality control and customer service.
With the tech as it exists today, you could set up a factory of AI systems that did nothing but generate deepfake porn using interchangeable, generated actors who look just like famous people. (The fact that I can type such a sentence pretty much guarantees that someone, somewhere, has already done this.) If you had any idea how much of most modern movies is already generated by computer, you’d understand that this isn’t a great leap into some unknown world, but a natural progression of technological expression. Suddenly we don’t need models and cameras and lights and the expense of maintaining a studio, and the studios that stick with the old model will be crushed under the weight of automated porn generation.
Anyone who has been trapped on the wrong end of an AI phone tree, or has called on the AI assistant on their phone, knows exactly how useless these systems are outside a small range of tasks. Ask them to do something outside that range and they will often respond by performing a seemingly random task they can accomplish. In many cases the range of topics an AI handles is quite narrow, since these systems are deployed to intercept a handful of basic situations. But when systems like this are put in place and left unmonitored, unintended consequences are pretty much guaranteed.
A well-trained AI can spot patterns with better than 90% accuracy. However, training to that level is extremely expensive in both time and money. So we can be sure that some executives will choose to deploy an under-trained AI in an under-monitored situation, and that this will happen so often it becomes a trope within a decade. And this brings the true danger of AI into focus: greed and laziness will produce a lot of unintended consequences that get blamed on the AI rather than on the executives who chose to use it poorly.
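To put rough numbers on that, here’s a quick back-of-envelope sketch. Every figure in it is a made-up assumption for illustration (the interaction volume, the accuracy levels); the point is only that a “90% accurate” system left unmonitored still fails constantly at scale.

```python
# Back-of-envelope: what "90% accurate" means at scale.
# All numbers are illustrative assumptions, not measurements
# from any real deployment.

daily_interactions = 100_000  # hypothetical customer-service volume

for accuracy in (0.90, 0.95, 0.99):
    failures_per_day = daily_interactions * (1 - accuracy)
    print(f"{accuracy:.0%} accurate -> "
          f"{failures_per_day:,.0f} botched interactions per day, "
          f"~{failures_per_day * 365:,.0f} per year")
```

Even at 99%, that hypothetical call center mishandles a thousand customers a day; at the 90% mark, it’s ten thousand. If no one is watching, every one of those is a candidate for an unintended consequence.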