We're in an Artificial Intelligence bubble. Investors want to "find AI" in every company. Engineers are told to look for AI stories, and marketers are told to beat the drum.
Princess Scheherazade must come up with a new story every night to win a reprieve from the mad king.
I am not proud of the role engineers are playing in this charade.
I am proud of the nice things engineers have built. A snack vending machine takes your payment and dispenses packaged food without any human assistance. Airplanes reliably carry you through the sky. Robots assemble your cars, and satellites guide your goods to their destinations.
These systems take hard work and time. Engineers architect and build the hardware and software, test the system out in the field, and evolve their designs carefully over many iterations. Sometimes, laws and regulations must be updated to preserve the interests of all stakeholders.
We're happy to pay for such systems because they provide utility. They make our lives easier, better, safer.
But during a bubble, our technology industry is driven not by utility but by perception. The AI hype is the latest example. It's sucking the oxygen away from more important purposes, starving them of money, resources, and attention.
Taking risks to chase rewards is in the nature of capital and finance. It will always be so.
But the world of finance is one level removed from the utility of the systems it funds. It deals with the secondary market: not with the function of the product itself, but with shares and returns. In a very real sense, the venture capitalist doesn't care whether the product actually works and provides utility, as long as enough other people buy the story. An exit is the VC's only incentive.
The money that drives organizations is all just chasing stories. The big bucks go to spin doctors. Fake it till you make it.
Artificial neural networks (ANNs), when trained on lots of data (where and how we get that data is an article for another day), can demonstrate magical abilities:
Labeling images.
Recognizing faces.
Playing games.
Generating plausible-looking text.
However, these neural network-based models are not reliable. Nor can their whisperers predict which cases they will learn and which ones they won't. It's in their nature. They mostly work, but they fail on odd cases. Not robust enough to depend on for critical applications, but good enough to sell a story. Just limit the demos carefully.
Here we have a marriage made in heaven: financiers who want a good story, and a technology for dog-and-pony shows. An astounding new paradigm! A new era dawns. Everything has changed. Read all about it!
In practice, this demand for stories is warping all value creation. Meanwhile, the generative AI demos have escaped their cages. People are using ChatGPT, Claude, and friends to generate marketing articles at scale. Their output floods social media feeds with junk. Websites that were already laden with intrusive advertisements are now filled with "content" that is mostly AI slop, bogging down search engines.
It's all enough to send an engineer running screaming down the street.
Of course, ANNs are the right solution for many real-world problems, and we already have systems that employ them. Your phone camera locates faces and enhances your pictures; an app translates speech for you; researchers use them to generate candidate protein structures and new kinds of materials.
But when investors ask the inverse question (what uses can this new technology be put to?), they receive a lot of dud applications, ones that simply don't have enough utility to pay for the engineering they need. It would be nice if a robot could load and unload your dishwasher. But how much is such an application worth, and how much would it cost to build?
Even the successful ANN-based systems are not lucrative enough. The industry wants something bigger---a transformative opportunity that will quickly make a fortune. Big stories, too good to be true, fail to deliver. Self-driving cars have been promised any day for decades now, and despite continued intense investment, we still don't know when we will be able to buy one.
Trillions of dollars are chasing a mirage.
Microsoft, Google, Amazon, and Meta are building bigger and bigger data centers, assuming that the AI story will come to pass and they will be lighting up millions of GPUs. They need so much electricity that they've signed contracts with nuclear power companies to bring old reactors back online.
Cue the task forces and think tanks to address future dangers of AI. Meanwhile, decades-old problems with software systems that we use every day remain unaddressed.
Useful systems must preserve the interests of all their stakeholders, but too many don't.
Earlier this week, two undergraduates at Harvard experimented with Meta's smart glasses. They approached strangers at the local subway station, recorded their faces without permission, and automatically searched an online database to identify them.
This kind of intrusive surveillance capability predates this bubble. You can be sure that surveillance professionals are building such systems. Yet, consumers are not protected from them.
The most basic principles of system design go out the window. Just last month, major airlines ground to a halt for days, and some hospitals couldn't admit patients, when their Windows machines suddenly started crashing. The culprit? An automatic update by the security software company CrowdStrike. Millions in losses were completely preventable. Yet, neither the company nor the businesses that built these broken systems have had to face any consequences.
Back to reality. Scheherazade should be building and fixing actual systems, not telling stories.