Super Bowl Commercials and the AI Bubble — Why Investment Looks Dangerous
The Super Bowl commercial landscape is a fascinating cultural artifact. The prevalence of AI in this year's ads might tell us something about the Super Bowl: it's a bubble predictor.
One of the outstanding things about the Super Bowl is that even if the game is a boring blowout — which it was this year — it can provide us with a window into so many other elements surrounding the NFL landscape and culture at large.
It doesn’t take much understanding of the game, for example, to know that Kendrick Lamar’s halftime show performance was the final step in his evisceration of Drake. So, too, with the commercial landscape.
One doesn't have to know that the Eagles turned up the pressure on Patrick Mahomes without blitzing him once to grasp that this year’s Super Bowl ad landscape gives us a preview of what’s coming next in capitalism.
This year’s Super Bowl ad spree features advertisements for all sorts of artificial intelligence products. Some are from companies dedicated to providing generative AI as their primary product, like OpenAI, while others are devoting their ad space to the AI offerings inside their existing platforms — Google is advertising Gemini, Microsoft is showcasing Copilot, GoDaddy and Salesforce are highlighting the AI tools within their platforms and Meta is advertising its… sunglasses?
“Artificial Intelligence” is a broad term encompassing a wide range of technologies, but it has recently come to mean the kind of “creative” artificial intelligence produced by Large Language Models like ChatGPT. These models largely convert units of language — words or pieces of words — into tokens, each of which has particular values attached to it.
Using these combinations of tokens and their respective values, they can “generate” new, purportedly unique pieces of text, fitting plausible words into sensible orders that don’t appear elsewhere in literature or online.
It is perhaps overly simplistic to call these models more advanced versions of the predictive text tools that cell phone users have had for decades, but it’s also not entirely inaccurate. That may make them suitable for some applications — LLMs seem to be pretty good at producing code snippets for programmers (though even that is sometimes suspect) — and poor at other tasks. Like, evidently, basic math.
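To make the predictive-text comparison concrete, here is a toy next-word generator in Python. It is only a sketch under loose assumptions: real LLMs operate over sub-word tokens with billions of learned parameters, but the basic loop of scoring candidate continuations and picking one is the same idea. The tiny corpus and every number here are invented for illustration.

```python
# Toy sketch of next-token prediction: a bigram "language model" that, like
# phone autocomplete, picks the next word based on what usually follows the
# previous one. All data here is made up for illustration.
import random
from collections import defaultdict

corpus = ("the browns started three quarterbacks and "
          "the broncos started three quarterbacks").split()

# Count how often each word follows another.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = counts[prev]
    if not candidates:
        return "<end>"
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word == "<end>":
        break
    output.append(word)
print(" ".join(output))
```

Run it a few times and it will produce slightly different word orders from the same underlying statistics — phone-autocomplete behavior scaled down to a toy.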
Other “artificial intelligence” products exist outside the realm of these LLMs — Amazon’s AWS-powered tools that highlight which players might be most likely to blitz, or how difficult a catch was, are built on “discriminative” AI, which is concerned not with creating new data but with classifying and scoring data that already exists.
In a sense, LLMs “guess” which word might come next based on a series of relatively imprecise token predictions, while other machine learning models tell you the probability of a particular outcome based on the history of variables from similar events.
That might not seem like a big difference, but it is — the Amazon prediction machine may miscalibrate the probability of a blitz on a particular play, but it won’t invent a new defensive technique. LLMs, by contrast, often invent “facts” out of whole cloth, generating what are called hallucinations. It’s a big problem.
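For contrast, here is a minimal sketch of the discriminative side, in the spirit of a blitz predictor. The feature names, numbers, and choice of scikit-learn’s LogisticRegression are my own assumptions for illustration, not Amazon’s actual system; the point is that this kind of model can only return a probability over outcomes it was trained on.

```python
# Minimal sketch of a discriminative model: given pre-snap features,
# estimate the probability of a known outcome (blitz / no blitz).
# Features and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical pre-snap features: [defenders near the line, down, yards to go]
X = np.array([
    [6, 3, 8],   # defenders crowding the box on third-and-long
    [4, 1, 10],  # standard first-down look
    [7, 3, 12],
    [5, 2, 6],
    [6, 2, 9],
    [4, 1, 5],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = defense blitzed, 0 = it didn't

model = LogisticRegression().fit(X, y)

# The model can be miscalibrated, but it can only score the outcomes it was
# given labels for -- it cannot invent a new defensive scheme.
new_play = np.array([[6, 3, 7]])
print(f"Blitz probability: {model.predict_proba(new_play)[0, 1]:.2f}")
```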
The new AI systems are “built to be persuasive, not truthful,” an internal Microsoft document said. “This means that outputs can look very realistic but include statements that aren’t true.”
None of this is to say that LLMs don’t have use cases. Rather, it’s that the use cases that have been advertised, suggested or even implied run far broader than what has been functionally possible. And that gap has led to some of their bloated valuations.
Much of this was covered in a previous piece, one about how sports media organizations will attempt to use LLMs to replace sportswriters and stuff sports media with “artificial intelligence.”
Perplexity, a company that advertised that it didn’t advertise during the Super Bowl, offers a product similar to ChatGPT. To test it, I asked a question that a natural-language database search should be able to answer but that isn’t easily accessible via a simple web search or typical lookup tools: “Which team has started the most quarterbacks in a single season?”
The answer it gave was incorrect. Neither the 2015 Cleveland Browns nor the 2020 Denver Broncos had achieved this ignoble accomplishment. Not only that, neither they nor any other team in NFL history has started six quarterbacks in a single season, as far as my research could tell.
Both of those teams started three quarterbacks in those seasons. Curiously, the Browns just last year tied the NFL record with five: Deshaun Watson, Joe Flacco, P.J. Walker, Dorian Thompson-Robinson and Jeff Driskel.
That record is shared with four 1987 strike-era teams (Chicago Bears, New England Patriots, Atlanta Falcons and Kansas City Chiefs), the 1984 Chicago Bears and the 1961 Buffalo Bills.
It’s a difficult question to look up; I didn’t expect Perplexity to have the answer to this fairly niche query. But this demonstrates that there isn’t much benefit to using these LLMs as search engines, despite the fact that many people do. Nearly everything an LLM can do as a fact-checker, a search engine could already do ten years ago.