Examining Broad vs. Narrow AI

Herbert Simon won the Nobel Prize in Economics and the Turing Award in computer science. And yet in 1965, he posited that transformative AI would arrive within 20 years.
May 6, 2024

AI hype has boiled over, and its “pasta water” is everywhere. What a mess! How do we make sense of it all? 

I’ve made the necessary mistakes to become the go-to person for AI-related questions from friends, teammates, customers, and my broader community. To stay informed as I fielded an onslaught of questions such as: 

  • Are programmers going to be replaced?
  • How will this impact the film industry?
  • My boss wants me to start using LLMs - what do I do?

I set out to make sense of the State of AI we find ourselves in. I looked at AI’s history to understand the present moment, dug deep into the promise and pitfalls of generative AI, and sought to create a framework for categorizing AI solutions. 

AI’s decades-long history

AI isn’t new. The term “artificial intelligence” was first used in 1955 by John McCarthy at a workshop at Dartmouth1. “Machine learning” came into use in 1959, when Arthur Samuel published his work on teaching machines to play checkers2.

Computers were conceived as a tool to augment human intelligence. Early forays into AI research offered glimpses of human intelligence being not just augmented, but surpassed. This ambition of machine intelligence, often called “general intelligence” or “transformative AI,” has long been held as the end goal of AI and computation.

And, it is always said to be right around the corner! Herbert Simon, recipient of both the Nobel Prize for Economics and the Turing Award for Computer Science, predicted that AI systems would be capable of any human work within 20 years - that was in 1965. A few years later, Marvin Minsky, the co-founder of MIT’s AI laboratory, wrote “Within a generation [...] the problem of creating ‘artificial intelligence’ will be substantially solved.”3

Simon and Minsky were luminaries of AI; yet they were way off in their predictions. Twice before, once in the 70s and again in the 80s, the field of AI found itself in a “winter”: a period in which annual global investment dropped by 50 percent or more and public interest evaporated. The cause, both times, was the same: ambitious claims followed by a failure to produce formidable results. So, is this time different?

A Framework: Broad vs Narrow AI

The United States government defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”4 That definition essentially encompasses any way humans use computers to make decisions. But wait, they’re on to something: AI is a vast spectrum, and there are specific waypoints along this spectrum that are essential to consider.

  1. Algorithm: a set of instructions to perform computation
  2. Narrow AI: machine learning5 models trained on data with a goal pertaining to a specific task
  3. Broad AI (or generative AI): machine learning models designed to produce plausible output content
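To make the first two waypoints concrete, here is a minimal, illustrative Python sketch (my toy examples, not from any particular system): tier 1 is a fixed set of instructions with no training data, while tier 2 derives its behavior from data for one specific task. The nearest-centroid classifier is a hypothetical stand-in for any small task-specific model.

```python
# 1. Algorithm: a fixed set of instructions -- no data, no training.
def moving_average(values, window):
    """Average each consecutive `window`-sized slice of `values`."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# 2. Narrow AI: behavior learned from labeled examples, aimed at one
#    specific task (here, a toy nearest-centroid classifier).
def train_centroids(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is closest to `features`."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))
```

Tier 3 differs in kind, not just degree: a generative model is trained on vast corpora to produce plausible output content rather than a single task-specific prediction, so it can’t be meaningfully sketched in a few lines.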

I am an avid gardener and spent this past weekend clearing leaves and debris from last season out of my four raised beds. When I started, the air was crisp with spring smells. Shortly after, a distant din grew nearer: armies of landscapers, revving their gas-powered leaf blowers, soon flanked me, and the air grew thick with exhaust. Could I have used a gas-powered leaf blower to blast my raised beds? Would I feel better about an electric leaf blower? Or how about an old-fashioned rake? I used my hands.

Choosing the right tool for a job is crucial, no matter the context. I don’t need a team of landscapers armed with high-rpm tools to clear my raised beds, and you may not need a massive, energy-intensive, $100M generative model for most tasks.

Broad AI is being sold to us as a thing to use for everything, every day. It’s being pushed into our core software stack. However, I’d argue that there remains a massive opportunity to turn first to humble algorithms and Narrow AI to solve the pressing problems at hand. They are interpretable, cost less to develop and maintain, and for most tasks perform exceptionally well.

The same John McCarthy who coined the term “AI” lamented: “As soon as it works, no one calls it AI anymore.” Narrow AI has been so effective at problem-solving that we stop calling it AI. Physical modeling, optimization problems, voice-to-text, image recognition, spam filtering, recommendation algorithms – all Narrow AI that works, and has worked for a long time. My company was able to produce the most accurate operational streamflow forecasts, HydroForecast, via rigorous and science-informed Narrow AI.

Broad AI advancements are, no doubt, impressive demonstrations. And it has a few genuinely useful functions, including text summarization, translation, and creative human-in-the-loop brainstorming. But its pitfalls and opacity require careful application and meticulous review of output. Ambitious claims, no doubt. It remains to be seen how formidable the results will be.


  1. https://computerhistory.org/profile/john-mccarthy/#:~:text=McCarthy%20coined%20the%20term%20%E2%80%9CAI,programming%20language%20lisp%20in%201958.
  2. http://infolab.stanford.edu/pub/voy/museum/samuel.html.
  3. “Computation: Finite and Infinite Machines” by Marvin Minsky, 1967.
  4. 15 U.S.C. 9401(3) [section 5002(3) of Pub. L. 116–283]
  5. A subset of AI