AI lab – AI in Action | Episode 01: AI History

Caveat: This Artificial Intelligence (AI) timeline does not claim to be all-encompassing. Many steps led to the recent surge of attention around AI, and this story and the accompanying infographic aim to show that AI is not the new kid on the block.

Today, we are kickstarting our AI in Action series by diving headfirst into the key milestones that led to the gradual rise of Artificial Intelligence, or AI for short. You might think it’s some shiny new invention, looking at all the recent media coverage about robots taking over your jobs and writing bad poetry. But hold on to your Roomba, because AI has been around longer than your grandma’s pocket calculator.

After all, AI is a blend of mathematics, logic and computing power. Or to put it more simply, AI is like having a particularly clever parrot on your shoulder that not only mimics you but also learns from you, makes decisions for you, and occasionally, just occasionally, outsmarts you. It’s this magical concoction of computer science, stubborn optimism, and a dash of existential dread that aims to create machines that can think, learn, and maybe even feel. Kind of like teenagers, but with more processing power and less attitude.

Our story begins in 1642, with a young French rebel named Blaise Pascal. Instead of doing whatever 17th-century teenagers did for fun, he built the “Pascaline,” one of the world’s first mechanical calculators. This wasn’t Space Odyssey’s HAL 9000, but it was a sign that people were already thinking about how machines could help us with some number crunching.

Then things get philosophical! Just a few years later, in 1651, Thomas Hobbes ponders the connection between reasoning and computation in his book “Leviathan,” famously declaring that reasoning is nothing but “reckoning.” Basically, he’s asking whether thinking could be mechanized. A question that, let’s be honest, AI researchers are still grappling with today.

Fast forward a couple of hundred years, and we’ve got Charles Babbage dreaming up his amazing “Analytical Engine” in 1837. This wasn’t just a glorified calculator; it was a machine that could be programmed to do different things, kind of like the smartphones we all carry around. And get this: in 1843, a brilliant woman named Ada Lovelace wrote the first-ever computer program for this very machine. That’s right, folks, the world’s first computer programmer was a woman! Way to go, Ada!

The point is, AI has been brewing for centuries. In fact, the concept of neural networks, a cornerstone of modern AI, goes all the way back to the 1940s, when Warren McCulloch and Walter Pitts created the first mathematical model of a neural network in 1943, not for Instagram, but for science.
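
For a sense of just how simple that first model was, here’s a minimal sketch in Python (not anything McCulloch and Pitts actually wrote, and the weights and thresholds are invented purely for illustration) of the kind of threshold neuron they described: it fires only when the weighted sum of its binary inputs clears a threshold.

```python
# A toy McCulloch-Pitts-style threshold neuron: binary inputs, fixed weights,
# and a hard threshold. The weights and thresholds below are purely illustrative.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron acts like a logical AND:
print(mcp_neuron([1, 1], [1, 1], threshold=2))  # 1 -> fires
print(mcp_neuron([1, 0], [1, 1], threshold=2))  # 0 -> stays silent

# Lowering the threshold to 1 turns the very same neuron into a logical OR:
print(mcp_neuron([1, 0], [1, 1], threshold=1))  # 1 -> fires
```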

And the timeline just keeps rolling on!

1950 brought us the “Turing Test” by Alan Turing, challenging machines to fool us into thinking they’re human. A test many humans would fail today, especially on Twitter.

There were even early attempts to build machines that could learn, like Marvin Minsky’s SNARC (the Stochastic Neural Analog Reinforcement Calculator) in 1951, which used a whopping 3,000 vacuum tubes to simulate a rat learning its way through a maze. Yes, they basically spent months building the world’s most complicated virtual rat, a precursor to today’s most advanced AI: scrolling through Netflix without picking anything to watch.

AI was basically simmering under the surface, the groundwork being laid.

By the mid-1950s, AI wasn’t just happening anymore, it had a name. In a 1955 proposal for what became the now-famous 1956 workshop at Dartmouth College, a group of brilliant minds including John McCarthy, Marvin Minsky, and Claude Shannon coined the term “Artificial Intelligence.” This wasn’t just a fancy label, it was a defining moment. AI was no longer some obscure concept, it was a field of study with a clear mission: to create intelligent machines. But even then, the public was still mostly clueless.

By 1959, the term “Machine Learning” was being thrown around, coined by Arthur Samuel, who popularized the idea that computers could learn and improve without needing to be explicitly programmed.
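
To make that idea concrete, here’s a tiny sketch in Python (nothing like Samuel’s original checkers program, and every number is invented for illustration): instead of hand-coding the rule behind some data, we let the machine infer it from examples by gradient descent.

```python
# A minimal illustration of "learning without being explicitly programmed":
# fit y = w * x to a few examples by gradient descent. Data, learning rate and
# step count are made up purely for demonstration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # noisy samples of roughly y = 2x
w = 0.0                                       # the parameter the machine "learns"

for step in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad                          # nudge w to reduce the error

print(round(w, 2))  # ~2.04: the rule was learned from the examples, not hand-coded
```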

The 1960s brought “ELIZA” (Joseph Weizenbaum’s 1966 creation), one of the first chatbots, a primitive AI you could actually converse with. Sure, it probably wouldn’t win a debate with Socrates, but it shows how far AI has come since those early days.

The decades that followed were a rollercoaster ride for AI. The initial excitement of the 50s and 60s led to some overblown expectations, and by the mid-70s, the first “AI winter” hit. Funding dried up as progress stalled and the promised intelligent machines seemed like a distant dream.

But even in the cold, research continued. Groundbreaking developments like the popularization of “Backpropagation” in 1986 laid the groundwork for future advancements. And in 1989, “Convolutional Neural Networks” showed promise in areas like image recognition, proving AI wasn’t a dead end. The late 80s and early 90s brought a second “AI winter,” but researchers kept pushing. Then, in 1997, IBM’s Deep Blue shocked the world by defeating chess champion Garry Kasparov. This wasn’t just a game of checkers, it was a sign that AI was back, and more powerful than ever.

The new millennium brought more innovations. In 2002, the Roomba, the first commercially successful autonomous vacuum cleaner, hit the market, proving robots weren’t just for sci-fi movies or episodes of The Jetsons anymore. Fast forward to 2011, and IBM’s Watson not only dominated the US game show Jeopardy!, but also showcased AI’s potential for real-world applications.

And let’s not forget the rise of virtual assistants like Siri, Google Now, and Cortana in the early 2010s. These AI-powered companions put the power of information and automation right in our pockets.

The breakthroughs kept coming. In 2014, Ian Goodfellow and his colleagues introduced Generative Adversarial Networks (GANs), in which two neural networks essentially play a game against each other, pushing each other to improve. This led to significant advancements in image generation and manipulation.
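
For the curious, here’s a deliberately tiny sketch of that game in Python with NumPy. It’s not any published implementation: the “data” is a one-dimensional Gaussian, the generator is a linear map of noise, the discriminator is logistic regression, and every number is made up for illustration, but the alternating tug-of-war between the two players is the core GAN idea.

```python
# A toy GAN "game" on one-dimensional data, using plain NumPy. Real data comes
# from a normal distribution; the generator is a linear map of noise and the
# discriminator is logistic regression. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 0.5 * (1.0 + np.tanh(0.5 * x))  # numerically stable logistic

a, b = 1.0, 0.0        # generator parameters: G(z) = a*z + b
w, c = 0.1, 0.0        # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(5000):
    real = rng.normal(4.0, 1.0, batch)   # the "true" data the generator must imitate
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                     # the generator's current attempts

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1, i.e. learn to fool the discriminator
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(f"generated mean ~ {samples.mean():.2f}, std ~ {samples.std():.2f}")
# The generated mean should have drifted toward the real data's mean of 4.0.
```

In real GANs the two players are deep networks and the data is images or audio rather than single numbers, but the game is the same.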

Then came 2016, and the world watched in awe as DeepMind’s AlphaGo program defeated Lee Sedol, the world champion Go player. Go, unlike chess, has far too many possible positions to brute-force and relies heavily on intuition, so the victory demonstrated the growing sophistication of AI in complex decision-making tasks.

The party didn’t stop there. In 2017, Google researchers introduced the Transformer, a groundbreaking neural network architecture that revolutionized natural language processing (NLP). And who can forget Sophia, the first humanoid robot to be granted citizenship by a country, namely Saudi Arabia, in 2017? A little creepy, sure, but a sign of the blurring lines between humans and machines.

The late 2010s saw the rise of the first large language models (LLMs), like OpenAI’s GPT-1 and Google’s BERT (both released in 2018), along with the exploration of Graph Neural Networks. These advancements unlocked new possibilities for AI to understand and generate complex text.

And wouldn’t you know it, the 2020s haven’t disappointed. In 2020, GPT-3 took the world by storm, showcasing the power of self-supervised learning in natural language processing. Meanwhile, DeepMind’s AlphaFold 2 dominated the CASP competition, a challenge focused on protein structure prediction – a critical step in understanding diseases and developing new medicines.

The story continues in 2021 with Google’s LaMDA language model and the launch of GitHub Copilot, an AI tool that assists programmers with writing code. And who can forget DALL-E, OpenAI’s text-to-image generator that started turning our wildest dreams (or nightmares) into digital art? Just don’t look at the number of fingers!

2023 and 2024 are witnessing an explosion of AI for the general public. We’re not just talking about chess-playing machines anymore.

Everyone has heard about ChatGPT and GPT-4, DALL-E 3, Midjourney, and Stable Diffusion.

More importantly, AI is seeping into every corner of our lives. It’s not just about entertainment, either: AI is being used in healthcare for faster diagnosis and drug discovery, and even in climate science to tackle some of our planet’s biggest challenges.

Looking back at this timeline, it’s obvious that AI has been around for decades, with roots stretching back centuries, quietly evolving and growing in complexity. So next time someone talks about robots taking over, remind them that AI’s been around since your great-great-great grandpappy was wearing a top hat. And even before that!

Join us next time as we delve into the AI terminology to understand the reality behind the buzzwords. In the meantime, stay informed, inspired, and ready for action.