WHY EVERYONE'S SUDDENLY TALKING ABOUT AI (AND WHAT ACTUALLY CHANGED)

Why AI exploded in 2023, what's actually new, and what's just rebranded old technology.

Context: This piece reflects how AI tools and public debate looked at the time it was written.


Introduction

AI isn't new, but in 2022 it stopped being a research curiosity and became something ordinary people could actually use. This piece explains what changed, why ChatGPT went viral, and why this isn't AGI despite the hype.

The short answer: AI got good enough, fast enough, and cheap enough that ordinary people could use it. That's the shift - not intelligence but usability.

AI has existed for decades - what's different now?

Artificial intelligence as a field started in the 1950s, and researchers have been working on it ever since. Machine learning, neural networks, and natural language processing have all been around for decades.

Early AI could play chess, recognise handwriting, filter spam, and recommend films - useful, but narrow: each system did one thing. None of them could hold a conversation, write an essay, or generate an image from a description.

The breakthrough wasn't inventing AI but scaling it up to the point where it became general-purpose enough to be useful to non-experts.

The breakthrough was transformer models and massive scale

In 2017, researchers at Google published a paper called "Attention Is All You Need", introducing a new architecture called the transformer - a way of processing language that's far more efficient to train than previous methods, because it can look at a whole sequence of text at once rather than word by word.

Transformers made it possible to train models on vastly more data than before, and it turned out that when you trained these models on huge amounts of text (billions of words), they got dramatically better not just at one task but at many tasks.

Scale mattered more than anyone expected. Bigger models trained on more data didn't just get incrementally better - they made qualitative leaps, starting to do things that smaller models simply couldn't.

By 2020, OpenAI had trained GPT-3, a model with 175 billion parameters (the internal settings that control how it predicts text), and it could write coherent essays, answer questions, generate code, and translate languages. It wasn't perfect, but it was shockingly capable for a system that was just predicting the next word.
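To make "just predicting the next word" concrete, here's a minimal sketch of what that looks like in code. It uses the small, freely downloadable GPT-2 model via the Hugging Face transformers library as a stand-in, since GPT-3's weights aren't public; the idea is the same, just at a far smaller scale, and the prompt is only an example.

    # Minimal sketch: ask a small language model (GPT-2) what token it would
    # predict next after a prompt. GPT-3 works the same way, only much bigger.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The Eiffel Tower is in the city of"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # one score per vocabulary token, per position

    # The scores at the last position are the model's guesses for the next token.
    top = torch.topk(logits[0, -1], k=5)
    for score, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {float(score):.1f}")

Everything the model produces - essays, answers, code - comes from repeating that one step: score every possible next token, pick one, append it, and score again.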

Why ChatGPT's release in 2022 was the inflection point

GPT-3 existed, but it wasn't easy for ordinary people to use because you needed technical knowledge or had to pay for API access. It was a tool for developers, not the public.
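To see what "API access" meant in practice, here's a rough sketch of developer-style use with the current openai Python package (the GPT-3-era library and model names were different, you need your own API key, and the model name below is just illustrative):

    # Rough sketch of using a language model through an API rather than a chat
    # interface. Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # a completion-style model; GPT-3-era names differed
        prompt="Write a two-sentence summary of why the sky is blue.",
        max_tokens=80,
    )
    print(response.choices[0].text)

None of this is hard for a programmer, but it's a world away from typing a question into a chat box.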

Then in November 2022, OpenAI released ChatGPT: the same underlying technology (a fine-tuned version of GPT-3.5) wrapped in a simple chat interface that anyone could use for free, with no setup required. Just type a question and get an answer.

It went viral: a million people tried it within the first week, and within two months 100 million people had used it, making it the fastest-growing consumer application in history at that point.

Why? Because it was the first time most people had interacted with AI that felt genuinely useful rather than like a gimmick or a narrow tool. It could help with real tasks: drafting emails, explaining concepts, brainstorming ideas, writing code.

It wasn't smarter than previous AI but was more accessible, and that made all the difference.

What made this generation suddenly useful

Three things changed.

It was good enough to be helpful. Previous AI systems were either too narrow (they did only one thing) or too unreliable (they failed too often to be trusted). This generation of AI is general-purpose and competent enough that you can actually use it for many everyday tasks.

It was fast enough to feel interactive. Older AI systems were slow enough that you'd submit a query and go and put the kettle on; modern AI responds in seconds, which makes it feel like a conversation and changes how people use it.

It was cheap enough to be free or nearly free. Training these models costs millions, but once they're trained, running them is relatively cheap, so companies can offer them for free (with limits) or for a small subscription fee. That puts them in the hands of millions of people rather than just researchers or businesses with big budgets.

Why this isn't AGI (and what that term means)

Despite the hype, this isn't artificial general intelligence. AGI would be AI that can do any intellectual task a human can: understand the world, reason about it, learn new skills on its own, and apply knowledge across domains.

We don't have that yet. Not even close.

What we have is narrow AI that's good at a wider range of tasks than previous narrow AI. ChatGPT, for example, can write essays, answer questions, and generate code, but it can't reason, doesn't understand, and can't learn from experience. It does what it was trained to do and nothing more.

AGI is still theoretical. Some researchers think it's decades away, some think it's impossible with current approaches, and some think it's closer than either camp expects. No one knows for sure. What we do know is that calling current AI "AGI" or "almost AGI" is wrong. Impressive doesn't mean intelligent, useful doesn't mean conscious, and sounding smart doesn't mean understanding.

What actually happened

What changed wasn't a sudden leap in intelligence but a shift in usability: AI crossed a threshold and became good enough, fast enough, and accessible enough that ordinary people could use it for ordinary tasks. That's new, and that's why everyone's talking about it.

But the underlying technology (predicting text based on patterns in training data) hasn't fundamentally changed - it's just bigger, faster, and more polished.

Understanding that helps cut through the hype. This is a powerful tool, but it isn't magic, and it's definitely not the dawn of machine consciousness, no matter what the headlines say.
