Should I be Worried About AI?
Modern AI excels at language but can't set goals or learn independently.
Major advances in robotics and real-world data collection are needed before AI approaches human-like capabilities.
AI will augment human capabilities rather than replace them entirely.
Contrary to popular impression, Artificial Intelligence (AI) is nothing new—it's been around for over 70 years.
The early theory of AI was pioneered by England's Alan Turing, who is also regarded as the father of the computer and was instrumental in breaking the Nazis' "Enigma" code (one of the key deciding factors in the Allies winning World War 2). Turing was the subject of "The Imitation Game" starring Benedict Cumberbatch, which by all accounts is a fairly accurate depiction of aspects of his life.
"The Imitation Game" is actually a reference to a test of the same name that Turing put forward in his 1950 paper, which was designed to distinguish between humans and machines. Tragically, Turing committed suicide a couple of years after being convicted of "gross indecency" after being outed as gay by the police. His conviction meant he was no longer able to work on anything secret.
Despite a number of scientific and mathematical advances in the decades since, not much happened in the public eye until the late noughties, with the advent of "machine learning". This was the first AI wave you're likely familiar with, and arguably the first widely useful form of AI.
With a huge amount of data and a heap of human toil, this type of AI could be built for some very specific tasks, like identifying and interpreting number plates in images of cars or categorising text as "happy", "sad" or "angry" in customer support data. This form of AI is referred to as "Artificial NARROW Intelligence" because it's only useful for really specific use cases.
In contrast, the Turing Test ("The Imitation Game") is designed to detect "Artificial GENERAL Intelligence", the kind of AI you and I would most commonly think of from the movies: "being like a human".
The big limitation of this AI, right up until the late 2010s, was that "heap of human toil" I mentioned above. In the industry, this is referred to as "supervised learning", i.e. learning supervised by humans. We didn't have the option (in economic terms) of just adding a million data scientists to "supervise" the learning of the machines. Still, narrow intelligence was good enough to get a car to drive itself, by combining a bunch of these narrow AIs.
In a series of breakthroughs this past decade, clever mathematicians and engineers figured out a way to remove humans from most of the "learning" process. This is most often referred to as "unsupervised learning". In reality, the goals are still set by humans, but the computer does the bulk of the grunt work of reading and interpreting information without needing input from a mass of humans.
Combined with advancements in computing power, this gave birth to today's wave of AIs like ChatGPT. These are referred to as "Large Language Models" (LLMs) or "Generative AI" (GenAI). It's important to understand, though, that LLMs and GenAI are still considered by experts to be "Artificial NARROW Intelligence". That is, they're really good at one thing: language.
When you're using a GenAI product like ChatGPT, it's really easy to fall into the trap of thinking you're talking to a human. Actually, this made me realise how much of "being human" is about language. It's a bit like how strongly our subconscious is biased toward sight over all the other senses (play any virtual reality game while someone's talking to you and you'll understand what I mean: you just don't notice the person talking to you, right next to you).
So what's the gap between conversing like a human and being like a human?
I would argue that GenAI has actually ticked off a bunch of what's needed, but it's missing some (very) important things we possess:
First, GenAI isn't setting goals—that's us when we ask the question. Once we have a goal in mind, the GenAI can use language to infer what our goal might be, but it's not setting goals of its own.
Second, GenAI isn't independent; it's not making its own decisions. It's purely reactive. If we don't ask, the GenAI doesn't answer.
Third, GenAI isn't able to direct its own learning. Even with "unsupervised learning", the data has been generated by humans and supplied by humans to the machine. Imagine a child growing up in a room with no doors or windows: that's GenAI without a human. In fact, this area won't really progress much faster until the current AI players get access to a LOT more real-world data (think wandering around looking at things), and for that to happen robotics will need to become a part of everyday life. Only then will computers be able to explore and learn independently. The smartest players in this space have been working on AI for over a decade, but they're also working on robots.
If you think about it, we've been using narrow technology since the first animal struck two rocks together. Every time a big new technology has arrived, the incumbents have panicked about everyone being out of a job, and although there have been isolated impacts, these have typically played out over a decade or two.
Think about how long the internet ACTUALLY took to make a major dent in retail shopping, for example. So I'm squarely in the camp where we become cyborgs—where AI powers us to be better and do more. Really, that's already happened to most of the population. Think about how useful you'd be today without using a smartphone. Ok, it's not embedded, but you use it every day and you are arguably more productive with it than without it.
Think about when calculators first came into widespread use: same thing. Obviously, jobs like care and social work will be the last to go, because they rely the most on real-world interactions. If any jobs are under threat, it would be software engineering, and I've not seen anyone lose their job yet. On the contrary, I see it opening up more opportunities for young people to enter the market by lowering the barriers.
I believe we're a fair way off AI generally (pun intended) taking over our role in society and our jobs, because the current LLM-based AIs aren't looking like they'll bridge the gap to independence and to interacting directly with the real world. AI tools are not going to take over your job; they're going to help you. There are some big gaps between today's AI and what's needed for human-like intelligence, and even then robotics has to take a huge leap before AI can be useful in the real world and learn autonomously.
Andrew Walker
Technology consulting for charities
https://www.linkedin.com/in/andrew-walker-the-impatient-futurist/
Did someone forward this email to you? Want your own subscription? Head over here and sign yourself right up!
Back issues available here.