
What actually is AI and how does it work? Part 1

Estimated reading time: 6 minutes


This is Part 1 in a three-part series on AI.

AI – the biggest buzzword of 2023, used to describe everything from chatbots to self-driving cars. But what actually is AI? Is it some kind of magic that allows machines to think and reason like humans? And is it really an existential threat to our careers and livelihoods?

In reality, Artificial Intelligence is a complex and multifaceted field that encompasses a wide range of techniques and approaches. Getting beaten at chess on your phone is as much AI as getting ChatGPT to do your university coursework. While it may feel like magic as you chat away with ChatGPT, cutting hours of monotonous, tedious tasks out of your workday, it is really a set of tools, algorithms, and some smart maths that enables machines to perform tasks that would normally require human intelligence. Despite social media’s portrayal of AI as an enigmatic and often intimidating subject, understanding the basics is more crucial than ever. In this three-part series, I will attempt to break down the basics of AI in a digestible way, with just a sprinkling of maths, so that you feel confident discussing what really is the biggest innovation since the creation of the internet.

In Part 1 we will explore AI’s history and how we got here. Part 2 will clear up some of the misconceptions and confusion around how AI actually works by exploring the key concepts, techniques, and algorithms used to train AI. Finally, Part 3 will look at what the future holds with AI, and how we can prepare for it.

AI: A brief history

The history of Artificial Intelligence traces back to the mid-20th century, specifically to 1956, when the ‘Dartmouth Summer Research Project on Artificial Intelligence’ was held. At this event, computer scientists John McCarthy and Marvin Minsky set out for the first time their vision of a great collaborative effort towards ‘Artificial Intelligence’. Although the conference is often referred to as the birthplace of artificial intelligence, related concepts were already being investigated, and this earlier work laid the groundwork for what McCarthy and Minsky defined in 1956.

For instance, in 1950, Alan Turing, the protagonist of the movie ‘The Imitation Game’ who cracked the Enigma code during the Second World War, posed the question ‘Can machines think?’ and endeavoured to find out for himself. While he never explicitly described his work as ‘Artificial Intelligence’, he devised the ‘Turing Test’, in which a human evaluator reads a text conversation and tries to determine whether the responses were written by a human or generated by a machine.

After the Dartmouth conference, AI got off to a slow start: computers were expensive and lacked processing power, and pessimistic views of AI’s usefulness were a further obstacle. However, the 1980s marked a turning point as AI began to flourish, primarily through the implementation of expert systems. Expert systems used a rule-based approach to decision-making: rules were programmed by experts such as doctors or engineers, and the machine would then make decisions based on the defined ruleset. Typical examples include banking systems that use expert systems to assist in fraud detection, and traffic management systems, such as smart motorways, that use them to optimise signal timings and reduce congestion.
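To make the rule-based idea concrete, here is a minimal Python sketch of an expert-system-style fraud check. The transaction fields, thresholds, and rules are entirely made up for illustration; a real banking system would encode far more, and far more carefully tuned, expert knowledge.

```python
# A toy expert system: each rule is an if-then check that a human
# expert might write down. The fields and thresholds below are
# hypothetical, purely for illustration.

def flag_transaction(tx: dict) -> list[str]:
    """Apply each expert-written rule to a transaction and collect
    the reasons, if any, for flagging it as suspicious."""
    rules = [
        (lambda t: t["amount"] > 5000, "unusually large amount"),
        (lambda t: t["country"] != t["home_country"], "foreign transaction"),
        (lambda t: t["hour"] < 6, "made in the early hours"),
    ]
    return [reason for condition, reason in rules if condition(tx)]

tx = {"amount": 7200, "country": "FR", "home_country": "GB", "hour": 3}
print(flag_transaction(tx))
# ['unusually large amount', 'foreign transaction', 'made in the early hours']
```

Notice that all the ‘intelligence’ here was put in by the human experts who wrote the rules; the machine only applies them, which is both the strength and the limitation of this approach.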

Deep Blue: A Game Changer

IBM’s development of Deep Blue marked a pivotal moment in AI history. Revolutionary at the time, Deep Blue was simply a computer built to play chess. It was capable of analysing millions of possible moves and outcomes, and drew on an extensive library of past chess games and strategies. It used what is known as a ‘brute force’ approach – searching through vast numbers of possible move sequences and choosing the most promising one. Unlike humans, who can only consider tens of moves per turn, Deep Blue had the processing power to evaluate hundreds of millions of positions per second. With an estimated 10^40 (that’s one followed by 40 zeros) or more legal positions in chess, this processing power certainly came in useful. After several failed attempts, Deep Blue defeated world champion chess grandmaster Garry Kasparov in 1997, drawing public interest in the intelligence of machines for the first time.
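The core idea behind brute-force game-tree search can be sketched in a few lines. Deep Blue’s real engine used specialised hardware and sophisticated pruning, so the Python below is only a toy illustration of the principle, applied to the much smaller game of Nim (take 1–3 sticks; whoever takes the last stick wins) so that the example stays self-contained.

```python
# A toy brute-force game-tree search: recursively score every line of
# play and pick the best one. The game is Nim rather than chess,
# chosen only to keep the example small and runnable.

def best_move(sticks: int, maximising: bool = True):
    """Return (score, move): score is +1 if the maximising player can
    force a win from here and -1 otherwise; move is the best number of
    sticks for the side to move to take."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return (-1 if maximising else 1), None

    best_score, best = (-2, None) if maximising else (2, None)
    for take in (1, 2, 3):
        if take > sticks:
            break
        score, _ = best_move(sticks - take, not maximising)
        if (maximising and score > best_score) or (not maximising and score < best_score):
            best_score, best = score, take
    return best_score, best

print(best_move(10))  # (1, 2): taking 2 sticks forces a win
```

The same recursion underlies chess engines; the difference is that chess has so many positions that the search must be cut off at a fixed depth and combined with a scoring function, which is exactly where Deep Blue’s raw processing power came in.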

Neural Networks: A cat is a cat

If we leap forward to the 2010s we see another turning point in AI, bringing us closer to our current understanding: neural networks. Neural networks are a type of machine learning model designed to recognise complex patterns and relationships in data by simulating the way neurons work in the brain. Without dwelling too much on the mathematical foundations, neural networks consist of layers of interconnected nodes. Each node receives input from the nodes before it, processes that information, and passes the result on to the nodes after it, until the final layer produces an output.
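As a concrete (if heavily simplified) illustration, here is a forward pass through a tiny two-layer network in Python with NumPy. The layer sizes and the random weights are arbitrary; in a real network the weights would be learned from data rather than drawn at random.

```python
import numpy as np

# A tiny two-layer network, forward pass only. Three input values flow
# through weighted connections to four hidden nodes, each applying a
# simple non-linearity, and on to a single output node.

rng = np.random.default_rng(0)

x = rng.normal(size=3)           # input layer: 3 values
W1 = rng.normal(size=(4, 3))     # weights: input -> 4 hidden nodes
W2 = rng.normal(size=(1, 4))     # weights: hidden -> 1 output node

hidden = np.maximum(0, W1 @ x)               # each hidden node: weighted sum, then ReLU
output = 1 / (1 + np.exp(-(W2 @ hidden)))    # output node: sigmoid squashes to 0..1

print(output)  # a single score between 0 and 1, e.g. 'cat' vs 'not cat'
```

Training is the process of adjusting the weights W1 and W2 so that the output matches known examples, which is where the ‘learning’ in machine learning happens.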

Work on neural networks can be traced back to the 1940s, with research intensifying in the 1950s and 60s. Unfortunately, as in other areas of AI, interest in neural networks was limited by scepticism about their ability to solve complex problems. This changed in 2012, when Google’s secretive research laboratory ‘X’ reported that it had trained a neural network, running across 16,000 computer processors, to identify cats on the internet.

They achieved this by letting the neural network explore YouTube freely, watching whichever videos it selected, while the researchers monitored what it began to learn. Astonishingly, one of the neurons developed the ability to detect cats in YouTube videos. What is particularly interesting here is that the neural network was never told or programmed to look for cats; rather, it developed an understanding of what cats look like autonomously.

Conclusion

In this first part of our ‘What actually is AI and how does it work?’ series, we have explored AI’s history, from its humble beginnings with the Turing Test and the Dartmouth conference, through the challenges it faced, to its rise through expert systems and neural networks. These milestones demonstrate the remarkable progress AI has made over the past decades.

Next time we will attempt to demystify some of the concepts around how AI works – what sort of algorithms are involved in AI development, and how these algorithms allow machines to develop ‘intelligence’. Find part 2 here.

Dylan is the Web Developer at Imaginaire and likes writing about all things Web Dev, AI, and UI/UX.
