In 2023, “AI” was named word of the year by the Collins Dictionary. According to the publishers, use of the term quadrupled over the year. It is fair to say that 2023 will be remembered as the year that ushered in a new era of digital technology.
Wherever we turn, the presence of AI is evident in our daily lives – whether it’s in the creation of personal photos, video dubbing, the latest versions of company chatbots, or even in the new Beatles song playing on radio and music streaming platforms. This leads us to a question posed long ago by the mathematician and computer scientist Alan Turing:
Can machines think?
This question forms part of a thought experiment proposed by Turing in his 1950 article, famously dubbed the imitation game. In this game, a human judge engages with both a machine and a human without knowing which is which. If the judge cannot reliably distinguish between them based on their responses, the machine is deemed to have passed the Turing Test, showcasing a degree of artificial intelligence. The objective is to evaluate a machine’s capability for human-like conversation and behaviour.
This test can be seen as an origin point for what we now recognize as machine learning. The prospect of encoding thought processes on computers, akin to those of living beings, marked a significant milestone for humanity. Today the concept is applied in diverse areas, and on certain tasks machines already outperform humans.
Decoding the Jargon
Here is my selection of terms that often confuse:
- Artificial Intelligence (AI): The expansive field aiming to develop intelligent machines capable of emulating human cognition.
- Machine Learning (ML): A branch of AI that concentrates on algorithms and statistical models, empowering systems to discern patterns and make decisions without explicit programming.
- Deep Learning: A specialized variant of machine learning that utilizes neural networks with multiple layers to extract high-level features from data.
- Statistical Learning: The broader concept encompassing machine learning, emphasizing the utilization of statistical methods to formulate predictions or decisions.
Machine learning
Tom Mitchell once stated: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” This might sound complex, but let’s simplify it.
Imagine creating a program to predict the accumulated precipitation in the next hour based on past data. The task (T) here is to estimate the precipitation accumulation for the upcoming hour, with the performance measure (P) being some error metric, such as the difference between the predicted and observed values. The experience (E) involves the various attempts to make the forecast. The program learns as its predictions approach the observed values over these experiences. How the program learns is governed by a predefined set of configurations known as hyperparameters.
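Mitchell’s T/E/P framing can be sketched in a few lines of code. This is a toy illustration of the precipitation example: the forecasts and observations below are invented, and mean absolute error stands in for whatever metric P a real system would use.

```python
# A minimal sketch of Mitchell's T/E/P framing for the precipitation example.
# All numbers below are made up for illustration.

def mean_absolute_error(predicted, observed):
    """Performance measure P: average absolute gap between forecast and reality."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

# Experience E: two rounds of hourly forecasts (mm of accumulated precipitation).
early_attempt = [2.0, 0.5, 4.0]   # predictions before learning
later_attempt = [1.2, 0.9, 3.1]   # predictions after learning from experience
observed      = [1.0, 1.0, 3.0]   # what actually fell

# The program is said to learn if P improves with experience E:
print(mean_absolute_error(early_attempt, observed))  # larger error
print(mean_absolute_error(later_attempt, observed))  # smaller error
```

The task T is fixed (predict next-hour precipitation); only the predictions change between attempts, and the drop in the error metric is what “learning” means in Mitchell’s definition.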
Types of Machine Learning
In general, there are three types of machine learning:
Supervised Learning
In this paradigm, the model is provided with a dataset and already knows what the correct output should resemble; in other words, each given example has an associated label or target. A model based on supervised learning endeavours to identify the mapping from input to output, allowing it to offer precise predictions when presented with new, unseen data. This is particularly applicable in image recognition, speech recognition, and spam filtering scenarios.
Supervised learning problems can be categorized into regression and classification. In regression tasks, the model aims to fit a function that best maps the input data to a continuous output. In classification tasks, the model seeks a function that best separates the inputs into a set of categories.
Let’s consider a scenario where a botanist collects measurements associated with iris flowers, including the length and width of the petals and the length and width of the sepals, all measured in centimeters. These iris flowers have been previously identified by an expert botanist as belonging to the species setosa, versicolor, or virginica. If we want to build a machine learning model that can learn from the measurements of these irises, whose species is known, so that we can predict the species for a new iris, we are dealing with a classification problem. This is because we aim to categorize new irises based on a labeled dataset.
Now, imagine that we want to create an algorithm that predicts the price of a house based on its size and location in the real estate market. Price as a function of size and location is a continuous output, so this is a regression problem.
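The regression case can be sketched the same way. To keep the example one-dimensional, the sketch below fits price as a linear function of size only, dropping location; the market figures are invented and deliberately lie on a straight line.

```python
# A minimal regression sketch: fit price as a linear function of size alone
# (location is dropped to keep it one-dimensional). All figures are invented.

sizes  = [50.0, 80.0, 100.0, 120.0]    # square metres
prices = [150.0, 240.0, 300.0, 360.0]  # thousands, a perfectly linear toy market

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line(sizes, prices)
print(slope * 90.0 + intercept)  # predicted price for a 90 m² house → 270.0
```

Unlike the iris example, the output here is a continuous number rather than one of a fixed set of categories, which is exactly the regression/classification split described above.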
Unsupervised Learning
In contrast, unsupervised learning is a technique that tackles problems with little or no prior knowledge of what our results should resemble, using unlabeled data: the algorithm must find structure in the inputs on its own.
So, imagine you have a basket of various fruits, but you don’t know which fruits belong to which category. Through unsupervised learning, the algorithm might group the fruits based on similarities in features like shape, color, and size. The algorithm, without any prior knowledge of specific fruit names, autonomously identifies clusters, revealing, for instance, that apples, oranges, and bananas share certain characteristics.
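The fruit-basket idea can be sketched with k-means, a classic clustering algorithm: it alternates between assigning each point to its nearest cluster centre and moving each centre to the mean of its points. The two features and all the numbers below are invented for illustration.

```python
import math

# Toy unsupervised example: unlabelled "fruits" described by two invented
# features, (diameter in cm, weight in g). No fruit names are given.
fruits = [(7.0, 150.0), (7.5, 160.0), (8.0, 170.0),   # small, apple-sized
          (18.0, 1200.0), (19.0, 1300.0)]             # large, melon-sized

def kmeans(points, k, steps=10):
    """Plain k-means: repeatedly assign points to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = list(points[:k])  # naive initialisation: first k points
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cluster)) if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return clusters

groups = kmeans(fruits, k=2)
for group in groups:
    print(group)  # the small fruits end up in one group, the large in the other
```

Note that the algorithm never sees a label: the grouping emerges purely from similarity in the features, which is the defining trait of unsupervised learning.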
Reinforcement Learning
This subset of machine learning enables an AI agent to acquire knowledge through experimentation and feedback from its actions. The feedback can be positive or negative, and the agent’s goal is to maximize its cumulative reward.
In a certain sense, we can say that RL shares similarities with supervised learning when it involves mapping between input and output. However, in RL, the agent autonomously decides what actions to take to accomplish a task correctly.
This approach finds significant application in games like chess, where an agent refines its strategy based on accumulated experiences over time. Consider another example: suppose we want to develop an algorithm that guides a robot to explore and clean a room. It receives positive reinforcement when it successfully cleans a dirty area and experiences negative reinforcement when encountering obstacles or failing to clean certain areas. Through this feedback loop, the robotic vacuum learns to navigate efficiently, avoiding obstacles and optimizing its cleaning strategy over time.
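The robot example can be sketched with tabular Q-learning, one standard reinforcement-learning algorithm. The environment below is a made-up one-dimensional corridor of five cells with dirt at the right end; the rewards and learning parameters are invented for illustration.

```python
import random

# A minimal Q-learning sketch of the cleaning-robot example: a 5-cell corridor,
# dirt in the rightmost cell. Rewards and parameters are invented.
N_CELLS, DIRT = 5, 4
ACTIONS = (-1, +1)  # move left, move right

def step(state, action):
    """Environment: positive reward for cleaning the dirty cell,
    negative reward for bumping into a wall."""
    nxt = state + action
    if nxt < 0 or nxt >= N_CELLS:
        return state, -1.0, False   # hit a wall, stay put
    if nxt == DIRT:
        return nxt, +1.0, True      # cleaned the dirt, episode over
    return nxt, 0.0, False

random.seed(0)
q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                    # experience: many cleaning episodes
    state, done = 0, False
    while not done:
        if random.random() < epsilon:   # explore occasionally...
            action = random.choice(ACTIONS)
        else:                           # ...otherwise exploit what was learned
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the greedy action in every cell points towards the dirt.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_CELLS - 1)])
```

This shows the feedback loop from the text in miniature: wall bumps (negative reinforcement) push the learned values down, cleaning the dirty cell (positive reinforcement) pushes them up, and the agent decides its actions on its own rather than being given input-output pairs.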
Conclusion
In conclusion, delving into the realm of AI is akin to embarking on a journey of continual adjustments and twists. Changes don’t happen in the blink of an eye; they’re more like a slow burn. Yet, many individuals overlook these shifts. The trick? It’s all about hitting the books, maintaining a vigilant eye on the everyday grind, and giving things thoughtful consideration. These skills aren’t just useful; they’re the secret sauce for staying on the AI adaptation rollercoaster. No quick fixes here; it’s an ongoing commitment. So, let’s keep our learning hats on, stay curious, and ride the waves of AI’s ever-evolving journey!
Warning: This article was written with AI help 😉
Joyce Araujo
Sr. Software Engineer
References:
Mitchell, Tom M. 1997. Machine Learning. First edition. McGraw-Hill Science/Engineering/Math.
Turing, Alan. 1950. “Computing Machinery and Intelligence.” Mind. https://academic.oup.com/mind/article/LIX/236/433/986238
BBC News. “AI named word of the year by Collins Dictionary.” https://www.bbc.com/news/entertainment-arts-67271252
Müller, Andreas C., and Sarah Guido. Introduction to Machine Learning with Python: A Guide for Data Scientists.
University of York. “What is reinforcement learning?” https://online.york.ac.uk/what-is-reinforcement-learning/


