AI For MY Future

An initiative by Microsoft in collaboration with Pepper Labs to equip Malaysians with essential AI skills.

1.1 - What is artificial intelligence

This video provides a brief history of artificial intelligence (AI), tracing its origins back to Alan Turing’s pivotal question in 1950, “Can machines think?” It highlights key milestones, including the development of the first AI program in 1956, advancements in pattern recognition and algorithms in the 1960s and 70s, and the data-driven shift fueled by the internet in the late 1990s. The 2000s saw the rise of deep learning, while the 2010s brought AI into everyday life through virtual assistants. By 2021, generative AI, such as GPT and DALL·E, revolutionized content creation. The video emphasizes the importance of understanding AI’s evolution to unlock its potential and foster innovation.

Artificial intelligence, or AI, has become a buzzword you might have heard recently through social media, in conversations, or even on TV. However, it isn’t new. In fact, you’ve been using AI for years and might not even realize it. At times, AI is portrayed as a force capable of changing the way we live, work, and interact. But beyond the hype, AI is a tool – one that has been evolving over decades, shaped by the contributions of countless individuals across various fields. And AI is not just for those who work with technology; it’s for all of us.

So, what is artificial intelligence? AI is the ability of a computer system to learn from past data and errors, enabling it to make increasingly accurate predictions about future behavior. This encompasses a broad range of activities, such as problem-solving, speech recognition, learning, and decision-making.

So, what makes AI “intelligent”? At its core, AI is the intelligence demonstrated by software or machines in performing tasks that typically require human intelligence. These tasks can include recognizing patterns, solving problems, or making decisions. For example, when you interact with Siri on your mobile phone, it’s not exhibiting intelligence in the same way a human would. Rather, humans have programmed it to respond to certain prompts and execute functions, such as providing the latest weather update in your area or sharing your local news. So, is it intelligent, or is it simply trained to respond to your prompts?

Consider a robot playing chess with a human. We observe it making strategic moves, but does that mean it’s “intelligent” in the human sense? Well, the answer is no. The robot is merely following algorithms and strategies it has been programmed with, and it learns from past games through machine learning, which allows it to improve its performance over time.

The question of whether a robot’s actions during a chess game equate to intelligence, or whether it’s just executing a programmed process, isn’t a new one. Back in 1950, Alan Turing, a well-known mathematician and computer scientist, asked: “How can we determine if a machine is intelligent or not?” To answer this question, he created the famous Turing Test, in which a human questioner would interact with either a human respondent or a computer. The questioner would then have to evaluate whether it was a computer or a human being that was responding. The purpose of the test was not to see if the answer was correct. For example, when asking a question like “What is the sum of two plus two?”, Turing did not seek to assess whether the answer would be four, but rather to evaluate whether the respondent was a machine or a human. When Turing created this test, he was anticipating the point where it would no longer be possible to distinguish between responses from machines and humans – where machines would be able to respond with the same nuances that human beings have.

And this analysis of intelligent systems didn’t stop in 1950 with Turing. AI has advanced significantly over the years and is increasingly present in our daily lives. Take, for instance, your interaction with social media apps like Instagram. The app uses machine learning, a subset of AI, to determine the content to show you next based on your recent activity. A similar process is at work when you finish a show on certain streaming apps, too.

Even though AI is all around us, it can be portrayed inaccurately – a common occurrence in Hollywood films. Some movies depict AI as having feelings or being superior to humans. Usually, these narratives serve to provoke thought rather than accurately represent how humans interact with AI. These representations are artistic interpretations and do not actually reflect AI’s capabilities. So, while AI can mimic certain aspects of human intelligence, it’s important to remember that it operates very differently from human intelligence. AI doesn’t have consciousness, emotions, or the ability to understand context in the same way humans do. It’s a tool created and controlled by humans, and its capabilities are limited to what it’s been programmed to do. That is why it is so important to understand what AI is – it allows you to discern fact from fiction.
1.2 - Knowledge vs Intelligence

This video explores the concept of intelligence in machines, discussing John Searle’s “Chinese Room Argument” to illustrate the difference between knowledge and true intelligence. The analogy shows that while a machine can recognize patterns and respond appropriately, it doesn’t actually understand the language or concepts it’s working with. The video also highlights how AI, like virtual assistants and diagnostic systems, can efficiently analyze data and identify patterns to streamline processes, yet still lacks genuine comprehension. Ultimately, it emphasizes that AI operates on data, using it to make predictions and learn from experience, but its capabilities are different from human intelligence.
Continuing with the question, “Can a machine ever truly be considered intelligent?” In 1980, John Searle, an American philosopher, created a test to answer this question. That’s right; artificial intelligence is not a new concept, as it has been developed and tested throughout history. This test is called the “Chinese Room Argument.” Let’s dive in.

Imagine you know how to speak Mandarin, but your friend doesn’t. Your friend enters a room filled with symbols and a rulebook. The rulebook states: “When you see this symbol, answer with that symbol.” You’re outside the room, sending your friend a message in Mandarin. Your friend, inside the room, doesn’t understand Mandarin, but follows the rulebook. They look at the symbols, figure out the pattern, and respond accordingly. When you receive their response, you may think your friend understands Mandarin, but in reality, they’re just following a set of rules without understanding the language.

This test is intended to show that knowledge and intelligence are two very different things. When it comes to AI, a machine may not know Mandarin, but it knows how to recognize the pattern and answer with a matching message. Perhaps this analogy with Mandarin is not part of your daily routine, but interacting with AI virtual assistants like Siri might be. Who hasn’t asked Siri to tell a joke or share a story, right? At the end of the day, these systems convert our language into something they can recognize. They’re constantly searching for matches in their databases, but they don’t truly comprehend our language.

Let’s look at another example of this. Imagine you’re not feeling well, and you arrive at a doctor’s office for a visit. You’ll go through a series of questions asked by nurses: ‘How long have you been feeling like this?’, ‘What is your pain like, on a scale from 0 to 10?’ They’d check your vital signs, and then when you finally see the doctor, you’d repeat much of the same information. It’s a repetitive process that takes up time and resources. So, understanding the difference between knowledge and intelligence can help us make these processes more efficient.

Now, imagine a second situation: upon arriving at the doctor’s office, an AI system asks all these questions. It swiftly analyzes your responses, takes your vital signs, and gathers any other relevant information. By the time you meet with the doctor, the system has already generated a preliminary diagnosis based on the patterns it identified. It streamlines the process, allowing the doctor to address your needs more efficiently. This system operates intelligently by identifying patterns within the collected data, thereby accelerating the diagnosis and treatment process.

As we’ve seen, knowledge and intelligence, while interconnected, are fundamentally different. Knowledge is about understanding the world around us, while intelligence is about applying that knowledge in new and innovative ways. But what fuels this programming? What allows AI to recognize patterns, make predictions, and learn from past experiences? The answer is data.
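To make the rulebook idea concrete, here is a minimal sketch in Python of a “room” that answers Mandarin messages purely by looking symbols up in a table. The phrases and rules are invented for illustration; the point is that the program produces sensible replies without understanding a word of them.

```python
# A minimal sketch of Searle's "Chinese Room": map input symbols to
# output symbols with a rulebook, with zero understanding of what
# either symbol means. The rules below are invented for illustration.

RULEBOOK = {
    "你好": "你好！",    # "hello" -> "hello!"
    "谢谢": "不客气",    # "thank you" -> "you're welcome"
    "再见": "再见",      # "goodbye" -> "goodbye"
}

def room_reply(symbol: str) -> str:
    # The "room" just looks the symbol up; it never knows whether it
    # is discussing greetings, the weather, or anything else.
    return RULEBOOK.get(symbol, "？")  # unknown symbol -> placeholder

for message in ["你好", "谢谢", "再见"]:
    print(message, "->", room_reply(message))
```

To an outside observer the replies look fluent, but the program is only matching keys in a dictionary – exactly the gap between stored knowledge and genuine understanding.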
1.3 - Data everywhere

This video highlights the importance of data in today’s world, emphasizing how AI systems rely on data to analyze and make predictions. It explains that the rise of the internet in the 1990s marked the beginning of a data-driven world, where data is constantly being generated through our daily actions. The video stresses that the quality and diversity of data are crucial for developing accurate AI models, using examples like music streaming services that analyze user behavior to recommend songs. Ultimately, it shows how data is not just about quantity, but about recognizing patterns to predict future trends and unlock the full potential of AI.
Have you ever wondered why data collection is so important in today’s world and what it can be used for? Well, data is the raw material AI systems use to analyze and make predictions, and you probably produce way more data than you realize, even just through your mobile phone.

Back in the 1990s, the rise of the internet marked a significant leap in access to data and the use of AI. This era set the stage for the data-driven world we live in today. Fast forward to the present, and we have no shortage of data fueling analyses and personalized pattern recognition. Just imagine how much data is being created while you watch this video. Maybe you’re sending a text message to a friend. At the same time, someone next to you might be on a call. Somebody else has just posted a photo on their social media page, while another has started a new video series that piqued their interest on a streaming app. That’s right… all of these actions create new data. Data is constantly being generated with each passing second.

Yet this raw data requires careful refinement. Data needs to be refined because the quality of the dataset used to train algorithms has a direct influence on the accuracy of the AI model. That means that diverse, representative, high-quality data is essential to developing advanced AI models.

Consider a music streaming service. It uses the data from your listening habits, such as the songs you skip, the ones you play on repeat, and the playlists you create, to recommend new music that you might like. This is a practical example of how data is used to enhance our everyday experiences and how datasets train the AI models to recommend the next song for you. And that’s what we use machine learning for: to analyze and learn from the data. But it doesn’t stop there. Once we have this information, we can use it to train AI models, improve products and services, make predictions, and even uncover new insights that were previously hidden.

So, you see, understanding data is not just about quantity, but also about quality and diversity. It’s about recognizing patterns and making connections that might not be immediately obvious. It’s about using these patterns to predict future trends, behaviors, and outcomes. So, the next time you use your mobile phone, remember that every action you take, every piece of data you generate, contributes to this vast, interconnected web of information. And it’s through understanding this data that we can truly unlock the potential of AI.
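As a rough illustration of that refinement step, here is a toy sketch of how raw listening events might be cleaned and aggregated before a recommender learns from them. All the events and the scoring rule are invented; a real streaming service would use far richer signals.

```python
# A toy sketch: refine raw listening events into a per-song signal
# that a recommender could learn from. All data here is invented.

from collections import defaultdict

events = [
    {"song": "Song A", "action": "played_full"},
    {"song": "Song A", "action": "played_full"},
    {"song": "Song B", "action": "skipped"},
    {"song": "Song B", "action": "played_full"},
    {"song": "Song C", "action": None},          # malformed row
    {"song": "Song C", "action": "skipped"},
]

# Refinement: drop malformed rows before training on them.
clean = [e for e in events if e["action"] in ("played_full", "skipped")]

# Aggregate into a simple per-song score: +1 full play, -1 skip.
scores = defaultdict(int)
for e in clean:
    scores[e["song"]] += 1 if e["action"] == "played_full" else -1

# The songs you engage with most become candidates for "more like this".
for song, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(song, score)
```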
1.4 - Finding patterns in data

This video explains how AI models recognize patterns in data to personalize user experiences. Using LinkedIn as an example, it shows how AI identifies patterns not only in professional networks and job preferences but also in the timing of user activity. The video highlights how this pattern recognition occurs across various platforms and how it helps companies customize services to meet user needs effectively. By analyzing data from user interactions like clicks and searches, AI models tailor experiences and even predict trends, such as identifying popular products before events like the World Cup, all through machine learning.
Data is constantly being created and refined to train AI models. But what exactly are these models looking for in the data? They’re looking for patterns. Patterns are identifiable, repetitive behaviors. AI models are particularly good at recognizing these patterns, mainly because they have access to a large volume of data.

Let’s consider LinkedIn as an example. Have you ever noticed how LinkedIn suggests people you might know or jobs that you might be interested in? That’s pattern analysis at work. It’s not just based on your connections or job search history, but also on the behaviors of people all around the world who have similar connections or job interests. What’s really interesting is that platforms like LinkedIn may look for patterns not only within your professional network and job preferences, but also in relation to the specific times you are active on the platform. For example, many people tend to browse apps during their lunch breaks or after work hours, so apps may suggest new connections or posts at those times too.

This pattern identification doesn’t just occur on platforms like LinkedIn, but with many companies across many apps and platforms. Imagine the benefits of understanding what users really want, or even creating new consumption trends based on the market. This understanding allows companies to customize their services for each user, thereby enhancing the user experience and boosting satisfaction. It’s about utilizing data to meet the needs of the user in the most effective way possible.

Think about the World Cup. What are potentially the best-selling products before the tournament? You might be thinking team shirts. Recognizing this trend of team shirt consumption can provide a unique competitive edge for companies. It could even guide the development of new products or features.

Every click, every like, every search – these are all pieces of data. AI models process this vast amount of data to notice patterns. These patterns help the AI model tailor your experience, making it more relevant and engaging. It’s like having a personal assistant who knows exactly what you need, even before you do! But how does an AI model learn to recognize these patterns? Well, it does so through machine learning.
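Here is a toy sketch of one such pattern: suggesting new connections by counting mutual ones. The tiny network and names are invented; real platforms combine many more signals than this.

```python
# A minimal sketch of one pattern behind "people you may know":
# counting mutual connections. The network below is invented.

from collections import Counter

connections = {
    "you":  {"aina", "ben", "chen"},
    "aina": {"you", "chen", "devi"},
    "ben":  {"you", "devi"},
    "chen": {"you", "aina", "devi"},
    "devi": {"aina", "ben", "chen"},
}

def suggest(user: str) -> list[tuple[str, int]]:
    mutuals = Counter()
    for friend in connections[user]:
        for friend_of_friend in connections[friend]:
            # Count people you aren't connected to yet, once per shared friend.
            if friend_of_friend != user and friend_of_friend not in connections[user]:
                mutuals[friend_of_friend] += 1
    return mutuals.most_common()

print(suggest("you"))  # [('devi', 3)] – three mutual connections
```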
1.5 - Machine learning

This video explains the difference between artificial intelligence (AI) and machine learning (ML), highlighting that while all ML is a form of AI, not all AI involves ML. Machine learning is a subset of AI where machines learn from data to identify patterns and improve performance over time. The video uses the example of a checkers program created by Arthur Samuel in 1959, which learned from repetitive plays, to illustrate how ML works. It also compares the process of learning to ride a bicycle, where the model adapts to new situations based on patterns it has learned, making accurate predictions and performing tasks over time.
You might have heard the terms “machine learning” and “AI” used interchangeably, and you may be wondering, “What’s the difference?” Well, there is a difference, and it’s important to understand how they differ. Artificial intelligence refers to the intelligence exhibited by software and machines. Machine learning, on the other hand, is a subset of AI. This means that while all machine learning is AI, not all AI involves machine learning.

Machine learning is a type of AI where a machine learns from the data it has been given and can identify patterns in that data. It’s also the process by which machines learn from data and improve their performance over time. It utilizes different types of techniques, such as supervised learning, unsupervised learning, and reinforcement learning.

The term “machine learning” was introduced by Arthur Samuel in 1959. Samuel, an American pioneer in the field of computer gaming and AI, created a program that played checkers against itself. The machine analyzed the game through repetitive plays, identifying strategies to win and avoid losses. Through playing the game and learning what it took to win, it began to detect recurring patterns and follow them more consistently. To achieve this, Samuel had to use mathematics. Concepts such as linear algebra, calculus, probability, and statistics play a crucial role in understanding how machine learning algorithms learn from data and make predictions. These mathematical concepts help in optimizing the performance of the model, understanding the relationships within the data, and making accurate predictions. This checkers game is a prime example of a machine learning program. It was learning from its experiences, replicating successful results in other games, and refining its performance over time.

Need further clarity? Let’s consider the analogy of learning to ride a bicycle. When you first learned to ride a bicycle, you might have started with training wheels. These training wheels are like the initial dataset that we feed into the machine learning model. They provide the basic guidance and stability that the model needs to start learning. As you practice more and more, you start to understand how to balance, when to pedal, and how to steer. This is like how a machine learning model begins to recognize patterns and relationships in the data during the training process. Eventually, the training wheels come off. Now, you’re not just riding the same bike in the same way. You’re adapting to different situations – maybe you’re riding on a hilly path or navigating through a crowded park. Similarly, a machine learning model uses the patterns it has learned to adapt to new data and make accurate predictions or perform the tasks it was designed for. Just like how you can ride different bicycles after you’ve learned the skill, a machine learning model can apply its learning to different but related problems. The bicycle you ride today might not be the one you learned on, but the skill transfers across.

The fundamental concept of machine learning is a model that learns from data, identifies patterns, and uses these patterns to make predictions or decisions. Just like Samuel’s checkers program, machine learning models improve their performance over time through continuous learning and adaptation. So, remember: all machine learning is AI, but not all AI is machine learning, and machine learning is broken into different types, such as supervised, unsupervised, and reinforcement learning.
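To show the “learning from data” loop in miniature, here is a sketch that fits a one-parameter model with gradient descent – the kind of calculus-driven update the video alludes to. The four data points are invented and hide the pattern y = 2x.

```python
# A minimal sketch of learning from data: fit y ≈ w * x by gradient
# descent. The tiny dataset is invented; its hidden pattern is y = 2x.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

w = 0.0                # the model's single parameter, initially a bad guess
learning_rate = 0.01

for step in range(1000):
    # Gradient of the mean squared error with respect to w (the calculus part).
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad   # nudge w in the direction that reduces error

print(round(w, 2))  # ≈ 2.0 – the model has "learned" the hidden pattern
```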
1.6 - Types of machine learning

The video explains how machines learn through three main methods: supervised learning, unsupervised learning, and reinforcement learning, using soccer as an analogy. In supervised learning, a coach teaches the rules, while in unsupervised learning, you learn by observing patterns without guidance. Reinforcement learning is demonstrated through penalty kicks, where rewards and punishments help improve performance. The video also introduces deep learning, an advanced form of machine learning that mimics the brain’s learning process using artificial neural networks with multiple layers to make independent decisions.
Ever wondered how you learned to ride a bike or play an instrument? It’s all about learning from experience, right? Well, machines can do that, too! They can learn in three main ways: supervised learning, unsupervised learning, and reinforcement learning. Let’s break down each with a simple analogy: learning to play soccer.

In the first scenario, you have a coach who supervises you and teaches you all the rules. They carefully explain that when the ball enters the goal, you earn a point, and when the ball goes out of play over the sideline, it must be thrown in by hand. They explain all the rules, how many players there are, how long the game lasts, and so on. In this case, you have a person supervising you and teaching you all the rules of the game. This is like supervised learning.

In the second scenario, you’re on your own. You start attending games every Thursday and Sunday. Initially, you’re puzzled. Why are players using their hands from the sidelines in a game primarily played with feet? Why is the crowd cheering when the ball hits the net, and why do they groan when it doesn’t? But as you keep watching, you start recognizing patterns and understanding the game’s dynamics. This is like unsupervised learning.

Now, let’s consider a third scenario, which is like reinforcement learning. Imagine you’re practicing penalty kicks. Each time you score a goal, you feel a sense of achievement, which is a positive reward. Each time you miss, you feel disappointment, which is a negative reward. Over time, by trying different ways of kicking the ball and learning from the rewards, you improve your ability to score penalty kicks.

So, remember, all machine learning is AI, but not all AI involves machine learning. Machine learning can be broken down into these types: supervised, unsupervised, and reinforcement learning. But we can go even deeper. There are advanced forms of machine learning that mimic the human brain’s own method of learning. They take the concepts of supervised, unsupervised, and reinforcement learning and apply them on a much larger scale. This is called deep learning. Just as neurons in the brain are connected to form a vast network, deep learning uses artificial neural networks with several layers – hence the term ‘deep’. These networks can learn and make decisions on their own. Intriguing, isn’t it?
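As a rough sketch of the penalty-kick idea, the following toy program learns which direction to kick by trial, reward, and a running average – a bare-bones form of reinforcement learning. The success rates are invented and hidden from the learner.

```python
# A toy sketch of reinforcement learning: try actions, collect rewards,
# and gradually prefer what works. The goal odds below are invented.

import random

random.seed(42)
goal_odds = {"left": 0.5, "centre": 0.3, "right": 0.8}  # hidden from learner
value = {a: 0.0 for a in goal_odds}   # learned estimate of each direction
count = {a: 0 for a in goal_odds}

for kick in range(2000):
    if random.random() < 0.1:                      # explore 10% of the time
        action = random.choice(list(goal_odds))
    else:                                          # otherwise exploit the best estimate
        action = max(value, key=value.get)
    reward = 1 if random.random() < goal_odds[action] else 0   # goal or miss
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]  # running average

print(max(value, key=value.get))  # usually "right" – the kick that pays off most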
1.7 - Deep learning

This video explains how deep learning, like a large ship navigating the ocean of data, functions to make accurate predictions using neural networks that mimic the human brain. It learns through trial and error, much like how we learn to cook or recognize images. The more data available for training, the better the computer becomes at making accurate predictions. The video also discusses the use of deep learning in natural language processing (NLP), where computers learn to generate human language by analyzing more textual data.
Imagine you’re standing at the edge of a data ocean, an ocean that’s growing every second with waves of information from all around the world. Traditional machine learning models are like small boats; they can only handle so much before they start to sink. But what if we had a bigger boat, like a ship? Well, that’s deep learning. Deep learning is our powerful ship, designed to navigate the vast ocean of data. It’s inspired by the most complex system we know – the human brain. Just as our brain consists of billions of interconnected neurons working together to make sense of the world around us, deep learning uses neural networks to learn from data and make informed predictions.

Have you ever wondered how the human brain works? How do we learn? Think back to when you were a child. You probably played games that involved image recognition at school. Remember those cards? You’d decide if it was a dog or a cat, and your teacher would confirm. Learning happened through repetition and feedback. This is similar to how a computer learns. The neural network makes an assumption and might be, let’s say, 70% certain that an image shows a dog. Rather than settling for that guess, it adjusts its parameters and retrains over time.

So, why does this matter? Well, the more data the computer has to train on, the faster it will be able to correctly recognize an image – whether it’s a dog, a cat, or even a flower. This is why the topic of data volume is so important.

Now, let’s think about cooking. When you start learning, you begin with simple recipes, like frying an egg or cooking rice. Each time you cook, you learn something new – how high to turn the heat, how long to cook the eggs, how much water you should put in your rice. Over time, you become more proficient, and you can cook these dishes without even thinking about it. This is similar to how traditional machine learning works.

But what if you want to learn to cook a complex dish, like fried chicken? There are so many variables to consider – how you bread the chicken, what temperature you fry it at, and what sort of oil you use. It’s not enough to just practice; you need to understand how all these factors interact. This is where deep learning comes in. Deep learning, like frying chicken, involves a lot of trial and error. The neural network makes an assumption (or a guess), checks how close it was to the right answer, and then adjusts its parameters for the next guess. This process is repeated over and over again, each time getting closer to the correct answer. Just like having more recipes can enhance your cooking skills, the more data the computer has to train on, the better it will be at making accurate predictions.

And just as a child learns to recognize images from animal cards, a computer can learn to generate human language. Another branch of AI is natural language processing, or NLP, which uses similar principles as deep learning but focuses on language. The more textual data a computer learns from, the more proficient it becomes at producing human-like language.
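Here is a minimal sketch of that guess-check-adjust loop: a tiny two-layer neural network learning the XOR pattern with numpy. Every detail (network size, learning rate, number of steps) is chosen for illustration, not taken from any real system.

```python
# A minimal sketch of the deep-learning loop: guess, measure the error,
# adjust the weights, repeat. A tiny two-layer network learns XOR,
# a pattern that a single layer cannot capture on its own.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer, 4 neurons
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network's current guess for every input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: measure the error and trace it back to each weight.
    d_out = out - y                       # error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error passed back to the hidden layer
    # Adjust every parameter a little, then guess again.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should end up close to [0, 1, 1, 0]
```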
1.8 - Natural language processing

This video explains how Natural Language Processing (NLP), a branch of AI, helps machines understand and interpret human language. It covers how NLP is used in everyday tasks like searching for song lyrics, asking for the weather, or using autocorrect. Through techniques like text analysis, sentiment analysis, and speech recognition, NLP allows devices to not only process words but also understand their meaning. The video highlights how AI, including NLP, is behind many tasks we take for granted, making our interactions with technology smoother and more intuitive.
Did you know that every time you ask your phone about the weather, or when autocorrect saves you from sending a text full of typos, you’re interacting with a branch of AI called natural language processing, or NLP? NLP can read, decipher, understand, and make sense of human language. It does this through various methods such as text analysis, translation, sentiment analysis, and speech recognition.

For example, think about the last time you used a search engine like Bing to find the lyrics of your favorite song. You typed in the song name and “lyrics,” and just like that, Bing showed you exactly what you were looking for. But how did it know you wanted the lyrics and not the music video or information about the singer? That’s NLP at work! NLP acts like a translator between us and our devices. It helps our devices understand not just the words we say, but also what we mean. So, whether you’re searching for song lyrics, asking your phone about the weather, or translating a sentence into a different language, NLP is there, making it feel like you’re chatting with a person.

When we talk about different branches of AI like NLP, we can sort them based on the kind of problems they solve. Some AI algorithms are great at recognizing images, some are experts at understanding language, and others are pros at predicting trends. So next time you use your phone or computer, remember AI is working behind the scenes, making you more connected than ever before.
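As a toy illustration of that lyrics-versus-weather decision, here is a sketch that guesses a query’s intent from keyword overlap. Real NLP systems learn these associations from data; the keyword lists here are invented.

```python
# A toy sketch of intent detection: guess what a query is about from
# the words it contains. The keyword lists below are invented.

INTENT_KEYWORDS = {
    "lyrics":    {"lyrics", "words", "sing"},
    "weather":   {"weather", "rain", "temperature", "forecast"},
    "translate": {"translate", "meaning", "malay"},
}

def guess_intent(query: str) -> str:
    tokens = set(query.lower().split())
    # Score each intent by how many of its keywords appear in the query.
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(guess_intent("show me the lyrics of my favourite song"))  # lyrics
print(guess_intent("will it rain in Kuala Lumpur today"))       # weather
```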
1.9 - AI algorithms

The video explains how AI algorithms, such as classification, regression, clustering, and optimization, are used to solve everyday problems. It highlights how machine learning, a subset of AI, utilizes these algorithms to help computers learn from data. Examples include how email services sort spam, how real estate websites predict house prices, how music apps recommend songs, and how optimization algorithms find the quickest delivery routes. The video emphasizes the growing complexity of these algorithms, which help make our digital lives more efficient and convenient.
Ever wondered how your email knows what’s junk and what’s not? Or how your music app always hits the right note with its recommendations? That’s all thanks to something called AI algorithms. Imagine these algorithms as a set of instructions that guide a computer to solve a problem, much like a recipe guiding you to bake a cake. Machine learning, a subset of AI, encompasses deep learning as one of its own subsets. These subsets represent different methodologies through which a computer system can learn from data to solve problems. Within these, there are different learning methods like supervised learning, unsupervised learning, and reinforcement learning, and each of these methods uses a specific set of instructions, or algorithms, which guides the computer on how to solve a problem. And depending on the problem, the algorithm might be quite different. There are a few main types of these algorithms.

Imagine you’re sifting through your emails. Some are important, others… not so much. How does your email service know which ones to put in your inbox and which ones to label as spam? That’s the work of a type of algorithm called classification. It’s like a detective, sorting each email into spam or not spam.

Now, let’s say you’re house hunting. Ever wondered how real estate websites predict the price of a house? They use a type of algorithm called regression. It’s like a fortune teller, predicting the future based on information like size, location, and other factors.

Ever noticed how your favorite music app always knows just the right song to recommend? That’s clustering at work. It’s like a party planner, grouping together songs you like so it can suggest similar ones.

And when you’re hungry and waiting for your food delivery, an optimization algorithm is finding the quickest route to get your food to you. It’s like a navigator, always searching for the best solution.

Data scientists use these and other types of algorithms to make sense of the world around us. They’re like the secret superheroes of the digital age, using their powers to make our lives easier and more convenient. And the best part? These algorithms are getting more advanced every day, learning from the vast amounts of data we generate.
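To make the “party planner” concrete, here is a bare-bones k-means clustering sketch that groups songs by two invented features. A real recommender would use many more features and a proper library, but the loop is the same: assign each point to its nearest centre, then move each centre to its group’s average.

```python
# A toy sketch of clustering: group songs by two invented features
# (tempo, energy) with a bare-bones k-means loop.

songs = [(0.9, 0.8), (0.85, 0.9), (0.2, 0.3), (0.15, 0.2), (0.5, 0.55)]
centres = [(0.0, 0.0), (1.0, 1.0)]        # initial guesses for 2 groups

def nearest(p, cs):
    # Index of the centre closest to point p (squared distance).
    return min(range(len(cs)),
               key=lambda i: (p[0] - cs[i][0]) ** 2 + (p[1] - cs[i][1]) ** 2)

for _ in range(10):
    groups = [[], []]
    for s in songs:                        # 1) assign each song to a centre
        groups[nearest(s, centres)].append(s)
    for i, g in enumerate(groups):         # 2) move centres to group averages
        if g:
            centres[i] = (sum(p[0] for p in g) / len(g),
                          sum(p[1] for p in g) / len(g))

print(groups[0], groups[1])  # low-energy songs vs high-energy songs
```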
1.10 - AI in action

The video explains how machine learning, deep learning, robots, self-driving cars, and the Internet of Things (IoT) are interconnected. It illustrates how devices, from smart appliances to wearables, collect and share data, which can be used by AI systems to perform tasks and make smarter decisions. Robots and self-driving cars, for example, generate valuable data through their movements and interactions, which enhance AI models. The video emphasizes how this vast network of connected devices and data is driving innovation, making our lives more efficient and smarter by enabling better AI solutions.
Machine learning and deep learning use specific algorithms to learn from vast amounts of data and complete tasks. But how does it all fit together in AI systems like robots and self-driving cars? Let’s explore.

You’ve probably heard of robots and self-driving cars, but what about the Internet of Things? Simply put, the IoT is a vast network of interconnected physical devices, ranging from everyday household items like smart door locks and smart thermostats to wearable devices such as smartwatches. These devices are connected to the internet, collecting and sharing data. This connectivity allows these devices to communicate with each other and with us, streamlining our lives in the process.

Now, you might be wondering, “What’s the connection between all of them?” Well, robots are often seen performing repetitive tasks and interacting with humans. But they’re also a goldmine of data! Every movement, interaction, and task a robot performs can be translated into valuable data.

The internet isn’t just about your smartphone or laptop. It’s much more expansive. The internet has woven itself into our everyday lives. From connected homes to smart appliances, the internet is everywhere. Imagine a refrigerator equipped with technology that knows exactly when you’re running low on milk and sends you a reminder to pick some up on your way home. That’s the power of IoT.

Then, consider self-driving cars. These vehicles collect vast amounts of data about their surroundings and use this information to navigate the roads safely and efficiently. This data not only enriches our AI models but also empowers us to drive innovation and efficiency. So, the next time you think about the physical world connecting with the virtual world, remember, it’s not just about convenience. It’s about harnessing the power of data to make our AI models smarter and our lives better.
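As a toy sketch of the smart-fridge example, the following program simulates a device reporting a reading and a simple rule turning that data into a reminder. The device name, threshold, and message format are all invented; a real device would publish readings over a network protocol such as MQTT.

```python
# A toy sketch of the IoT idea: a device reports a reading, and a
# simple rule turns that data into an action. Everything here is
# invented for illustration.

import json

def fridge_report(milk_litres: float) -> str:
    # A real device would send this over the network; here we just
    # serialise the reading as a message.
    return json.dumps({"device": "fridge-01", "sensor": "milk_level",
                       "value": milk_litres})

def handle(message: str) -> None:
    reading = json.loads(message)
    if reading["sensor"] == "milk_level" and reading["value"] < 0.5:
        print("Reminder: pick up milk on your way home!")

handle(fridge_report(0.3))   # low milk -> reminder is sent
handle(fridge_report(1.5))   # plenty of milk -> nothing happens
```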

For professional certification, you may register here directly for access to the Learning Management System.

In collaboration with Microsoft, Pepper Labs is leading a transformative movement to democratise technology and innovation. This bold initiative is set to break barriers, redefine possibilities and make AI accessible to all.
 
The future of inclusive technology starts now – stay tuned for the revolution. While you’re at it, please visit Microsoft AI Resources to explore the courses that have been designed to equip you with the knowledge and skills needed to harness the power of AI in your daily tasks, enabling you to work smarter, automate processes, and unlock new opportunities for growth.

© 2025 All rights reserved.