Artificial Intelligence

A Brief Introduction to Artificial Intelligence

What is AI and How Is It Going to Shape the Future

By Dibbyo Saha, Undergraduate Student, Computer Science, Ryerson University

What is Artificial Intelligence?

Generally speaking, Artificial Intelligence (AI) is a computing concept that enables machines to think, learn, and solve complex problems in ways that resemble human intelligence. As humans, we perform tasks, make mistakes, and learn from those mistakes (at least the wise among us do). Similarly, AI systems are designed to approach problems, make errors during the process, and improve through self-correction as part of continuous self-improvement.

To better understand this, imagine playing a game of chess. Every poor move reduces your chances of winning. When you lose, you reflect on the mistakes you made and try to avoid them in your next game. Over time, your strategy improves and your probability of winning increases significantly. AI systems are programmed to operate in a similar manner. They analyze outcomes, adjust internal parameters, and gradually improve their accuracy and performance.

Artificial Intelligence vs Traditional Robotics

When we hear the word “robot,” we often imagine a metallic humanoid figure with glowing eyes and a mechanical voice — an image shaped largely by popular culture. Movies and television have long portrayed robots as either heroic saviors or terrifying villains. However, real-world robots are far less dramatic and far more specialized.

Traditional robots are programmed to execute specific tasks according to predefined instructions. They function strictly within the boundaries of their programming. Consider a self-driving car designed using traditional robotic principles. It might follow a fixed, pre-programmed route to a destination without adapting to real-time traffic conditions, roadblocks, or unexpected changes. This rigidity can lead to inefficiencies or even accidents.

A human driver, on the other hand, would evaluate traffic updates, choose the fastest route, and adjust decisions dynamically. This adaptive, creative thinking is what traditional robotics lacks. If a command falls outside the robot’s programming, it may fail to respond entirely. Artificial Intelligence is being developed to overcome these limitations. Unlike conventional rule-based systems, advanced AI systems aim to simulate intuitive reasoning and complex decision-making processes similar to the human mind.

A Brief History of AI

The concept of Artificial Intelligence is older than many people assume. In 1950, Alan Turing proposed the idea of the Turing Test, a method to evaluate a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.

In the 1960s, the first chatbot program, ELIZA, was created, demonstrating early natural language processing capabilities.

In the 1990s, IBM developed Deep Blue, which went on to defeat world chess champion Garry Kasparov in a historic 1997 match.

In 2011, Siri was introduced by Apple as a digital assistant, bringing AI-powered voice interaction to mainstream consumers.

In 2015, Elon Musk and several other prominent figures founded OpenAI, further accelerating AI research and development worldwide.

Artificial Intelligence vs Machine Learning vs Deep Learning

Artificial Intelligence is a broad and rapidly expanding field that includes several subfields, most notably Machine Learning and Deep Learning.

Machine Learning refers to systems that improve their performance by learning from data through algorithms. Instead of being explicitly programmed for every task, these systems identify patterns and make predictions based on data.

Machine Learning includes several learning paradigms:

Supervised Learning:
In supervised learning, machines are trained using labeled datasets. For example, a model may be trained using thousands of labeled images of dogs from different breeds, angles, and lighting conditions. Over time, the system learns the defining characteristics of dogs and can accurately identify a dog in a new image that was never part of its training data.
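
As a toy illustration of the idea (not the image pipeline described above), the sketch below trains a 1-nearest-neighbor classifier on a handful of labeled feature pairs; the data, labels, and `nearest_neighbor` helper are all hypothetical stand-ins for a real labeled dataset:

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbor classifier.
# The toy examples are (size, weight)-style feature pairs with labels;
# a real system would train on thousands of labeled photos instead.

import math

def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query point."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Labeled training data: feature vectors tagged "dog" or "cat".
training_data = [
    ((50.0, 30.0), "dog"),
    ((55.0, 32.0), "dog"),
    ((20.0, 10.0), "cat"),
    ((22.0, 12.0), "cat"),
]

# A new, never-before-seen example is classified from what was learned.
print(nearest_neighbor(training_data, (52.0, 31.0)))  # dog-like features
```

The key property is the same as in the dog-image example: the label for the unseen input comes entirely from the labeled examples the system was trained on.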

Unsupervised Learning:
In unsupervised learning, the data is not labeled. The machine analyzes raw datasets and identifies hidden patterns or structures independently. It clusters similar data points together and draws conclusions without human-provided labels.
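
A minimal sketch of that clustering behavior, using a tiny pure-Python k-means loop on unlabeled 1-D points (the data and the `kmeans` helper are illustrative inventions, not a production clustering API):

```python
# Minimal sketch of unsupervised learning: k-means clustering on unlabeled
# 1-D points. No labels are given; the algorithm finds the groups itself.

def kmeans(points, centers, iterations=10):
    """Repeatedly assign each point to its nearest center, then recenter."""
    clusters = [[] for _ in centers]
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Move each center to the mean of the points assigned to it.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]        # two natural groups, unlabeled
centers, clusters = kmeans(data, [0.0, 5.0])  # arbitrary starting centers
print(centers)   # one center settles near 1.0, the other near 9.1
```

No human ever tells the algorithm which points belong together; the grouping emerges from the structure of the data alone.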

Reinforcement Learning:
Reinforcement learning is based on feedback. The system makes predictions or decisions and receives positive or negative feedback depending on accuracy. For instance, if a machine incorrectly identifies a basketball as a tennis ball, it receives negative feedback. Through repeated feedback loops, it gradually improves its decision-making accuracy.
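
The feedback loop described above can be sketched in a few lines. Here the `feedback` function stands in for the environment, and every name and number is a hypothetical illustration of reward-driven learning rather than a real RL library:

```python
# Minimal sketch of reinforcement learning: the agent tries each guess,
# receives +1 (correct) or -1 (incorrect) feedback, and shifts its
# preference toward whichever guess keeps earning positive feedback.

def feedback(guess):
    """The environment: 'basketball' is the correct answer here."""
    return 1.0 if guess == "basketball" else -1.0

values = {"basketball": 0.0, "tennis ball": 0.0}   # learned preferences
learning_rate = 0.5

for _ in range(20):                  # repeated feedback loop
    for guess in values:             # try each option and learn from reward
        reward = feedback(guess)
        values[guess] += learning_rate * (reward - values[guess])

best = max(values, key=values.get)
print(best, values)                  # preference converges to "basketball"
```

The update rule nudges each stored value toward the reward just received, so repeated negative feedback steadily suppresses the wrong answer.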

Deep Learning, a subset of Machine Learning, simulates how the human brain processes information. It relies on artificial neural networks to analyze vast amounts of data. Deep learning systems require enormous datasets and substantial computational power but are capable of highly advanced tasks such as image recognition, speech recognition, and language translation.
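
As a rough illustration of the underlying mechanism (a single artificial neuron rather than a full deep network), the sketch below trains one sigmoid unit by gradient descent to reproduce a logical AND; all names and constants are illustrative:

```python
# Minimal sketch of the building block of deep learning: one artificial
# neuron trained by gradient descent to mimic a logical AND. Real deep
# learning stacks huge numbers of such units into many layers.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: two inputs and the target output of AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0
rate = 1.0

for _ in range(5000):                       # repeated passes over the data
    for (x1, x2), target in samples:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        error = out - target                # cross-entropy gradient for a
        w1 -= rate * error * x1             # sigmoid unit
        w2 -= rate * error * x2
        bias -= rate * error

predictions = [round(sigmoid(w1 * a + w2 * b + bias)) for (a, b), _ in samples]
print(predictions)   # learned to match AND: [0, 0, 0, 1]
```

The weights start at zero and end up encoding the task purely from examples, which is the same principle deep networks apply at a vastly larger scale.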

Applications of Artificial Intelligence

AI is already deeply integrated into our daily lives. Smart personal assistants such as Apple's Siri and Amazon's Alexa continuously learn from user interactions.

Streaming platforms such as Netflix use recommendation algorithms to suggest movies and TV shows based on viewing behavior. As datasets expand, these recommendations become increasingly accurate.

AI also strengthens cybersecurity systems and helps financial institutions detect fraudulent credit card transactions. In healthcare, AI assists in analyzing genetic data, performing high-precision surgeries, and improving diagnostic accuracy.

Companies like Tesla and Apple are developing self-driving vehicle technologies that could transform global transportation systems.

Concerns About AI

Despite its benefits, AI raises several concerns. One major issue is job displacement. Automation powered by AI is expected to replace many roles traditionally performed by humans. Some reports predict significant job losses due to automation in the coming years.

Another concern involves bias. Since AI systems learn from human-generated data, they may inherit human biases. Additionally, the development of autonomous weapons raises ethical questions regarding misuse by governments or malicious entities.

However, many fears are exaggerated. Current AI systems are far from achieving super-intelligence or global domination as portrayed in science fiction. Nevertheless, responsible development and regulation are strongly advocated by industry leaders such as Elon Musk.

Artificial Intelligence and The Future

AI is often described as one of humanity’s most transformative technological advancements. Its applications in image recognition, speech analysis, and predictive modeling are already surpassing human-level accuracy in specific domains.

In healthcare, AI research aims to improve treatments for Alzheimer’s disease, assist individuals with dyslexia, and enhance cancer research through advanced bioinformatics and genetic analysis.

In education, AI can personalize learning by analyzing individual capabilities, preferences, and limitations to create customized curricula. This could make education more inclusive and efficient.

Transportation is expected to evolve significantly with autonomous vehicles, self-flying aircraft, and AI-powered drone delivery systems.

While automation may replace certain jobs, AI is also generating new opportunities in machine learning engineering, data science, system design, and AI research. Emerging industries in agriculture, biotechnology, cybersecurity, finance, and gaming are creating new business models and employment opportunities.

Conclusion

The growth of Artificial Intelligence in recent years has been exponential. Its potential impact on society is immense, and we are only beginning to understand the full scope of its capabilities.

Adapting to this technological shift by acquiring skills related to AI, machine learning, and data science will be crucial for future success. Just as AI systems learn and evolve, humans must also continue learning to thrive in an AI-driven world.

History of AI 

The idea of "a machine that thinks" dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of AI include the following:

1950
Alan Turing publishes "Computing Machinery and Intelligence." In this paper, Turing, famous for breaking the German ENIGMA code during WWII and often referred to as the "father of computer science," asks the following question: "Can machines think?"

From there, he offers a test, now famously known as the "Turing Test," in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI and an ongoing concept within philosophy, as it draws on ideas from linguistics.

1956
John McCarthy coins the term "artificial intelligence" at the first-ever AI conference at Dartmouth College. (McCarthy went on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever running AI computer program.

1958
Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that "learned" through trial and error. A decade later, in 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research initiatives.

1980s
Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.

1995
Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach, which becomes one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems based on rationality and on thinking versus acting.

1997
IBM's Deep Blue beats then world chess champion Garry Kasparov, in a chess match (and rematch).

2004
John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an often-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models. 

2011
IBM Watson® beats champions Ken Jennings and Brad Rutter at Jeopardy! Also, around this time, data science begins to emerge as a popular discipline.

2015
Baidu's Minwa supercomputer uses a special deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.

2016
DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves). Google had acquired DeepMind back in 2014 for a reported USD 400 million.

2022
A rise in large language models (LLMs), such as OpenAI's ChatGPT, creates an enormous change in AI performance and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pretrained on large amounts of data.

2024
The latest AI trends point to a continuing AI renaissance. Multimodal models that can take multiple types of data as input are providing richer, more robust experiences. These models bring together computer-vision image recognition and NLP speech recognition capabilities. Smaller models are also making strides in an age of diminishing returns from massive models with large parameter counts.


AI ethics and governance 

AI ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes. Principles of AI ethics are applied through a system of AI governance consisting of guardrails that help ensure that AI tools and systems remain safe and ethical.

AI governance encompasses oversight mechanisms that address risks. An ethical approach to AI governance requires the involvement of a wide range of stakeholders, including developers, users, policymakers and ethicists, helping to ensure that AI systems are developed and used in ways that align with society's values.

Here are common values associated with AI ethics and responsible AI:

Explainability and interpretability

As AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result. Explainable AI is a set of processes and methods that enables human users to interpret, comprehend and trust the results and output created by algorithms.

Fairness and inclusion

Although machine learning, by its very nature, is a form of statistical discrimination, the discrimination becomes objectionable when it places privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage, potentially causing varied harms. To encourage fairness, practitioners can try to minimize algorithmic bias across data collection and model design, and to build more diverse and inclusive teams.

Robustness and security

Robust AI effectively handles exceptional conditions, such as abnormalities in input or malicious attacks, without causing unintentional harm. It is also built to withstand intentional and unintentional interference by protecting against exposed vulnerabilities.

Accountability and transparency

Organizations should implement clear responsibilities and governance structures for the development, deployment and outcomes of AI systems. In addition, users should be able to see how an AI service works, evaluate its functionality, and comprehend its strengths and limitations. Increased transparency provides information for AI consumers to better understand how the AI model or service was created.

Privacy and compliance

Many regulatory frameworks, including GDPR, mandate that organizations abide by certain privacy principles when processing personal information. It is crucial to be able to protect AI models that might contain personal information, control what data goes into the model in the first place, and to build adaptable systems that can adjust to changes in regulation and attitudes around AI ethics.
