
AI Research and Applications

[Sydney Harbor Bridge and Opera House, Sydney, Australia - Photologic]
 
 

The Industrial Revolution freed much of humanity from physical drudgery; artificial intelligence has the potential to free humanity from much of its mental drudgery.

 

 

- Overview

Artificial Intelligence (AI) research explores the nature of intelligence and how computing can be used to explain and engineer it. The field combines machine learning with semantic information processing and knowledge-based reasoning.

Semantic information processing extracts higher-level meaning from raw data. By leveraging AI techniques, it can automatically identify and interpret patterns and correlations in large-scale data sets.

Knowledge-based reasoning uses stored knowledge, expressed in the form of facts or rules, to draw conclusions or make decisions.
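
As a minimal illustration, the sketch below (in Python, with facts and rules invented purely for demonstration) applies simple if-then rules to a set of known facts until no new conclusions can be drawn. This is the basic loop behind forward-chaining inference, one common way knowledge-based systems draw conclusions.

  # Minimal forward-chaining inference sketch (illustrative only).
  # The facts and rules below are invented for demonstration.
  facts = {"has_fever", "has_cough"}

  # Each rule: if all premises are known facts, assert the conclusion.
  rules = [
      ({"has_fever", "has_cough"}, "may_have_flu"),
      ({"may_have_flu"}, "recommend_rest"),
  ]

  changed = True
  while changed:
      changed = False
      for premises, conclusion in rules:
          if premises <= facts and conclusion not in facts:
              facts.add(conclusion)
              changed = True

  print(facts)  # now includes the derived conclusions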

AI techniques such as machine learning and neural networks can be used to create complex models and simulations. Researchers can use these models to study and predict phenomena in fields such as physics, economics, and social sciences.

 

- AI Applications

Artificial intelligence (AI) is already being used to solve problems across industry and academia. Like electricity or computing, AI is a general-purpose technology with many applications. It has been applied in language translation, image recognition, credit scoring, e-commerce, and other fields.

AI has many applications, including:

  • Healthcare: AI can power systems that detect disease and identify cancer cells. It can also analyze laboratory and other medical data to help manage chronic diseases.
  • Business: Artificial intelligence can automate actions, improve manufacturing efficiency, better manage human resources, and promote efficient supply chain management.
  • Cybersecurity: AI algorithms can learn from data to differentiate between authorized and unauthorized access.
  • Transportation: AI can be used for autonomous vehicles, pedestrian detection, traffic light management, travel time prediction, traffic monitoring, parking management, traffic incident detection and license plate recognition.
  • Robotics: Artificial intelligence can be used in industrial robots to improve production efficiency and reduce errors on assembly lines.
  • Astronomy: Artificial intelligence could play a key role in the discovery of exoplanets.
  • Workflow: AI can summarize text, synthesize images and write code.

 

- What Is Artificial Intelligence (AI)?

What is AI, exactly? The question may seem basic, but the answer is complicated. The definition of AI is constantly evolving: what would have been considered AI in the past may not be considered AI today. In basic terms, AI can be defined as a broad area of computer science that makes machines seem like they have human intelligence. It is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Essentially, AI is the wider concept of machines being able to carry out tasks in a way that could be considered “smart”. In the broadest sense, AI refers to machines that can learn, reason, and act for themselves, making their own decisions when faced with new situations, much as humans and animals do. If a machine can solve problems, complete tasks, or exhibit other cognitive functions that humans can, we refer to it as having artificial intelligence.

AI makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.

 

- AI Is Constantly Evolving All By Itself

We humans are in trouble. We have unleashed a new evolutionary process that we do not understand and cannot control. 

The latest leaps in AI, with their large language models and deepfakes, have rightly caused anxiety. Yet people react as if AI were just a scarier new technology, like electricity or cars once were. We invented it, they argue, so we should be able to regulate and manage it for our own benefit. That is not true. I believe this situation is new, serious, and potentially dangerous.

AI is evolving - literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. 

The program replicated decades of AI research in a matter of days, and its designers think that one day it could discover new approaches to AI. While most researchers were taking baby steps, it took a giant leap into the unknown.

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. 

Smaller subcircuits of neurons carry out specific tasks (for instance, spotting road signs), and researchers can spend months working out how to connect them so they work together seamlessly.
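
To make that mechanism concrete, here is a toy sketch (in Python with NumPy, using the classic XOR problem as invented training data) of a tiny network whose connection strengths are adjusted by gradient descent. Real networks are vastly larger, but the principle of nudging weights to reduce error is the same.

  import numpy as np

  # Toy network: learn XOR by adjusting connection strengths (weights).
  rng = np.random.default_rng(0)
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  W1 = rng.normal(size=(2, 4))  # input-to-hidden connections
  W2 = rng.normal(size=(4, 1))  # hidden-to-output connections

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  for step in range(5000):
      # Forward pass: signals flow through the connections.
      h = sigmoid(X @ W1)
      out = sigmoid(h @ W2)
      # Backward pass: nudge every weight to reduce the error.
      grad_out = (out - y) * out * (1 - out)
      grad_h = (grad_out @ W2.T) * h * (1 - h)
      W2 -= 0.5 * h.T @ grad_out
      W1 -= 0.5 * X.T @ grad_h

  print(out.round(2))  # should be close to [[0], [1], [1], [0]]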

AI is probably humanity's most complex and astounding creation yet. And that is disregarding the fact that the field remains largely unexplored: every amazing AI application we see today represents merely the tip of the iceberg, as it were.

While this may have been stated and restated numerous times, it is still hard to gain a comprehensive perspective on AI's potential future impact. The reason is the revolutionary effect AI is already having on society, even at such a relatively early stage in its evolution.

 

- Two Main Areas of AI Research

Artificial intelligence (AI) is a scientific discipline that aims to create machines capable of performing many tasks that require human intelligence. The field began more than 60 years ago and includes two main areas of research:

  • The first is based on rules, logic, and symbols. It is interpretable, and it always finds a correct solution for a given problem if the problem has been properly specified. However, it can only be used when all possible scenarios of the problem at hand can be foreseen.
  • The second is based on examples, data analysis, and correlation. It can be applied when the concept of the problem to be solved is incomplete or poorly defined. However, this type of AI requires a lot of data, is often difficult to interpret, and always carries some margin of error.

 

These two research directions and ways of thinking about AI are increasingly being combined to maximize the strengths of both and mitigate their weaknesses.  
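
As a hedged sketch of what such a combination can look like (in Python, with weights, features, senders, and thresholds all invented for illustration): a data-driven score stands in for the learned component, while an explicit, human-readable rule supplies the interpretable symbolic component that can override it.

  # Hybrid sketch: a learned score combined with an explicit symbolic rule.
  # Weights, features, senders, and thresholds are invented for illustration.

  # Data-driven component: weights that would normally come from training.
  learned_weights = {"num_links": 0.8, "all_caps": 0.6}

  def learned_spam_score(features):
      return sum(learned_weights[k] * v for k, v in features.items())

  # Symbolic component: a hand-written, fully interpretable rule.
  TRUSTED_SENDERS = {"alice@example.com"}

  def classify(sender, features):
      if sender in TRUSTED_SENDERS:  # rule-based: predictable and explainable
          return "not spam"
      return "spam" if learned_spam_score(features) > 1.0 else "not spam"

  print(classify("bob@example.com", {"num_links": 2, "all_caps": 0}))    # spam
  print(classify("alice@example.com", {"num_links": 2, "all_caps": 0}))  # not spam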

 

(Maggie, Jeffrey M. Wang)

- From Science Fiction to Reality: The Evolution of AI

Artificial intelligence (AI) has evolved from science fiction to reality. The development of AI dates back to the 1950s, when computer scientists began exploring the idea of creating machines that could mimic human intelligence.

AI has undoubtedly been the technology story of the 2010s, and it doesn't look like the excitement is going to wear off as a new decade dawns. The past decade will be remembered as the time when machines that can truly be thought of as “intelligent” - as in capable of thinking and learning like we do - started to become a reality outside of science fiction.

As it currently stands, the vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning. Machine learning - along with deep learning, natural language processing, and cognitive computing - is driving innovations in image recognition, personalized marketing, genomics, and self-driving car navigation.

Machine learning is the basis of many major breakthroughs, including facial recognition, hyper-realistic photo and voice synthesis, and AlphaGo (the program that beat the best human player at the complex game of Go).

AI has exploded over the past few years, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (the whole Big Data movement): images, text, transactions, mapping data, you name it.

 

- AI vs. ML vs. DL

AI is an umbrella discipline that covers everything related to making machines smarter. Machine Learning (ML) is commonly mentioned alongside AI, but it is a subset of AI: an ML system self-learns from data according to its algorithm. Systems that get smarter over time without human intervention are doing ML. Deep Learning (DL) is machine learning applied to large data sets. Most AI work involves ML, because intelligent behaviour requires considerable knowledge.

AI is defined as the study of intelligent agents, which can perceive the environment and act intelligently, just as humans do. Philosophically, AI can be categorized as strong or weak. Machines that merely act as though they were intelligent (simulated thinking) are said to possess weak AI, while machines that are intelligent and can actually think are said to possess strong AI. In today's applications, most AI researchers are engaged in implementing weak AI to automate specific tasks.

Machine learning (ML) techniques are commonly used to learn from data and achieve weak AI. ML involves the scientific study of statistical models and algorithms that can progressively learn from data and achieve desired performance on a specific task. The knowledge/rules/findings inferred from the data using ML are expected to be nontrivial. Therefore, ML can be used in many tasks that need automation, and especially in scenarios where humans cannot manually develop a set of instructions to automate the desired tasks. Deep learning (DL) is a subfield of ML, which focuses on learning data representations with computational models composed of multiple processing layers.
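
As a toy instance of "progressively learning from data" (in Python, with numbers invented for illustration), the sketch below fits the simplest possible statistical model, a single decision threshold, by scanning candidate values and keeping the one that makes the fewest mistakes on labeled examples.

  # Toy ML sketch: learn a decision threshold from labeled examples.
  # The data points and labels are invented for illustration.
  heights = [1.2, 1.4, 1.5, 1.7, 1.8, 2.0]  # feature values
  labels = [0, 0, 0, 1, 1, 1]               # 0 = child, 1 = adult (toy labels)

  def errors(threshold):
      return sum((h >= threshold) != bool(y) for h, y in zip(heights, labels))

  # "Training": scan candidate thresholds, keep the one with fewest errors.
  candidates = [h + 0.05 for h in heights]
  best = min(candidates, key=errors)

  print(best)          # the learned rule: adult if height >= best
  print(errors(best))  # 0 mistakes on this training data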

 

- Beyond the AI Hype Cycle: Trust and the Future of AI

At the heart of digital transformation is the commitment to building trust and data stewardship into our AI development projects and organizations. 

There’s no shortage of promises when it comes to AI. Some say it will solve all problems while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Carbon Black, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us as creators and consumers of AI technology to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.

Investment and interest in AI are expected to increase in the long run, since major AI use cases (e.g. autonomous driving, AI-powered medical diagnosis) that will unlock significant economic value are within reach. These use cases are likely to materialize as improvements arrive in the three building blocks of AI: availability of more data, better algorithms, and computing power.

Short-term changes are hard to predict, and we could experience another AI winter; however, it would likely be short-lived!

 

[More to come ...]

