Once viewed by many as the stuff of science fiction or academia, A.I. as a formal, pragmatic concept dates back to 1950, when British scientist Alan Turing posed the question: “Can machines think?” Since then, A.I. has developed in structure and purpose, enabling myriad practical applications that extend, expand and enhance human thinking, reasoning and actions.

Processing big data

Today’s digital world produces far more data than can be effectively captured, analyzed and acted upon. Stated another way, our collective human thinking and reasoning capacity is completely overwhelmed by the amount of data that we are both producing and collecting. This data comes in all forms and complexities: numbers, natural language, images, text, biometrics, facial expressions, video, sound and intonation are just a few examples.

Our current capture, storage and computational technologies are capable of manipulating that data, but how it eventually gets analyzed, and what decisions should be and are made from it, remain largely the domain of human thinking. In addition, the vast availability of computing itself (your mobile phone, for example, holds more computational power than the big mainframe computers of only 20 years ago) requires us to develop new, intelligent interfaces that let us harness this power more effectively.

“...machine learning has played a critical role in data analysis by crunching through huge volumes of information, looking for and validating patterns that would be almost impossible for a human to perceive.”

Filtering into everyday life

Machine learning, a foundation of A.I., seeks to have machines (computers) learn to think like humans and, in doing so, provide necessary diagnostics, informed guidance and recommendations to human decision making. Long employed in its more basic form by banks and insurance firms, machine learning has played a critical role in data analysis by crunching through huge volumes of information, looking for and validating patterns that would be almost impossible for a human to perceive.

Ultimately, this has provided guidance to loan and risk underwriters. Today’s more advanced A.I. and machine-learning methods have empowered us to look for those patterns and make decisions on them autonomously. A prominent example today is the self-driving car.
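The pattern-finding described above can be illustrated with a minimal sketch. The code below trains a small logistic-regression model by plain gradient descent on an invented toy dataset; the feature names, numbers and the scoring function are all hypothetical, standing in for the far richer data a real underwriting system would use.

```python
import math

# Hypothetical toy dataset: each row is ((income_in_thousands, debt_ratio), label),
# where label 1 = defaulted and 0 = repaid. All values are invented for illustration.
data = [
    ((30, 0.90), 1), ((35, 0.80), 1), ((40, 0.85), 1), ((45, 0.70), 1),
    ((80, 0.20), 0), ((90, 0.10), 0), ((70, 0.30), 0), ((85, 0.25), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=2000):
    """Fit a two-feature logistic-regression model by gradient descent."""
    w = [0.0, 0.0]  # one weight per feature
    b = 0.0
    for _ in range(epochs):
        for (income, debt), y in data:
            # Scale income down so both features are of similar magnitude.
            p = sigmoid(w[0] * income / 100 + w[1] * debt + b)
            err = p - y  # gradient of the logistic loss w.r.t. the logit
            w[0] -= lr * err * income / 100
            w[1] -= lr * err * debt
            b -= lr * err
    return w, b

w, b = train(data)

def default_risk(income, debt_ratio):
    """Estimated probability of default for a new applicant."""
    return sigmoid(w[0] * income / 100 + w[1] * debt_ratio + b)
```

With the toy data above, a low-income, high-debt applicant such as `default_risk(32, 0.85)` scores higher risk than a high-income, low-debt one such as `default_risk(88, 0.15)` — the same kind of learned pattern that, at scale, guides underwriting decisions.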

Today A.I. is finding roles in just about every industry sector. It is beginning to play an important role in improving health care, doing everything from analyzing diagnostic images, pharmaceuticals and genomes to interacting with and responding to a patient’s speech, expressions and emotions. For retailers, A.I. and “bot” technologies offer valuable personal assistant tools for enhancing customer engagement and improving customer experience. It is also safeguarding businesses against fraud and cyber-attacks, quickly identifying patterns of possible threats and providing an appropriate defensive response.
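The fraud-detection idea mentioned above — spotting transactions that break an account's normal pattern — can be sketched in a few lines. This is a deliberately simple statistical outlier test, not a production fraud system; the transaction amounts and the threshold are invented for illustration.

```python
import statistics

# Hypothetical transaction amounts (in dollars) for one account;
# the final value is an injected outlier standing in for fraud.
amounts = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 58.0, 49.0, 950.0]

def flag_outliers(values, threshold=3.0):
    """Flag values more than `threshold` robust deviations from the median.

    Uses the median absolute deviation (MAD) rather than the standard
    deviation, so a single extreme value cannot mask itself by inflating
    the spread estimate.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    # 1.4826 rescales MAD to approximate a standard deviation;
    # fall back to 1.0 if every value is identical (MAD of zero).
    scale = 1.4826 * mad or 1.0
    return [v for v in values if abs(v - med) / scale > threshold]

print(flag_outliers(amounts))  # [950.0]
```

Real systems layer many such signals (merchant, location, timing, device) and feed them to learned models, but the core move is the same: establish a baseline pattern, then flag what deviates from it.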

What can we expect?

While industry analysts are quick to agree on the numerous and compelling benefits that A.I. will bring to businesses big and small, their forecasts of its growth vary. Research firm IDC predicts that U.S. enterprises will cumulatively realize estimated savings of $60 billion by 2020 as a result of employing A.I., and that A.I. platforms will generate about $1.4 billion in revenue by the end of this year.

Market intelligence firm Tractica forecasts cumulative A.I. revenue of $43.5 billion during the ten-year period from 2015 through 2024, and Market Research Store predicts revenue of $40 billion in the single year of 2022. Whatever the exact eventual outcome, it is clear that the impact of A.I. and machine learning technologies will become more pervasive over the coming years and will help improve life for consumers and profits for businesses.