July 10, 2019 - As buzzwords go, “machine learning” is definitely having its moment. But don’t think it’s a new phenomenon.
ML, as it’s known, is a pretty old branch of artificial intelligence. Its origins are rooted in the 1950s, when IBM’s Arthur Samuel wrote some of the very first programs that could learn how to play checkers without being explicitly instructed in how to do so. But it has taken decades for machine learning to really take off, and only in the last few years has it moved out of laboratories and into products and businesses.
Unsupervised and unguided learning
At its core, machine learning seeks to cut humans out of the programming process.
Ordinarily, humans learn from analyzing information about past experiences. So, ML posits, what if you could program a computer to do the same thing, and learn how to perform tasks – even making judgment calls about ambiguous situations – without direct human supervision or guidance?
That’s machine learning in a nutshell – creating systems that learn to identify patterns in data and then make decisions with minimal human involvement.
There are a number of machine learning models in use today, each optimized for a specific kind of problem.
Algorithm seeking answers
Programmers use a technique called supervised learning, for example, when they start with a wealth of historical data and can easily show the computer what the right answer looks like.
The algorithm matches each input to its expected output, and adjusts how its model works based on whether its predictions agree with the known answers.
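To make that concrete, here is a minimal sketch of supervised learning: a one-nearest-neighbor classifier that predicts a label by finding the most similar example in its labeled training data. The fruit measurements and labels below are invented for illustration; real systems train on far larger datasets.

```python
def nearest_neighbor(train, query):
    """Predict the label of the training example closest to the query."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        # Squared Euclidean distance between the query and this example.
        dist = sum((a - b) ** 2 for a, b in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Historical data: (weight in grams, diameter in cm) -> known answer.
labeled = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((110, 5.8), "orange"),
]

print(nearest_neighbor(labeled, (160, 7.2)))  # prints "apple"
```

The key point is that the “right answers” (the labels) come bundled with the training data – that is what makes the learning supervised.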
Unsupervised learning, in contrast, is when the data set doesn’t include the answers – so humans don’t tell the algorithm when it has the right answer. Instead, it figures that out on its own.
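By way of contrast, here is a minimal sketch of unsupervised learning: k-means clustering with two clusters on one-dimensional data. No labels are provided; the algorithm groups the points on its own. The data values and starting centers are invented for illustration.

```python
def kmeans_1d(points, centers, iterations=10):
    """Group unlabeled 1-D points around k centers (k = len(centers))."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            # Assign each point to its nearest current center.
            j = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[j].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data, centers=[0.0, 5.0]))  # prints [1.0, 10.0]
```

Notice that nothing in the input says which group any point belongs to – the two clusters emerge from the structure of the data itself.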
These techniques – and others – result in a fascinating product: Computer programs that no human being fully understands.
We know the underlying processes, to be sure. But exactly why a machine learning algorithm makes a specific choice often can’t be fully explained, even by its creators; the learned model is simply too complex.
And machine learning is having a heyday right now, acting as a fundamental part of many products and services that define 21st century life, ranging from the audacious to the mundane.
Self-driving cars, for example, could not exist without machine learning. It’s also the core technology behind recommendation engines (like the ones used by Amazon and Netflix, for example), as well as AI personal assistants, traffic routing predictions in programs like Google Maps, the ability to recognize what “spam” is in email spam filtering tools, and the online fraud detection used by financial institutions.
Indeed, machine learning is being put to use in a large number of industries and business sectors.
Municipalities are putting machine learning to use in public works and utilities, for example, letting self-learning algorithms find ways to reduce waste, increase efficiency, and identify fraud.
Healthcare providers are looking for opportunities to leverage wearable sensors and other health devices by letting machine learning systems identify health trends and flag problems.
Processing much more data
In this way, software can process vastly more data than human doctors ever could, and do it more effectively than programs rigidly designed to assess health without learning over time.
Speaking of learning over time, machine learning algorithms can get better at what they do over time, as they continuously review additional data.
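One simple illustration of improving with more data is an online (incremental) estimator, which refines its estimate one observation at a time without ever re-reading past data. The sketch below uses a running mean; the stream of values is invented for illustration.

```python
class RunningMean:
    """Estimate the mean of a data stream, one observation at a time."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, x):
        # Incremental update: new_mean = old_mean + (x - old_mean) / n.
        # Each new observation nudges the estimate toward the truth.
        self.count += 1
        self.mean += (x - self.mean) / self.count
        return self.mean

estimator = RunningMean()
for value in [4.0, 6.0, 5.0, 5.0]:
    estimator.update(value)
print(estimator.mean)  # prints 5.0
```

Real ML systems update far richer models than a single average, but the principle is the same: every additional data point refines what the system already knows.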
In the financial sector, banks and other financial institutions rely on machine learning systems to find investment opportunities, make trade recommendations, and detect theft, fraud, and other abuse.
And don’t ignore perhaps the biggest opportunity for machine learning: retail.
Retailers use this tech to analyze sales and demographic data for marketing, deliver personalized experiences to customers, optimize prices, plan supply chains, and more.
The future of business
The future for business includes machine learning. The last decade was defined by major ML milestones – like the Netflix Prize, launched in 2006 (a machine learning competition to improve the company’s recommendation engine), and Watson beating human champions at Jeopardy! in 2011.
And with an ever-increasing reliance on big data and an IoT packed with sensors, it can’t possibly be any other way.