How AI can augment prediction and human decision making

In AI

By Dave Timm

August 2020

We spend much of our time here at Red Marble exploring ways that AI and machine learning can elevate human performance. One area of particular interest is how AI can augment prediction and human decision making.

But before we can design AI, we need to understand the differences between how software and human brains make decisions.

How do humans make decisions?

Humans make thousands of decisions every day, subconsciously combining inputs from multiple parts of the brain, blending real-time data with historical information from memory, and weighing rational thought against emotional cues.

In “Thinking, Fast and Slow,” Daniel Kahneman (drawing on his work with Amos Tversky) explains how fast, instinctive and emotional decisions are blended with others made slowly, logically and deliberately, all with an assessment of risk, probability and judgement based on experience.

This is a pretty sound way to make decisions. However, it takes time, and the quality of the decisions can depend on outside factors (e.g. the person’s health or mental state).

How does machine learning make decisions?

Machine learning (ML) models aim to emulate elements of this decision making, but clearly some areas are more accessible than others.

Machine learning (ML) refers to software ‘fitting’ a model to a set of data by learning specific weightings for different parts of that data (called ‘features’). The fitted model then applies those weightings to new data in order to predict future events.
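
To make that concrete, here is a minimal sketch (illustrative only, not one of our production models) of fitting a simple scikit-learn model to made-up historical examples, inspecting the learned weightings, and using them to predict a new case:

```python
# Minimal illustration of "fitting" a model: invented data, an off-the-shelf
# logistic regression, and two features per historical example.
from sklearn.linear_model import LogisticRegression

# Each row is a past example: [days_since_last_delivery, late_deliveries_to_date]
X = [[5, 0], [40, 3], [12, 1], [60, 5], [8, 0], [55, 4]]
y = [0, 1, 0, 1, 0, 1]  # 1 = a problem occurred, 0 = it did not

model = LogisticRegression()
model.fit(X, y)                        # learn a weighting for each feature
print(model.coef_)                     # the learned weightings
print(model.predict_proba([[30, 2]]))  # probability estimate for a new case
```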

A machine can process vast amounts of historical data and make predictions with a defined probability, but it can’t (yet) apply an emotional lens to those decisions, and it struggles when context changes.

Where machine learning works well

A nice application of ML is to model the rational human decision making process and to make those decisions at scale. Let me share an example.

We recently worked with a client who was making predictions about stock levels of spare parts for machinery. Were they holding enough in stock? When spare parts were ordered, would they arrive on time? They needed to know they would have the parts when required.

Looking at the “health” of each material made the assessment fairly intuitive and simple for the human. A track record of late deliveries from suppliers, highly variable stock levels in the warehouse, and parts being used for breakdowns (rather than planned maintenance) all lead to a fairly simple judgement by the human worker.

The challenge is applying that process across 800,000 materials every day. Clearly not something a human can do.

Applying an ML model here does the heavy lifting superbly and creates a list of priorities that the human can work through and apply judgement to.
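
As a rough sketch of what that heavy lifting can look like (the column names and the trained model below are hypothetical, not the client’s actual system), the daily run boils down to scoring every material and handing the analyst a ranked worklist:

```python
import pandas as pd

def build_priority_list(materials: pd.DataFrame, model) -> pd.DataFrame:
    """Score every material and return them ordered by risk, highest first."""
    features = materials[["late_delivery_rate", "stock_variability", "breakdown_usage_ratio"]]
    # Estimated probability that the part will not be available when it is needed
    scored = materials.assign(risk=model.predict_proba(features)[:, 1])
    # The human works the list from the top, applying judgement where it matters most
    return scored.sort_values("risk", ascending=False)
```

Run daily, a function like this turns 800,000 rows into a short list worth a person’s attention.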

We apply similar models in our AI-driven employee engagement and digital adoption work. The software can analyse many users and predict which information is most valuable to help each individual use their technology to its full potential and to succeed in their role. It models the human analysis, and applies it at scale.

Where do ML models fall short? 

Machine learning learns from historical data but can lack immediate context. Here’s another example.

We help one of our clients predict customer conversion for certain products and prices so they can maximise margin. The model is trained on historical data, but in the context of COVID, historical data no longer reflects current buying behaviour. Companies need to adapt their models to make use of real-time context and the most recent data as it comes in.
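
One common way to do that (an assumption on our part, not this client’s exact approach) is to retrain on a rolling window of recent data and weight newer observations more heavily, so the latest buying behaviour dominates older patterns:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def retrain_on_recent_data(history: pd.DataFrame,
                           window_days: int = 90,
                           half_life_days: float = 30.0) -> LogisticRegression:
    """Fit a fresh conversion model on the last `window_days` of data."""
    cutoff = history["date"].max() - pd.Timedelta(days=window_days)
    recent = history[history["date"] >= cutoff]

    # Exponential decay: an observation `half_life_days` old counts half as much
    age_days = (recent["date"].max() - recent["date"]).dt.days
    weights = 0.5 ** (age_days / half_life_days)

    model = LogisticRegression()
    model.fit(recent[["price", "discount"]], recent["converted"], sample_weight=weights)
    return model
```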

Machine learning can also fail to capture the behavioural aspects of decision making. Human decision making has many nuances: emotion, the time available, and the paradox of choice outlined by Barry Schwartz, to name a few. AI models can be trained on aspects of these, and they will get better as technology improves, but for now they struggle to put it all together into cohesive reasoning.

When is emotional decision making a problem? 

There are areas where prediction models can expose harsh realities which the human mind might wish to overlook.

One of our areas of work is predicting the success of corporate projects. By tracking data throughout the project and applying a model trained on historical information, there are clear ways of predicting which projects will succeed and which will fail to deliver the expected business benefits. However, many projects are “pet projects” of a particular group or person, and the people involved may already have an ideal end state in mind that biases their decision making.

An ‘emotionless’ ML model has no issues exposing the hard facts. “This project only has a 30% chance of delivering the desired benefits” quickly cuts to reality and can save companies considerable time and expense.
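
As a toy illustration (project names, probabilities and the threshold are invented), surfacing those facts is as simple as ranking the model’s predicted probabilities and flagging the ones below an agreed cut-off:

```python
# Invented predictions from a trained project-success model
predicted = {"Warehouse upgrade": 0.82, "New CRM rollout": 0.30, "Site expansion": 0.55}

for project, p in sorted(predicted.items(), key=lambda item: item[1]):
    if p < 0.5:  # threshold agreed with the business, not learned by the model
        print(f"{project}: only a {p:.0%} chance of delivering the desired benefits")
```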

Human decision making + AI: what’s the best way to combine them?

Humans are amazing, and our ability to make complex decisions quickly remains well ahead of what software can do. But combine human decision making with AI’s capability to apply models rapidly, constantly and at huge scale, and you have some very exciting possibilities.

Thanks for checking out our business articles. If you want to learn more, feel free to reach out to Red Marble AI. You can click on the "Let's Talk" button on our website or email Dave, our AI expert, at d.timm@redmarble.ai.

We appreciate your interest and look forward to sharing more with you!

