Artificial intelligence (AI) is now part and parcel of daily life. It answers our customer-service and retail inquiries through chatbots, pilots self-driving cars, and helps generate medical diagnoses more quickly and accurately.
This raises the question of where the technology goes from here. More importantly, how can we build smarter and more accurate AI solutions?
Researchers at the University of California, Irvine believe that a hybrid approach may be the way forward.
The hybrid approach to AI
According to Professor Mark Steyvers of UCI's Department of Cognitive Sciences, humans can improve the predictive performance of AI systems. Based on his team's empirical demonstrations and analyses, Steyvers noted that people and machine-generated algorithms can complement each other's strengths and weaknesses, because each draws on different sources of information and different strategies when making decisions.
While human accuracy is slightly lower than an AI's, a hybrid approach produces significantly more accurate results than combining the predictions of two digital algorithms or of two people.
The results of the Steyvers study were published this month in Proceedings of the National Academy of Sciences, along with the UC Irvine team's mathematical model, which shows that AI performance can be improved with a carefully weighted mix of human and algorithmic predictions.
How the team came to their conclusions
The UC Irvine team conducted an experiment in which a group of human participants and a set of machine classifiers had to identify a series of distorted images of animals and everyday objects. The two groups were kept separate throughout the experiment.
The humans were asked to rate their confidence in each guess as low, medium, or high, while the machine classifier generated a continuous confidence score for each of its predictions.
Confidence differed markedly between the human participants and the algorithms. In some cases, humans were confident about certain images, such as chairs, while the algorithm seemed confused and unable to identify the item.
Conversely, the algorithms confidently labeled some photos that stumped the human participants, who were unsure whether the picture contained a recognizable object at all.
When scores from both groups of participants were combined using the new framework, the resulting hybrid model performed better than humans or algorithms working alone, producing more accurate predictions.
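The article does not spell out the team's mathematical model, but the general idea of fusing a human's discrete confidence rating with a classifier's continuous scores can be sketched in a few lines. In this illustrative Python sketch, the label set, the mapping from confidence levels to probabilities, and the product-of-experts pooling rule are all assumptions for demonstration, not the model from the study.

```python
# Illustrative sketch: combine one human judgment (label + discrete
# confidence) with a classifier's continuous per-label scores.

LABELS = ["chair", "dog", "car"]

# Hypothetical mapping: how much probability mass a human's stated
# confidence places on their chosen label; the rest is spread evenly.
CONFIDENCE_TO_PROB = {"low": 0.5, "medium": 0.7, "high": 0.9}

def human_distribution(choice, confidence):
    """Turn a human's label and confidence rating into a distribution."""
    p = CONFIDENCE_TO_PROB[confidence]
    rest = (1.0 - p) / (len(LABELS) - 1)
    return [p if label == choice else rest for label in LABELS]

def combine(human_probs, machine_probs):
    """Product-of-experts pooling: multiply the two distributions
    label by label, then renormalize so they sum to one."""
    raw = [h * m for h, m in zip(human_probs, machine_probs)]
    total = sum(raw)
    return [r / total for r in raw]

human = human_distribution("chair", "high")   # confident human
machine = [0.40, 0.35, 0.25]                  # uncertain classifier
hybrid = combine(human, machine)
print(LABELS[hybrid.index(max(hybrid))])      # prints "chair"
```

Here the confident human tips the uncertain classifier toward "chair"; with the roles reversed, a confident classifier would dominate an unsure human, which mirrors the complementary behavior the study describes.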