Age of Centaur — AI Guided By Human Intelligence
Microsoft’s AI chatbot Tay was programmed to imitate the language patterns of 18-to-24-year-old millennials. Within hours of going live, Tay began repeating offensive statements such as “Hitler was right” and “9/11 was an inside job,” which other human users had deliberately fed it to provoke it.
Facebook’s content screening uses artificial intelligence to weed out toxic posts, but relies on thousands of human workers to remove, within 24 hours of posting, the messages the A.I. misses.
Uber tested its self-driving cars in San Francisco, where they ran roughly six red lights during testing.
IBM Watson struggled to decipher doctors’ notes and patient histories in electronic health records and match them to volumes of cancer-related scientific literature, frustrating physicians. After four years and $62 million, the project was shut down.
Surgical robots have pushed to the forefront of the conversation questions of liability in failed surgeries and malpractice suits.
In a beauty pageant judged by a panel of artificially intelligent robots, the winners chosen were mostly white, with only a few Asians.
Amazon’s Rekognition facial-recognition technology mistakenly matched well-known athletes to a database of criminal mugshots.
The above examples show how companies are wrestling with whether artificial intelligence should completely replace human intelligence and judgment. They raise one of the most difficult philosophical questions of the era about the role of human judgment: can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?
Purely AI-based outcomes, decisions and actions can have some fundamental flaws.
- Awareness of context: Common actions like reading and interpreting a written text or driving a car are much harder for AI, since it misses the variety, depth, and edge cases that humans comprehend naturally.
- Transparency: AI decisions are not always intelligible and explainable to humans.
- Neutrality: AI decisions are susceptible to inaccuracies, discriminatory outcomes, and embedded or inserted bias.
- Fairness: AI decisions risk violating human rights and contradicting fundamental values.
- Liability: As of now, most cases involving injury caused by a robot find either the operator or the manufacturer liable, but these decisions will become more complex.
It is evidently much easier for AI to simulate the reasoning of a highly trained adult expert, as in playing chess or Jeopardy!, than to mimic the day-to-day learning of an ordinary person.
The solution is to promote human-in-the-loop systems, in which humans hold the reins and steer the AI horse: a centaur. AI is there to augment humans rather than to do everything itself. In support mode, AI reduces mistakes, aids problem-solving and information discovery, and simplifies processes by partially automating tasks. Any mistakes in the AI’s output or decisions are corrected by humans, and the corrections are fed back as labeled data to retrain the machine-learning models. With such symbiotic learning, humans can shape machine behavior through both active input and passive observation of human behavior. It is as if humans guide the AI by confronting it with new issues that spark human curiosity: a human brain guiding an AI horse.
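The feedback loop described above (human corrects the AI, corrections become labeled data, model retrains) can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names: a toy keyword-based moderation classifier stands in for a real machine-learning model.

```python
class CentaurClassifier:
    """Toy human-in-the-loop moderation classifier (illustrative only).

    The 'model' is just a blocklist of words. A human reviewer corrects
    its mistakes; corrections accumulate as labeled data and are folded
    back in on retrain(), standing in for refitting a real ML model.
    """

    def __init__(self):
        self.blocklist = {"spam"}   # the model's current "knowledge"
        self.corrections = []       # (text, human_label) pairs awaiting retraining

    def predict(self, text):
        # Flag a post as toxic if any word is on the blocklist.
        return "toxic" if any(w in self.blocklist for w in text.split()) else "ok"

    def human_review(self, text, human_label):
        # The human's judgment overrides the model and becomes new training data.
        self.corrections.append((text, human_label))
        return human_label

    def retrain(self):
        # Fold the human corrections back into the model: words from posts
        # the human flagged as toxic join the blocklist.
        for text, label in self.corrections:
            if label == "toxic":
                self.blocklist.update(text.split())
        self.corrections.clear()
```

For example, a post the model initially passes as "ok" can be flagged "toxic" by a reviewer; after `retrain()`, the model catches similar posts on its own, which is the symbiotic learning the paragraph describes.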
While more than 90 percent of objectionable material that comes across Facebook and Instagram is removed by A.I., outsourced workers decide whether to leave up the posts that the A.I. doesn’t catch.
Rather than run cars or trucks in fully autonomous mode, a human driver sitting remotely can monitor and correct a teleoperated vehicle on open roads.
Robots that assist medical professionals in performing surgery allow for increases in precision, dexterity, and flexibility which can greatly improve patient outcomes when difficult and complex surgeries are required.
IBM Watson brings the right, contextual knowledge extracted from the vast library of cancer research to physicians’ fingertips to allow them to make informed decisions.
GPT-3 and related technologies will transform writing from an act of solo creation into a collaboration between human and machine: the human provides some initial language, the AI suggests edits or follow-up sentences, the human iterates on the AI’s feedback, and so forth.
Copilot writes programs from free-text prompts that are about 45% correct as-is; the programmer adds or fixes the rest, making the entire coding process much faster.
Ethics in human society is a grey area with fuzzy boundaries, grounded mainly in shared beliefs. How AI decisions should encode ethics, and what counts as right or wrong, is an active research area. One approach is to let a swarm of people collaboratively guide AI decisions so that they reflect the collective conscience of society.
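The swarm idea above can be made concrete with a small sketch. The helper below is hypothetical (the name `swarm_decision` and the tie-breaking rule are assumptions, not a published algorithm): the AI's proposed decision stands only when human reviewers fail to reach a clear majority, so the outcome reflects the group's judgment.

```python
from collections import Counter

def swarm_decision(ai_choice, human_votes):
    """Return the majority label among human votes.

    Falls back to the AI's own choice when no votes are available or
    when the vote is tied, so humans override the machine only when
    they clearly agree with one another.
    """
    if not human_votes:
        return ai_choice
    tally = Counter(human_votes)
    top_label, top_count = tally.most_common(1)[0]
    # If another label ties for first place, defer to the AI's choice.
    tied = [label for label, count in tally.items() if count == top_count]
    return ai_choice if len(tied) > 1 else top_label
```

A design note: deferring to the AI on ties is one plausible policy; a real system might instead escalate ties to more reviewers or to a senior moderator.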
With such a constant dance and dialogue with the machine, the hope is that one day we will let go of the AI reins. Then a horse more intelligent than the whole of human society could carry the human race to frontiers never reached before.