The history of artificial intelligence has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment. Today's AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of AI's early goals, like housekeeper robots and self-driving cars, continue to recede as we approach them.
Part of the continued cycle of missing these goals is due to incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide For Thinking Humans.
In a new paper titled “Why AI is Harder Than We Think,” Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts. These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence, AI systems that can match the cognitive and general problem-solving skills of humans.
Narrow AI and general AI are not on the same scale
The kind of AI we have today can be very good at solving narrowly defined problems. These systems can outmatch humans at Go and chess, find cancerous patterns in X-ray images with remarkable accuracy, and convert audio data to text. But designing systems that can solve single problems does not necessarily get us closer to solving more complicated problems. Mitchell describes the first fallacy as "Narrow intelligence is on a continuum with general intelligence."
“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.