Recently there has been a lot of talk about AI (Artificial Intelligence). From Siri to self-driving cars, AI has been progressing at a very rapid pace and shows no sign of slowing down.
AI is usually defined as the science of making computers do tasks that require intelligence when done by humans. It can range from autonomous weapons to Google’s search algorithms. The AI we talk about today is more formally known as narrow AI (or weak AI), because it is designed to perform a narrow task, such as facial recognition, internet searches, or driving a car.
In the long run, many researchers want to create general AI (strong AI), which would be able to outperform humans at nearly every cognitive task.
Why research AI safety? The goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control.
Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your automated trading system, or your power grid.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks.
As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the AI’s goals with ours before it becomes superintelligent.
There are those who question whether strong AI will ever be achieved, while others believe that the creation of superintelligent AI is guaranteed to be beneficial and shouldn’t pose a threat. However, most researchers believe that AI is unlikely to exhibit any form of human emotion, or to become intentionally benevolent or malevolent.
With that being said, it’s near impossible to know exactly how the technology will develop. Some say it will help to save humanity; others say it could be our doom. But the truth is, we won’t really know until we try.