Artificial intelligence (AI), or machine intelligence, is a branch of computer science in which computer systems are programmed to act independently and intelligently, mimicking humans. Areas that commonly require AI include decision making, problem solving, speech recognition and visual perception.
Although AI has long been a popular academic and science fiction subject, effective large-scale, independent implementation of AI remains elusive owing to the sheer scale and ambiguity of the problem. On smaller scales, however, AI has become an important part of current technology.
The concept of an independent machine capable of learning, reasoning and problem solving has been around since the birth of popular science fiction in the 1930s, as evidenced by the numerous short stories and novels published during the period.
By the 1950s, the concept began to gain recognition within the academic community, though it was known under many different terms, including thinking machines, automata theory and cybernetics. It wasn’t until 1956 that the phrase ‘artificial intelligence’ was coined by American computer scientist and recipient of the National Medal of Science, John McCarthy, during the Dartmouth Conference. Today, the event is generally acknowledged to be the birth of the science of AI.
From the start, the Turing Test, the brainchild of legendary English scientist Alan Turing, has been considered the primary benchmark of AI research and development – can humans develop computer systems that imitate humans so convincingly that independent observers cannot tell them apart? Turing’s other great contribution, the ‘Turing machine’, an abstract model of step-by-step computation, remains the theoretical foundation of modern computing, and by extension of artificial intelligence.
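To make the abstract idea of a Turing machine concrete, here is a minimal sketch of one in Python. The machine and its transition table are invented for illustration (this one simply flips every bit on its tape and halts at the first blank); a real Turing machine is a mathematical construct, not a program, but a simulator captures the essentials: a tape, a read/write head, a state, and a table of rules.

```python
def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Execute a deterministic Turing machine until it reaches 'halt'.

    transitions maps (state, symbol) -> (next_state, symbol_to_write, move),
    where move is "R" (right) or "L" (left).
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Example transition table: flip each bit, halt on blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("1011", flip))  # prints 0100
```

Despite its simplicity, this model is powerful enough in principle to express any computation a modern computer can perform, which is why it still anchors the theory of the field.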
No computer has yet been able to pass the Turing Test, although many single-purpose designs have demonstrated limited success, such as IBM’s Deep Blue chess computer, chatbots and search engines.
Almost seventy years after AI first became a serious academic and research subject, the discipline is still viewed by many to be in its infancy. Nevertheless, a host of systems have successfully skirted the edges of AI using extensive behavioural and adaptive algorithms.
These applications were made to do small things, but they do those small things, such as finding information, giving directions and deciphering speech, remarkably well. They are not full-fledged AI applications, but their ability to ‘reason’ using a huge database together with a user’s personal information and behaviour has made our lives much easier.
The seemingly intuitive self-driving features and predictive capabilities of vehicles like Teslas are breathtaking to observe. Wide-scale deployment is still some distance away, but companies like Aptiv, Aurora and Cruise are already operating self-driving cars commercially in limited areas.
The predictive algorithms used by online giants like Amazon and Netflix are becoming remarkably good. Using data mined from their millions of customers, these companies are able to offer highly accurate buying and viewing recommendations.
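The general idea behind such recommendations can be sketched with one common technique, item recommendation via user similarity (collaborative filtering). This is not how Amazon or Netflix actually implement their systems; the ratings data and names below are invented purely for illustration.

```python
# A minimal collaborative-filtering sketch: recommend items that
# similar users rated highly. Data is hypothetical.
from math import sqrt

ratings = {
    "alice": {"Matrix": 5, "Inception": 4, "Up": 1},
    "bob":   {"Matrix": 4, "Inception": 5, "Up": 2},
    "carol": {"Up": 5, "Frozen": 4, "Matrix": 1},
}

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating dicts."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user, ratings, top_n=2):
    """Score items the user hasn't seen, weighted by how similar
    each other user is to this one, and return the top matches."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", ratings))  # prints ['Frozen']
```

Production systems operate on the same principle but at vastly greater scale, combining many such signals with behavioural data, which is what makes their predictions feel so accurate.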
American futurist and inventor Ray Kurzweil has boldly predicted that an AI system will be able to pass a Turing Test by 2029. He went one step further by claiming that humanity will achieve the Singularity – the neural connection between a human brain and an artificial intelligence – by 2045, which would effectively increase our level of intelligence a billionfold!