What do ten million video images have in common? Human faces, human bodies – and cats.
A few years ago, researchers analyzed the databases of the video platform YouTube using a network of 1,000 computers. For three days the network searched for recurring patterns, until the artificial intelligence decided for itself how to categorize this flood of images. One category it settled on: cats. That the internet is full of felines comes as no surprise. But the analysis showed that computers, given massive amounts of data and enormous processing power, are able to come up with answers and solutions on their own: a demonstration of the power of artificial intelligence.
What is artificial intelligence?
Language assistants in smartphones, navigation systems in cars, face detection in photo and video apps: more and more people are using artificial intelligence. But few really know how and where artificial intelligence already surrounds them in everyday life, and what actually makes it so special.
Coming up with a concrete answer is difficult, not least because the term "human intelligence" is itself not fully defined. The term "artificial intelligence" was coined in 1956 by the computer scientist John McCarthy at a conference at Dartmouth College.
The main difference between conventional computer programs and systems with artificial intelligence is that conventional programs only do what people have taught them to do through predefined processing rules (algorithms). Artificial intelligence, on the other hand, learns independently and finds its own responses to tasks and problems, without a person having to prescribe every step.
This might sound simple, but the difference is huge. While conventional computer programs work through long series of if-then rules, an artificial intelligence can be modeled after the human brain in the form of neural networks. These work in a way similar to biological neurons and can process a great deal of information simultaneously. The networks pass information along via artificial synapses and are able to learn: they are trained through so-called deep learning. Like their natural role model, the human brain, they learn through experience.
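The contrast can be made concrete with a small sketch. The following Python snippet compares a rule written by hand with a rule the program derives from labeled examples; the spam-filter task and the example messages are invented purely for illustration, not a description of how real AI systems are built.

```python
# Made-up illustration: hand-written rule vs. rule derived from example data.

# Conventional program: the rule is fixed in advance by the programmer.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower()

# Learning approach: the program extracts its own rule from labeled examples.
def learn_spam_words(examples):
    """Collect words that appear only in messages labeled as spam."""
    spam_words, normal_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else normal_words).update(text.lower().split())
    return spam_words - normal_words

training_data = [
    ("free money now", True),
    ("win free money", True),
    ("meeting at noon", False),
    ("lunch tomorrow?", False),
]
learned_words = learn_spam_words(training_data)

def is_spam_learned(message: str) -> bool:
    return any(word in learned_words for word in message.lower().split())

print(is_spam_rule_based("claim your free money"))  # True: rule written by a person
print(is_spam_learned("win money today"))           # True: rule learned from the data
```

With more and better training data, the learned rule improves without anyone rewriting the program, which is exactly the property described above.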
Important terms briefly explained
Machine learning is a subfield of artificial intelligence. With machine learning, computers can be made to look for patterns and regularities in the tasks and problems presented to them. In this way they generate artificial knowledge from experience. For this, they need precisely defined instructions, known as algorithms, and lots and lots of data.
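A minimal sketch of what "finding a regularity in data" can mean: the short Python example below fits a straight line to a handful of invented measurements using ordinary least squares. The numbers and the linear model are assumptions made only for illustration.

```python
# Minimal sketch of learning from data: fit a straight line y = a*x + b
# to example measurements using ordinary least squares. The numbers are invented.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# The "pattern" the program extracts is nothing more than the slope a and intercept b.
a_numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
a_denominator = sum((x - mean_x) ** 2 for x in xs)
a = a_numerator / a_denominator
b = mean_y - a * mean_x

print(f"learned rule: y = {a:.2f} * x + {b:.2f}")
print(f"prediction for x = 6: {a * 6 + b:.2f}")
```

The "knowledge" the program ends up with consists of the two numbers it estimated from the data, and it can reuse them to make predictions about values it has never seen.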
The word algorithm basically just means a calculation procedure. Every school kid learns algorithms, for instance the written methods for adding, subtracting and dividing. In information technology, the term describes a processing instruction formulated so precisely that a machine can use it to solve a problem or complete a task. This instruction has to consist of a finite number of individual steps, in other words, it has to finish at some point. And after each step, it must be absolutely clear what the next step is.
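A textbook example that meets all of these conditions is Euclid's method for finding the greatest common divisor; the small Python version below is only meant to make the definition tangible.

```python
# A classic algorithm: Euclid's method for the greatest common divisor.
# Each step is precisely defined, and the procedure is guaranteed to finish,
# because the remainder gets strictly smaller in every round.
def greatest_common_divisor(a: int, b: int) -> int:
    while b != 0:          # step: check whether we are done
        a, b = b, a % b    # step: replace (a, b) with (b, remainder of a / b)
    return a               # after a finite number of steps: the answer

print(greatest_common_divisor(48, 36))  # 12
```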
The first idea of transferring the workings of the brain to machines came about in 1943. The neurologist Warren S. McCulloch and the logician Walter Pitts proposed linking artificial nerve cells into networks of interconnected processing units; the nerve cells of the brain were their biological role model. Neural networks are groups of algorithms structured similarly to a human brain in order to recognize recurring patterns and sort them into so-called model groups. In an artificial neural network, many neurons are connected together so that, like a human brain, it can solve complex tasks.
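What such an artificial nerve cell does can be written down in a few lines. The sketch below follows the spirit of McCulloch and Pitts' threshold model: the neuron adds up its weighted inputs and fires only when a threshold is reached. The weights and the AND-gate behavior are chosen by hand for illustration.

```python
# Rough sketch of a single artificial neuron in the spirit of McCulloch and Pitts:
# it sums its weighted inputs and "fires" (outputs 1) if a threshold is reached.
# Weights and threshold are chosen by hand so that the neuron acts as an AND gate.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two inputs, both weighted 1; the neuron only fires when both are active.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", neuron([x1, x2], weights=[1, 1], threshold=2))
```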
Deep learning is a subfield of machine learning. While classical algorithms often rely on simple mathematical formulas, deep learning algorithms evolve independently and create new model layers of their own within a neural network.
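To hint at what "layers" means in practice, the following sketch stacks two layers of such threshold neurons. The hand-picked weights are an assumption for illustration; in real deep learning they would be adjusted automatically during training.

```python
# Tiny illustration of "layers": the output of one layer of artificial neurons
# becomes the input of the next. The weights here are set by hand; in real
# deep learning they would be learned from data.
def layer(inputs, weights, thresholds):
    """Apply several threshold neurons to the same inputs."""
    outputs = []
    for w, t in zip(weights, thresholds):
        activation = sum(i * wi for i, wi in zip(inputs, w))
        outputs.append(1 if activation >= t else 0)
    return outputs

def two_layer_network(x1, x2):
    # Hidden layer: one neuron acts like OR, the other like NAND.
    hidden = layer([x1, x2], weights=[[1, 1], [-1, -1]], thresholds=[1, -1])
    # Output layer: an AND neuron combines the hidden results.
    return layer(hidden, weights=[[1, 1]], thresholds=[2])[0]

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", two_layer_network(x1, x2))
```

Stacked together, the two layers compute something (the exclusive-or of the inputs) that no single threshold neuron can compute on its own, which gives a small taste of why adding depth adds power.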
Under the term big data, experts bundle two aspects: on the one hand, it describes the ever more rapidly growing volumes of data arising from the use of digital devices. On the other, it refers to the powerful software solutions and computer systems with which companies can quickly and conveniently handle this flood of information.
Does AI improve our everyday lives?
For intelligent systems to enrich our everyday lives, they need huge amounts of data. Big data is therefore an important prerequisite for a well-functioning AI, because such systems learn constantly by continually analyzing data, for example user data. In sports, analysis of biometric data can show how an athlete's training affects their risk of injury. Farmers can determine the optimal time for irrigating their fields. Cities use data for energy management. Medicine uses AI to detect diseases and track treatments.
The fact that artificial intelligence can lead to unexpected results is exemplified by the chatbot "Tay" on the messaging service Twitter. Its developers wanted to use it to study how an artificial intelligence learns in an everyday setting. "Tay" was switched off after less than 24 hours: the experiment of letting an artificial intelligence learn through exchanges with ordinary people went very badly. The chatbot, which was modeled on an average teenager, turned into an internet troll spreading hatred and incitement within a matter of hours, because Twitter users had found a weakness in the bot and deliberately taught it negative behavior.