As early as 1966, the German American computer scientist Joseph Weizenbaum demonstrated Eliza, a computer program with a simple dialog algorithm that could hold an interactive conversation with a human. The tendency of people to treat such a program as if it really understood them has since been called the Eliza Effect, and the principle behind the program is still the basis of all chatbots today. Bot is short for robot. A chatbot is nothing more than a computer program that automatically responds to questions posed by a human partner in a way that more or less makes sense. How’s the weather? Where can I buy my favorite pants on sale? What can I make with milk, 3 eggs and apples?
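To give a rough idea of what such a simple dialog algorithm can look like, here is a minimal Python sketch in the spirit of Eliza. It is not Weizenbaum’s original program; the patterns, word reflections and fallback phrase are invented purely for illustration.

```python
import re

# Minimal Eliza-style sketch (illustration only, not Weizenbaum's original code):
# match a few patterns in the user's input and turn statements into questions.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i can't (.*)", re.IGNORECASE), "What makes you think you can't {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"(.*)\?$"), "That's not an easy question."),
]

def reflect(phrase):
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(text):
    text = text.strip().rstrip(".")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Go on."  # generic filler, much like Eliza's stock phrases

print(respond("I can't start at the beginning."))  # What makes you think you can't start at the beginning?
print(respond("Do you even care?"))                # That's not an easy question.
print(respond("But you must care."))               # Go on.
```

Even this toy version never understands anything; it only rearranges the user’s own words, which is exactly what makes the Eliza Effect so striking.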
Lots of chatbots use databases to call up their answers, based on keywords and text modules. When they are asked a question, they merely look up the matching response in the database. Usually, they are programmed to help in everyday life: they can be found everywhere today as weather bots or media bots that make internet searches easier, or as helpful customer service contacts on websites. The next step is bots that can understand spoken language, like the digital assistants Siri or Alexa. These rely on artificial intelligence. Unlike database bots, intelligent bots are programmed to imitate human behavior more closely, and they are constantly “learning” to that end.
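A minimal sketch of such a keyword lookup might look like the following; the ANSWER_TABLE and its answers are invented placeholders standing in for a real database or live data source.

```python
# Minimal sketch of a keyword-driven "database" bot; the table below is a
# hand-made placeholder standing in for a real database or live data feed.
ANSWER_TABLE = {
    "weather": "Today it is sunny, 22 °C.",
    "pants": "Your favorite pants are on sale at the downtown store.",
    "eggs": "With milk, eggs and apples you could bake an apple pancake.",
}

def db_bot(question):
    # Return the stored text module for the first keyword found in the question.
    q = question.lower()
    for keyword, answer in ANSWER_TABLE.items():
        if keyword in q:
            return answer
    return "Sorry, I don't have an answer for that."

print(db_bot("How's the weather?"))                             # Today it is sunny, 22 °C.
print(db_bot("What can I make with milk, 3 eggs and apples?"))  # ... apple pancake.
```

The bot does nothing beyond matching keywords to stored text modules, which is why such systems feel helpful for routine questions but fall apart as soon as a question strays from what the table anticipates.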
Chatbots that appear as accounts in social networks and seem to have a real human being behind them are called “social bots”. Nowadays they can be found in large numbers on Facebook, Twitter, YouTube and other platforms, and they have become a hotly discussed issue.
Why are they so debated? Because among their many talents they can systematically collect information about other users. They also spread targeted messages in the form of comments that appear under specific products, or they push political opinions. The aim is to make a splashy showing and dominate the conversation in order to persuade people that a particular opinion prevails online. Bots can be very active, more active than humans, so they comment more, making the opinion they push seem widespread. This is what happened recently in the US elections, and the question now is what impact social bots had on how real people formed their opinions and whether they could influence how people vote.
It’s clear that social bots cannot always be spotted right away for what they are. It has been quite a while since Eliza was first developed, and the technological capabilities for simulating human behavior have advanced tremendously. Holding an error-free, coherent, in-depth dialog with a chatbot is still difficult. But more often than not it needn’t get that far: sometimes simple statements are enough to get a reaction from people and provoke discussion. And there it is again: the Eliza Effect.
“I can’t start at the beginning.”
“I understand.”
“Do you even care?”
“That’s not an easy question.”
“But you must care.”
“Go on.”
Here are a few tips for recognizing social bots: