It looks like Artificial Intelligence (AI) has developed its own language, but some experts are skeptical of the claim.
- The AI system called DALL-E2 appears to have created its own system of written communication.
- Some experts say that the apparent language may just be gibberish.
- It's an example of how hard it is to interpret the results of advanced AI systems.
OpenAI’s text-to-image AI system called DALL-E2 appears to have created its own system of written communication. It’s an example of how hard it is to interpret the results of advanced AI systems.
“Because of the size and depth of large models, it is very difficult to explain model behavior,” Teresa O’Neill, the director of solutions architecture for natural language understanding at iMerit, told Lifewire in an email interview. “This is one of the core challenges, and in some cases, ethical issues with increasingly powerful models. If we can’t explain why they behave as they do, can we predict their behavior or keep it in line with our norms and expectations?”
AI Chats
Computer science student Giannis Daras recently noted that the DALL-E2 system, which creates images based on text input, would return nonsense words as text under certain circumstances.
“A known limitation of DALLE-2 is that it struggles with text,” he wrote in a paper published on the preprint server arXiv. “For example, text prompts such as: ‘An image of the word airplane’ often lead to generated images that depict gibberish text.”
But, wrote Daras, there may be a method behind the apparent gibberish. “We discover that this produced text is not random, but rather reveals a hidden vocabulary that the model seems to have developed internally,” he continued. “For example, when fed with this gibberish text, the model frequently produces airplanes.”
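Anyone with API access can try a rough version of this probe. The sketch below uses the OpenAI Python SDK's image-generation endpoint; the model name, prompts, and the gibberish phrase fed back in the second step are illustrative assumptions drawn from Daras's description, not his actual experimental setup.

```python
# Minimal sketch of the two-step probe Daras describes, assuming the OpenAI
# Python SDK (openai>=1.0) and an API key with image-generation access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: ask for an image of the word "airplane". Per Daras, the text
# rendered inside the picture is often gibberish rather than the word itself.
first = client.images.generate(
    model="dall-e-2",
    prompt="An image of the word airplane",
    n=1,
    size="512x512",
)
print("Step 1 image:", first.data[0].url)

# Step 2: feed a gibberish phrase back in as its own prompt. Daras reports the
# model then tends to draw a consistent subject; "Apoploe vesrreaitais" is one
# phrase he reported as reliably yielding birds.
gibberish = "Apoploe vesrreaitais"
second = client.images.generate(
    model="dall-e-2",
    prompt=gibberish,
    n=1,
    size="512x512",
)
print("Step 2 image:", second.data[0].url)
```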
In his tweet, Daras pointed out that when DALL-E2 was asked to subtitle a conversation between two farmers, it showed them talking, but the speech bubbles were filled with what looked like nonsensical words. Daras discovered, however, that the words did appear to have their own meaning to the AI: the farmers were talking about vegetables and birds.
Nicola Davolio, the CEO of tech company Hupry, which works with AI, explained to Lifewire in an email interview that the language is based on symbols that the DALL-E2 system has learned to associate with certain concepts. For example, the symbol for “dog” might be related to a picture of a dog, while the symbol for “cat” might be associated with a picture of a cat. Davolio suggested DALL-E2 created its own language because it enables the system to communicate more effectively with other AI systems.
“The language is composed of symbols that look like Egyptian hieroglyphs and doesn’t appear to have any specific meaning,” he added. “The symbols are probably meaningless to humans, but they make perfect sense to the AI system since it’s been trained on millions of images.”
Researchers believe that the AI system created the language to help it better understand the relationships between images and words, Davolio said.
“They aren’t sure why the AI system developed its language, but they suspect it may have something to do with how it was learning to create images,” Davolio added. “It’s possible that the AI system developed its language to make communication between different network parts more efficient.”
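Under the hood, DALL-E2 builds on CLIP, OpenAI's model that maps text and images into a shared embedding space, which is one concrete way to read Davolio's description of symbols associated with concepts. A rough way to check whether a gibberish phrase sits near a real concept in that space is sketched below using the open-source CLIP package from OpenAI's GitHub repository; this probes CLIP rather than DALL-E2 itself, and the image filename is a placeholder.

```python
# Sketch: compare a gibberish phrase against ordinary captions for one image,
# using OpenAI's open-source CLIP model (not DALL-E2 itself).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate captions, including the "hidden vocabulary" phrase from Daras's thread.
prompts = ["Apoploe vesrreaitais", "a photo of a bird", "a photo of a vegetable"]
text_tokens = clip.tokenize(prompts).to(device)

# Placeholder filename: any local photo of a bird.
image = preprocess(Image.open("bird.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    text_features = model.encode_text(text_tokens)
    image_features = model.encode_image(image)

# Cosine similarity between the image and each caption. If the gibberish phrase
# really lands near "bird" in the shared space, its score should track the bird caption.
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
scores = (image_features @ text_features.T).squeeze(0)
for prompt, score in zip(prompts, scores.tolist()):
    print(f"{prompt!r}: {score:.3f}")
```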
AI Mysteries
DALL-E2 isn’t the only AI system that has developed its own internal language, Davolio pointed out. In 2017, Google’s AutoML system created a new form of neural architecture called a ‘child network’ after being left to decide how best to complete a given task. This child network could not be interpreted by its human creators.
“These examples are just a few instances in which AI systems have developed ways of doing things that we can’t explain,” Davolio said. “It’s an emerging phenomenon that is fascinating and alarming in equal measure. As AI systems become more complex and autonomous, we may increasingly find ourselves in the position of not understanding how they work.”
O’Neill said that she doesn’t think DALL-E2 is creating its own language. Instead, she said the reason for the apparent linguistic invention is probably a bit more prosaic.
“One plausible explanation is random chance. In a model that large, a bit of Murphy’s Law might apply: if a weird thing can happen, it probably will,” O’Neill added. Another possibility, suggested by research analyst Benjamin Hilton in a Twitter thread discussing Daras’s findings, is that the phrase “apoploe vesrreaitais” mimics the form of a Latin scientific name for an animal. In effect, the system has spawned a new order of Aves, O’Neill added.
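Hilton's Latin-name hypothesis is easy to illustrate with a tokenizer: DALL-E2's text encoder works on byte-pair-encoded subwords, so even a nonsense phrase gets split into pieces that can overlap with real taxonomic names. The sketch below uses the tokenizer bundled with the open-source CLIP package; the comparison word "Apodidae" (the swift family) is an illustrative choice, not necessarily the name Hilton had in mind.

```python
# Sketch: inspect the subword pieces a CLIP-style BPE tokenizer produces for
# the gibberish phrase versus a real Latin bird name and a plain caption.
from clip.simple_tokenizer import SimpleTokenizer

tok = SimpleTokenizer()

for phrase in ["apoploe vesrreaitais", "Apodidae", "a photo of a bird"]:
    ids = tok.encode(phrase)                # BPE token ids
    pieces = [tok.decoder[i] for i in ids]  # human-readable subword pieces
    print(f"{phrase!r} -> {pieces}")
```

If the gibberish phrase shares subword pieces with Latin animal names, the model may simply be reusing associations learned from scientific captions rather than inventing a language.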
“Puzzles like the apparently hidden vocabulary of DALL-E2 are fun to wrestle with, but they also highlight heavier questions around the risk, bias, and ethics in the often inscrutable behavior of large models,” O’Neill said.