The Future Of AI Lies In An Avocado Armchair

Uh, what? OpenAI has built a new model called DALL·E that combines language and images in a way that could make artificial intelligence algorithms better at understanding both words and the things they refer to. In short, it is an attempt to build an AI that grasps what words and sentences actually mean. Now where does the avocado armchair come in? Technology Review has that one covered:


To test DALL·E’s ability to work with novel concepts, the researchers gave it captions that described objects they thought it would not have seen before, such as “an avocado armchair” and “an illustration of a baby daikon radish in a tutu walking a dog.” In both these cases, the AI generated images that combined these concepts in plausible ways.

The armchairs in particular all look like chairs and avocados. “The thing that surprised me the most is that the model can take two unrelated concepts and put them together in a way that results in something kind of functional,” says Aditya Ramesh, who worked on DALL·E. This is probably because a halved avocado looks a little like a high-backed armchair, with the pit as a cushion. For other captions, such as “a snail made of harp,” the results are less good, with images that combine snails and harps in odd ways.

DALL·E is the kind of system that Mark Riedl, an AI researcher at Georgia Tech, imagined submitting to the Lovelace 2.0 test, a thought experiment he came up with in 2014. The test is meant to replace the Turing test as a benchmark for measuring artificial intelligence, and it assumes that one mark of intelligence is the ability to blend concepts in creative ways. Riedl suggests that asking a computer to draw a picture of a man holding a penguin is a better test of smarts than asking a chatbot to dupe a human in conversation.

Image via Technology Review 

Source: neatorama
