So there you have it: do androids dream of electric sheep? The answer appears to be no – but Philip K. Dick was on the right track, because they do dream of mesmerising, multicoloured landscapes.
The lead engineers behind Google’s A.I. are using “inception” to test out their artificial neural networks – a strategy that has led to some very handsome, and slightly disturbing, artwork. What does a fake brain that’s trained to detect images of dogs see when it’s shown a picture of a knight?
The AI software has been ‘taught’ to recognise features such as people and animals using millions of photos. To push the machines further, Google told the networks to amplify and over-interpret images: whenever they thought they spotted something, they were told to make it more like that. The modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. At first each tweaked image looks much the same to a human, but different to a neural network.
But the networks aren’t restricted to only identifying images. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows.
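That layered structure can be sketched as a stack of simple transformations, each summarising the previous one’s output into fewer, higher-level features until a final “decision” layer. Everything below – the weights, the layer sizes, the four class slots – is an invented stand-in for illustration, not Google’s actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity: keep positive responses, zero out the rest.
    return np.maximum(x, 0.0)

# A toy "image" flattened into a vector, and three layers of shrinking size:
# each layer condenses the previous one into fewer, higher-level features.
image = rng.random(64)
layer_sizes = [64, 32, 16, 4]  # 4 final "classes" (e.g. dog, fish, lizard, bird)
weights = [rng.standard_normal((n_out, n_in)) * 0.1
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

activation = image
for i, W in enumerate(weights, start=1):
    activation = relu(W @ activation)
    print(f"layer {i}: {activation.size} features")

# The final layer's strongest activation is the network's "decision".
decision = int(np.argmax(activation))
print("predicted class index:", decision)
```

The point of the sketch is only the shape of the pipeline: 64 raw inputs are boiled down, layer by layer, to a handful of high-level scores.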
The image recognition software has already made it into consumer products.
We then pick a layer and ask the network to enhance whatever it detected. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
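That “enhance whatever it detected” step is essentially gradient ascent on the image itself. A minimal sketch, assuming a single made-up linear-plus-ReLU layer in place of the real network: nudge the image (not the weights) so the chosen layer’s total activation grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy layer: linear weights followed by ReLU, standing in for one
# layer of the real network. Sizes and values are arbitrary.
W = rng.standard_normal((8, 16)) * 0.5
x = rng.random(16)  # the "image" we will modify

def layer_activation(x):
    return np.maximum(W @ x, 0.0)

def total_activation(x):
    # The quantity we ask the network to enhance.
    return layer_activation(x).sum()

before = total_activation(x)
for _ in range(50):
    # Analytic gradient of sum(relu(W @ x)) with respect to x:
    # each active unit (pre-activation > 0) contributes its weight row.
    active = (W @ x) > 0
    grad = W.T @ active.astype(float)
    x = x + 0.01 * grad  # gradient ascent: change the image, not the weights

after = total_activation(x)
print(f"layer activation before: {before:.2f}, after: {after:.2f}")
```

Running the real thing over a deep convolutional layer, instead of this one-layer stand-in, is what paints the strokes and ornament-like patterns onto the image.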
Then things get really interesting. The real fun starts when the neural network is fed an image and asked to search for small, subtle things it might recognise.
Yes – those are fantastical creatures created entirely by an artificial neural network looking for animals in an image of clouds. “This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere,” the researchers explain.
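That pass-after-pass feedback loop can be sketched with a toy stand-in: pick whichever feature the network already faintly sees (the “bird” in the clouds), then repeatedly modify the image so that feature fires more strongly on the next pass. The weights, sizes, and step size below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in layer: positive weights so every unit responds a little.
W = rng.random((5, 12)) * 0.5
image = rng.random(12)  # a toy "cloud photo"

def responses(img):
    # One pass through the stand-in layer (linear weights + ReLU).
    return np.maximum(W @ img, 0.0)

# The feature the network already sees most strongly -- the faint "bird".
bird = int(np.argmax(responses(image)))

history = []
for _ in range(20):
    history.append(responses(image)[bird])
    # Make the image slightly more "bird-like", then feed it back in.
    image = image + 0.05 * W[bird]

print(f"'bird' response grew from {history[0]:.3f} to {history[-1]:.3f}")
```

Each pass strengthens the very feature the last pass emphasised, which is why a barely-there shape can snowball into a highly detailed creature.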
You can look at the results of this work as pure art, but that would be missing the main message.
After “teaching” an artificial neural network to recognize certain objects, animals, and buildings – a huge feat in AI, and one that takes a pretty big artificial brain to pull off – the researchers then threw the system a loop, literally. It turns out you can do some other incredible things with Google’s artificial neural networks. The network was trained mainly on animal images, so expect to see a lot of dogs and fish and lizards and birds. The large-scale features the networks use don’t correspond to the sort of neat prototypes we might imagine, but to messy parts of things that can be put together in ways the network has never seen – is this imagination or creativity?
Below are some more images the networks created in their feedback loops (in addition to the one at the top of this story).
The entire gallery can be found here.