Introductions

Artificial Intelligence Can Create Faces Never Before Seen

Google AI Faces
By Graham Templeton on June 8, 2017
When you’re an A.I. researcher at Google, even your days off are filled with neural nets. Mike Tyka is a Google scientist who recently helped create the company’s DeepDream project, but this week he posted details of a personal project that could someday make DeepDream seem primitive. That famous program works essentially by blending together elements of existing pictures and then modifying the collage; Tyka’s new approach takes a much more difficult and potentially more rewarding path: teaching an A.I. to create all-new portraits from scratch.
“I don’t mind if the results are not necessarily realistic but fine texture is important no matter what even if it’s surreal but [high-resolution] texture,” Tyka commented Tuesday on his blog.
How It Works
The approach uses a “generative adversarial network” (GAN) to refine the A.I.’s abilities over time. A GAN is a pair of neural networks that work in opposition to one another: one network (the generator) draws a picture from scratch, while the other (the discriminator, the adversarial part) tries to tell whether a given picture is real or A.I.-generated. Over time the generator trends toward better and better looking portraits, as it learns to trick its adversary into misidentifying its creations as real. Each time that happens, the discriminating network learns from its mistake and gets better at picking out fakes in the future. In this way, the generative and adversarial halves of the system progress together, each driving the other’s evolution.
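For the curious, here is a minimal sketch of that adversarial loop in code (PyTorch, with toy architectures and random stand-in data – illustrative assumptions on my part, not Tyka’s actual setup):

```python
# Minimal GAN training loop sketch. Architectures, sizes, and data
# here are illustrative assumptions, not the setup from the article.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: maps random noise to a flattened "image".
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator (the adversary): outputs P(image is real).
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, img_dim)   # stand-in for a batch of real portraits
    fake = G(torch.randn(32, latent_dim))

    # 1) Train the discriminator to tell real from generated.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator:
    #    label its fakes as "real" and follow that gradient.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The key move is step 2: the generator is rewarded precisely when the discriminator mislabels its fakes as real, which is what pushes both networks forward together.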


News, Updates & Videos

REACTION

REACTION from dfmn on Vimeo.

VISUALS: DFMN
AUDIO: CYPHERAUDIO

dfmn.work
cypher.audio

soundcloud.com/cypheraudio/breathe
behance.net/gallery/48608047/REACTION

News, Updates & Videos

Journey through the layers of the mind

Journey through the layers of the mind from Memo Akten on Vimeo.

First tests playing with #deepdream #inceptionism.

A visualisation of what’s happening inside the mind of an artificial neural network.

In non-technical speak:

An artificial neural network can be thought of as analogous to a brain (immensely, immensely simplified; really nothing like a brain). It consists of layers of neurons and connections between those neurons. Information is stored in the network as the ‘weights’ (strengths) of the connections between neurons. Low layers (i.e. closer to the input, the ‘eyes’) store and recognise low-level features (corners, edges, orientations, etc.), while higher layers store and recognise higher-level features. This is analogous to how information is stored in the mammalian cerebral cortex (e.g. our brain).
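As a concrete (and heavily simplified) picture of those layers, here is a toy stack in PyTorch – the layer roles in the comments reflect the standard intuition described above, not anything specific to this toy:

```python
# A toy stack of layers, as a concrete picture of "layers of neurons
# with weighted connections". Illustrative only; real networks such as
# GoogLeNet are far deeper.
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # low layer: edges, corners
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # mid layer: textures, simple shapes
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # higher layer: object parts
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),                           # top: class scores
)

# The "information" the network stores lives in these weights.
print(sum(p.numel() for p in net.parameters()), "weights")
```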

Here a neural network has been ‘trained’ on thousands of images – i.e. the images have been fed into the network, and the network has ‘learnt’ about them (establishing a weight / strength for each connection). (NB: the specific database of images fed into the network is known as ImageNet: image-net.org/explore )
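In code, such a pre-trained network can simply be loaded. Here torchvision’s GoogLeNet with ImageNet weights stands in for the trained model (an illustrative choice, though the original Inceptionism work did use a GoogLeNet trained on ImageNet):

```python
# Load a network already "trained on thousands of images" (ImageNet).
import torchvision.models as models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
```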

Then, when the network is fed a new, unknown image (e.g. me), it tries to make sense of (i.e. recognise) this new image in the context of what it already knows, i.e. what it’s already been trained on.

This can be thought of as asking the network “Based on what you’ve seen / what you know, what do you think this is?”, and is analogous to you recognising objects in clouds or in ink-blot / Rorschach tests.

The effect is further exaggerated by encouraging the algorithm to generate an image of what it ‘thinks’ it is seeing, and feeding that image back into the input. Then it’s asked to reevaluate, creating a positive feedback loop, reinforcing the biased misinterpretation.
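Roughly, that feedback loop looks like this (a minimal PyTorch sketch in the spirit of DeepDream, not the exact published code; the layer choice, step size, and random starting image are illustrative assumptions):

```python
# Minimal DeepDream-style feedback loop sketch. Layer choice and
# step size are illustrative, not the published DeepDream settings.
import torch
import torchvision.models as models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

activations = {}
def hook(module, inp, out):
    activations["layer"] = out

# Pick an intermediate layer; deeper layers amplify higher-level features.
model.inception4c.register_forward_hook(hook)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input image

for _ in range(20):
    model(img)
    # "Encourage the network to see more of what it already sees":
    # gradient ascent on the chosen layer's activations.
    loss = activations["layer"].norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)  # keep pixel values in a valid range
```

Each pass nudges the image toward whatever the chosen layer responds to, then feeds the nudged image back in – the positive feedback loop that reinforces the biased misinterpretation.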

This is like asking you to draw what you think you see in the clouds, then asking you to look at your drawing and draw what you think you see in that drawing, and so on.

That last sentence is actually not fully accurate. It would be accurate if, instead of asking you to draw what you think you saw in the clouds, we scanned your brain, looked at a particular group of neurons, reconstructed an image from the firing patterns of those neurons – the in-between representational states in your brain – and gave *that* image to you to look at. Then you would try to make sense of (i.e. recognise) *that* image, and the whole process would be repeated.

We aren’t actually asking the system what it thinks the image is; we’re extracting the image from somewhere inside the network – from any one of its layers. Since different layers store different levels of abstraction and detail, picking different layers to generate the ‘internal picture’ highlights different features.
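A sketch of that extraction, grabbing activations from a low and a high layer of the same (illustrative) GoogLeNet:

```python
# Extract "internal pictures" from different depths of the network.
# Layer names are from torchvision's GoogLeNet, an illustrative choice.
import torch
import torchvision.models as models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

grabbed = {}
def grab(name):
    def hook(module, inp, out):
        grabbed[name] = out
    return hook

model.inception3b.register_forward_hook(grab("inception3b"))  # low: fine textures
model.inception4e.register_forward_hook(grab("inception4e"))  # high: object-like forms

model(torch.rand(1, 3, 224, 224))
for name, act in grabbed.items():
    print(name, tuple(act.shape))  # higher layers: coarser grid, more channels
```

The higher layer has a coarser spatial grid and more abstract channels, which is why amplifying it tends to produce whole object-like forms rather than fine texture.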

All based on the Google research by Alexander Mordvintsev (Software Engineer), Christopher Olah (Software Engineering Intern), and Mike Tyka (Software Engineer):
googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html
github.com/google/deepdream