Google’s image recognition brings a whole new meaning to cloud watching

Some of the most cutting-edge work with artificial intelligence today revolves around teaching computers to recognize speech and images. Google, one of the front-runners in this field, says it trains an artificial neural network to recognize speech or images by showing it millions of examples and gradually adjusting the network's parameters until it gets the right answers.

For image recognition, an image is fed through 10-30 stacked layers of artificial neurons, each of which progressively extracts higher-level features of the image. The first layer might look for edges or corners, intermediate layers look for overall shapes, and the final layer decides what the image shows. By asking the network to emphasize its higher-level layers, Google can make the computer find things in the image that are more sophisticated than what is actually there.
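The layered feature extraction described above can be sketched with plain NumPy. This is a toy illustration, not Google's network: the kernels are hand-chosen (a real network learns them from millions of examples), and the tiny 8x8 "image" is made up for the demo.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Layer 1: a hand-chosen vertical-edge detector (a trained network learns these).
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Layer 2: averages nearby edge responses, i.e. a crude "shape" feature.
shape_kernel = np.ones((3, 3)) / 9.0

# A toy 8x8 image containing a bright square.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

layer1 = np.maximum(conv2d(image, edge_kernel), 0.0)   # ReLU keeps positive responses
layer2 = np.maximum(conv2d(layer1, shape_kernel), 0.0)

# Each layer's map shrinks and becomes more abstract: (8,8) -> (6,6) -> (4,4)
print(layer1.shape, layer2.shape)
```

The first layer fires where the square's edges are; the second layer fires where several edge responses cluster, a step toward recognizing a "shape."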

In the image above, Google had its neural network “over-interpret” an image of the sky. The network was trained mostly on images of animals, so it found amazing creatures like the “pig-snail,” the “camel-bird,” and the “dog-fish” in this image.

“We ask the network: ‘Whatever you see there, I want more of it!’ This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird,” write Alexander Mordvintsev, Christopher Olah and Mike Tyka of Google.
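The feedback loop the researchers describe is, in essence, gradient ascent on the input image: measure how a chosen feature's activation changes with each pixel, then nudge the pixels to increase that activation. A minimal sketch of the idea, using a hypothetical single fixed "layer" (a random linear filter plus ReLU) in place of a trained deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical fixed "layer": random filter weights followed by ReLU.
# In DeepDream this would be a deep layer of a trained network like Inception.
W = rng.normal(size=(16,))

# Start the "image" slightly aligned with the filter so the unit is active.
x = 0.01 * W.copy()

def activation(x):
    """ReLU(W . x): the feature response we ask the network for 'more of'."""
    return max(np.dot(W, x), 0.0)

before = activation(x)
for _ in range(50):
    # Gradient of ReLU(W . x) w.r.t. x is W while the unit is active, else 0.
    grad = W if np.dot(W, x) > 0 else np.zeros_like(W)
    x = x + 0.01 * grad          # nudge the image toward "more of it"
after = activation(x)

print(before, "->", after)
```

Each step makes the input look a little more like whatever excites the chosen feature, which is exactly the bird-amplifying loop quoted above, just in sixteen numbers instead of pixels.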

For other amazing quirks and applications of the neural network, check out Google’s original blog post here.

