Alexa, what do you look like?

Powering everything from voice recognition to self-driving cars and new forms of finance, A.I. algorithms are the invisible force that increasingly shapes our lives. As individuals and as a society, we need new visual metaphors to help us decide how much influence we want to give to these intangible systems – London-based creative studio FIELD specialise in making them visible.

This summer, FIELD started an extensive research project exploring the most relevant smart technologies through code-based illustrations. Drawing on an in-depth study of their logic and code, and using a new visual language, the studio created a series of illustrations that reveal the complexity and architecture of these technologies. Read more about the individual works below and in the Artist Statement.

The first five of these artworks, created exclusively for WIRED magazine, feature in the WIRED World 2018 issue, which focuses on the technological advances that will have the greatest impact in the coming year.

Personal Assistant preview by FIELD.IO
Face Synthesis preview by FIELD.IO
Individual features of a portrait being generated by neural network modules

Face Hacking


In 2017, researchers at the University of Washington managed to generate a believable video of President Obama, using only a forged audio recording – and a neural network trained on his public speeches. The lip sync is nearly perfect, and the possibilities of abuse are alarming. How much longer will we be able to trust what we see on camera?

This artwork, which generates Obama’s likeness from a multitude of software modules, illustrates how a neural network learns, in minute detail, how different sounds correspond to the movements of lips, eyes and cheeks.
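
As a rough illustration of that idea (a minimal sketch, not the University of Washington model), the following Python/TensorFlow snippet wires up a small recurrent network that maps a short window of audio features to the coordinates of a set of mouth landmarks. The layer sizes, feature format and training data are placeholder assumptions.

```python
# Minimal sketch of "audio in, lip positions out" - NOT the researchers' actual model.
import numpy as np
import tensorflow as tf

AUDIO_FRAMES, MFCC_BINS = 25, 13      # ~250 ms of audio context (assumed format)
MOUTH_LANDMARKS = 18                  # x/y pairs -> 36 output values

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(AUDIO_FRAMES, MFCC_BINS)),
    tf.keras.layers.LSTM(128),                      # summarise the audio window
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(MOUTH_LANDMARKS * 2),     # predicted landmark coordinates
])
model.compile(optimizer="adam", loss="mse")

# Placeholder training data: paired (audio window, landmark coordinates) examples.
x = np.random.randn(512, AUDIO_FRAMES, MFCC_BINS).astype("float32")
y = np.random.randn(512, MOUTH_LANDMARKS * 2).astype("float32")
model.fit(x, y, epochs=1, batch_size=32)
```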

The Next Generation of Voice Assistants


Personal assistants like Alexa, Siri and Cortana will get even smarter in 2018. A computer science breakthrough called ‘dynamic program generation’ will allow them to understand more complex instructions and even the “intent” behind the input. They will provide responses that tap into functionality and data from all the apps you use on your connected devices.

This illustration shows the natural language processing algorithm SyntaxNet: the voice input enters as a soundwave at the bottom layer, is parsed into phonemes, and is then processed across multiple, dynamically re-arranging layers to extract the user’s request and form a response.
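
To make those stages concrete, here is a deliberately toy Python sketch of such a pipeline. The function names, the phoneme strings and the keyword-matching "intent" step are invented for illustration and bear no relation to SyntaxNet's real API.

```python
# Toy pipeline from soundwave to structured request (hypothetical stages only).
from dataclasses import dataclass

@dataclass
class Request:
    intent: str
    slots: dict

def parse_phonemes(waveform):
    """Bottom layer: split the incoming soundwave into phoneme-like units (stubbed)."""
    return ["t", "er", "n", "aa", "n", "dh", "ax", "l", "ay", "t", "s"]

def phonemes_to_words(phonemes):
    """Middle layers: map phoneme sequences to words (stubbed)."""
    return ["turn", "on", "the", "lights"]

def extract_request(words):
    """Upper layers: parse the sentence and recover the user's intent."""
    if "turn" in words and "on" in words:
        return Request(intent="device.power_on", slots={"device": words[-1]})
    return Request(intent="unknown", slots={})

if __name__ == "__main__":
    fake_waveform = [0.0] * 16000                       # one second of silence
    words = phonemes_to_words(parse_phonemes(fake_waveform))
    print(extract_request(words))                       # -> intent 'device.power_on'
```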

Personal Assistant preview by FIELD.IO
Voice Recognition Neural Network of an A.I.-driven Personal Assistant
GAN preview by FIELD.IO
Opposing algorithms in a Generative Adversarial Network – one generates, one validates.

Algorithms get Creative


Machine-learning algorithms usually rely on feedback from humans to help them improve – but AI researchers are excited by a technique that removes the human from the loop: generative adversarial networks.

The idea, previously thought impossible, is to pit two machine-learning programs against each other: one creates something, the other acts as critic.

Amazon is testing an application in which networks analyse images and then create similar ones. Although they can currently only create tiny images, the technique might one day be used in film-making.
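
For readers who want to see the generator-versus-critic idea in code, here is a minimal, generic GAN training step in Python/TensorFlow. It is a sketch of the general technique, not Amazon's system or the network FIELD visualised, and the tiny 8x8 "images" and layer sizes are arbitrary.

```python
# Minimal generative adversarial network sketch: the generator tries to produce
# convincing 8x8 "images", the discriminator (the critic) tries to spot the fakes.
import tensorflow as tf

LATENT = 16
generator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LATENT,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(8 * 8, activation="sigmoid"),
    tf.keras.layers.Reshape((8, 8, 1)),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8, 8, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                     # real-vs-fake logit
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], LATENT])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: label real as 1, fake as 0. Generator: try to fool the critic.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss
```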

Seeing the World through the Eyes of a Self-Driving Car


To navigate the world safely, autonomous vehicles must build a picture of it. To do this, an algorithm integrates real-time feeds from a multitude of sensors including video, infrared, radar and ultrasound. It then passes that data through up to 150 processing stages and filters informed by prior learning.

This image is based on Inception, Google’s image recognition model, and shows the inputs (on the right) being pulled in and processed (top left) into a model of the road ahead. Other vehicles are represented by the red boxes.
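
The snippet below shows one small, self-contained piece of such a pipeline in Python: classifying a single camera frame with a pretrained Inception model (the InceptionV3 weights that ship with Keras). A production driving stack would fuse many sensor feeds and use detection and tracking networks, so treat this purely as a sketch of the image-recognition step.

```python
# Classify one camera frame with Google's pretrained InceptionV3 model.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)

model = InceptionV3(weights="imagenet")          # downloads pretrained weights

# Placeholder camera frame; in practice this would come from the live video feed.
frame = np.random.randint(0, 255, (299, 299, 3), dtype=np.uint8)

batch = preprocess_input(np.expand_dims(frame.astype("float32"), axis=0))
predictions = model.predict(batch)
for _, label, score in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {score:.2f}")
```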

Self-Driving Car preview by FIELD.IO
Environment Perception in an Autonomous Driving system – multiple sensor inputs are fused into a real-time environment model, then interpreted by a neural network.
Blockchain preview by FIELD.IO
Visualising a day of Ethereum transactions

Following the Money Trail


With initial coin offerings attracting attention and governments testing their own cryptocurrencies, digital money will continue to grow in influence in 2018. This image depicts transactions in Ethereum, an open-source computing system that allows developers to create blockchain-based applications.

Each square represents an entry in the distributed ledger that makes up the blockchain, each following on from the last. The squares’ colours are determined by the amount of money being moved.
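
As an illustration of that colour mapping (a hypothetical sketch, not FIELD's actual rendering code), the following Python function assigns each transaction an RGB colour based on the amount of Ether moved, on a log scale so that small and very large transfers both stay distinguishable.

```python
# Map a transaction's value in Ether to an RGB colour (cold = small, hot = large).
import math

def value_to_colour(value_eth):
    # Assumed working range: roughly 0.001 to 10,000 ETH, normalised on a log scale.
    t = (math.log10(max(value_eth, 0.001)) + 3) / 7
    t = min(max(t, 0.0), 1.0)
    return (int(255 * t), 32, int(255 * (1 - t)))    # blue -> red

day_of_transactions = [0.05, 1.2, 300.0, 0.002, 12_500.0]   # placeholder values
squares = [value_to_colour(v) for v in day_of_transactions]
print(squares)
```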

Technical Approach


Neural networks come in many different forms depending on their use case. Within the research community they are typically described through graphs of connected boxes that define how data flows through the neurons. To execute a neural network, you typically need a software framework that lets you design networks and then run them on high-performance hardware.

We’ve exploited this and modified the commonly used TensorFlow framework, developed by Google, to output information about a graph’s structure and behaviour; this formed the underlying dataset for our visuals.
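
A much lighter-weight version of that idea, without modifying the framework itself, is to trace a small network and walk the resulting TensorFlow graph, printing each operation's name, type and connections; that is the kind of structural data the visuals were built from. The toy network below is an invented example, not one of the networks in the artworks.

```python
# Walk a TensorFlow graph and dump its structure: one line per operation.
import tensorflow as tf

@tf.function
def tiny_network(x):
    w = tf.constant([[0.5, -0.5], [1.0, 0.25]])
    return tf.nn.relu(tf.matmul(x, w))

# Tracing the function yields a concrete graph we can inspect.
concrete = tiny_network.get_concrete_function(
    tf.TensorSpec(shape=(1, 2), dtype=tf.float32))

for op in concrete.graph.get_operations():
    inputs = [t.name for t in op.inputs]
    outputs = [t.name for t in op.outputs]
    print(f"{op.type:>12}  {op.name:<20} inputs={inputs} outputs={outputs}")
```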

Process documentation (Face Synthesis) by FIELD.IO
Process documentation (GAN) by FIELD.IO