Imagine an orange cat. Now, imagine the same cat, but with coal-black fur. Now, imagine the cat strutting along the Great Wall of China. As you do this, a quick series of neuron activations in your brain conjures up variations of the picture presented, based on your prior knowledge of the world.
In other words, as humans, it’s easy to envision an object with different attributes. But, despite advances in deep neural networks that match or surpass human performance in certain tasks, computers still struggle with the very human skill of “imagination.”
Now, a USC research team has developed an AI that uses human-like capabilities to imagine a never-before-seen object with different attributes. The paper, titled "Zero-Shot Synthesis with Group-Supervised Learning," was presented at the 2021 International Conference on Learning Representations (ICLR) on May 7.
“We were inspired by human visual generalization capabilities to try to simulate human imagination in machines,” said the study’s lead author Yunhao Ge, a computer science PhD student working under the supervision of Laurent Itti, a computer science professor.
“Humans can separate their learned knowledge by attributes (for instance, shape, pose, position, color) and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.”
AI’s generalization problem
Say, for instance, that you want to create an AI system that generates images of cars. Ideally, you would provide the algorithm with a few images of a car, and it would be able to generate many types of cars (from Porsches to Pontiacs to pick-up trucks) in any color, from multiple angles.
This is one of the long-sought goals of AI: creating models that can extrapolate. This means that, given a few examples, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn’t seen before. But machines are most commonly trained on sample features, pixels for instance, without taking the object’s attributes into account.
The science of imagination
In this new study, the researchers attempt to overcome this limitation using a concept called disentanglement. Disentanglement can be used to generate deepfakes, for instance, by disentangling human face movements from human face identity. By doing this, said Ge, “people can synthesize new images and videos that substitute the original person’s identity with another person, but keep the original movement.”
Similarly, the new approach takes a group of sample images, rather than one sample at a time as traditional algorithms have done, and mines the similarity between them to achieve something called “controllable disentangled representation learning.”
Then, it recombines this knowledge to achieve “controllable novel image synthesis,” or what you might call imagination. “For instance, take the Transformers movies as an example,” said Ge. “It can take the shape of a Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result will be a Bumblebee-colored Megatron car driving in Times Square, even if this sample was not witnessed during the training session.”
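The attribute-swap idea described above can be sketched in a few lines of Python. The encoder here is a stand-in, and the slice layout (which latent dimensions hold shape, color, and background) is a hypothetical choice for illustration only, not the architecture from the paper:

```python
import numpy as np

# A disentangled latent code is partitioned into named attribute slices.
# In the real model, a trained encoder network produces these codes;
# here we just treat feature vectors as already-disentangled latents.
SLICES = {"shape": slice(0, 4), "color": slice(4, 8), "background": slice(8, 12)}

def encode(features):
    # Stand-in encoder: maps an "image" to a latent vector.
    return np.asarray(features, dtype=float)

def swap_attributes(z_target, z_source, attributes):
    # Copy the chosen attribute slices from z_source into z_target,
    # leaving the remaining slices untouched -- the "imagination" step.
    z_new = z_target.copy()
    for name in attributes:
        z_new[SLICES[name]] = z_source[SLICES[name]]
    return z_new

# Two toy latents standing in for two training images.
megatron = encode(np.arange(12))          # contributes the shape
bumblebee = encode(np.arange(12) + 100)   # contributes color and background

# Imagine: Megatron's shape with Bumblebee's color and background.
z_imagined = swap_attributes(megatron, bumblebee, ["color", "background"])

print(z_imagined[SLICES["shape"]])   # kept from megatron: [0. 1. 2. 3.]
print(z_imagined[SLICES["color"]])   # taken from bumblebee: [104. 105. 106. 107.]
```

A decoder network would then map `z_imagined` back to pixels, producing an image whose attribute combination never appeared in the training data.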
This is similar to how we as humans extrapolate: when a human sees a color on one object, we can easily apply it to any other object by substituting the original color with the new one. Using their technique, the group generated a new dataset containing 1.56 million images that could help future research in the field.
Understanding the world
While disentanglement is not a new idea, the researchers say their framework is compatible with nearly any type of data or knowledge, which widens the opportunity for applications. For instance, race- and gender-related knowledge could be disentangled to make fairer AI systems, removing sensitive attributes from the equation altogether.
In the field of medicine, it could help doctors and biologists discover more useful drugs by disentangling a medicine’s function from its other properties, then recombining them to synthesize new medicines. Imbuing machines with imagination could also help create safer AI by, for instance, allowing autonomous vehicles to imagine and avoid dangerous scenarios previously unseen during training.
“Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique,” said Itti. “This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in AI systems, bringing them closer to humans’ understanding of the world.”