On uncertainty, algorithms and creativity
Art-oriented platforms now allow artists and designers to experiment with algorithmic art without advanced computational skills. Platforms like Google’s AI Experiments and Artbreeder make it possible for anyone to explore the image-making capabilities of sophisticated machine learning methods such as Generative Adversarial Networks (GANs).
Similarly, websites such as Deep Dream Generator allow anyone to remix images through algorithms that reinterpret two different images to create a new one.
The open-access nature of these projects makes it possible for people outside the computer science sphere to experiment with generative image making, share their results and process online, and be inspired by each other.
I set out to explore the creative possibilities of incorporating algorithms into my work. My personal experience joins that of other artists and designers who have learned about the transformative effects they can have on the creative process, and the opportunity of embracing uncertainty as creative fuel.
Gene-editing is the artist’s brush
Artbreeder’s image-making process centers on a technique dubbed “gene-editing”: mixing the genes of millions of different images to generate an entirely new one. Instead of choosing specific images, the artist selects categories (e.g. structures, small objects, animals) to draw 'genes' from. Intentional about what influences to bring to the piece, yet never knowing precisely what the outcome will look like after a new gene is added, the artist must constantly align each new result with their artistic vision, developing a new creative intuition in the process. Over time I learned that to add a specific color, texture or shape, I had to think in terms of objects that would contain those genes. To obtain a bright green, for example, I would incorporate a reptile gene, generate a new image, evaluate the results and align them with my artistic vision. For a bit of structure, add a chair. Generate a new image. Evaluate, tweak genetic information, repeat. Less toad, more chair.
The relationship between objects and their visual properties is strongly wired in the minds of artists and designers—when we talk about color and suggest a tomato red instead of scarlet, we are making that connection.
This process makes use of that relationship and pairs it with tremendous computational power: each possible image and its visual properties become part of the genetic mix. Adding a tomato gene would not only add bright red to the image, but would also pass down other genetic material (visual properties) such as a plump, round shape and a smooth, glossy texture, along with other likely genetic associations—seeds, knives, kitchen counters—each with their respective visual properties and artistic possibilities. This offers a glimpse of the algorithm’s inner workings, shining a light on a computational power that undoubtedly surpasses that of the human brain. AI-assisted art is born.
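Under the hood, tools like Artbreeder build on GANs, where a "gene" corresponds roughly to a vector in the model's latent space, and mixing genes amounts to a weighted blend of latent codes before decoding. The sketch below illustrates only that blending step; the category names, weights, and the absent `generator` call are hypothetical placeholders, not Artbreeder's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512  # a typical latent size for BigGAN/StyleGAN-style models

# Hypothetical "genes": latent vectors associated with source categories.
genes = {
    "tomato": rng.standard_normal(LATENT_DIM),
    "chair": rng.standard_normal(LATENT_DIM),
    "reptile": rng.standard_normal(LATENT_DIM),
}

def mix_genes(genes, weights):
    """Blend latent vectors according to artist-chosen gene weights."""
    total = sum(weights.values())
    return sum((w / total) * genes[name] for name, w in weights.items())

# "Less toad, more chair": dial the influence weights and regenerate.
latent = mix_genes(genes, {"reptile": 0.2, "chair": 0.8})
# image = generator(latent)  # a trained GAN generator would decode this
```

Adjusting a weight and re-decoding is the iterative evaluate–tweak–repeat loop described above: the artist steers influences without ever picking pixels directly.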
This image-making method encourages the artist to consider several visual properties simultaneously to inform the next step, and graphic designers are particularly primed for this level of visual thinking. In tweaking how much influence each gene has on the final piece, the artist retains creative agency but is not fully in control: the artist is intentional about what influences to bring to the piece while also responding to the algorithm’s results, so the algorithm in turn influences the artist’s vision. In Chimera, every creature is a new genetic experiment. Their appearance reflects the eclectic nature of their genetic make-up: part reptile, part building, part beehive.
AI-assisted art vs AI-created art
In this particular situation, the algorithm is not autonomous; the process requires a specific type of visual and cultural intelligence that only humans can bring. The images that emerge are ambiguous and unpredictable, so uncertainty and surprise become powerful creative drivers, allowing the artist to make images in a different way. The powerful connection between uncertainty and creativity has been explored extensively before. In her essay “Uncertainty as a Creative Force in Visual Art”, Sasha Gershin (2008) highlights the connection between uncertainty and artistic creation:
“Generally, certainty is identified with sound academic practice based on complete knowledge, where the outcome is predictable, while uncertainty belongs to the realm of incomplete knowledge and implies a surrender to chance. Here the outcome is less predictable and this uncertainty can be conscripted as an active collaborator within the process of art making.” (Gershin, 2008)
While Gershin talks about a collaboration between artist and uncertainty, this scenario adds another possibility to the mix: an active collaboration between machine and human, which has been explored before with varying degrees of success. The idea of a true creative collaboration implies that both parties—human and machine—have equal degrees of creative agency, thus assuming algorithms are capable of artistic expression. Artist and researcher Holly Herndon recently explored this idea through PROTO (2019), a musical project created with an AI entity trained to learn and generate music through call and response. Similarly, the Paris-based art collective Obvious (whose AI-generated portrait Edmond de Belamy sold for $432,500) believe machine creativity is possible: “We found that portraits provided the best way to illustrate our point, which is that algorithms are able to emulate creativity.” (Caselles-Dupré, quoted in Bastable, 2018).
Likewise, computer scientist and researcher Ahmed Elgammal and his team at Rutgers University have developed a machine learning model focused on creativity, called CAN (Creative Adversarial Networks), which modifies the GAN algorithm to reward novelty over similarity: “The system generates art by looking at art and learning about style; and becomes creative by (…) deviating from the learned styles.” (Elgammal et al., 2017). More than refining an already sophisticated tool, Elgammal and his team aspire to a true human-AI collaboration. The difference between AI-assisted art and AI-created art rests on that very question of agency. In the process described here, creative autonomy, although influenced by the algorithm, is retained at all times.
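The "deviating from learned styles" idea in CAN can be made concrete: alongside the usual real/fake signal, the generator is pushed to produce images whose style classification is maximally ambiguous. A simplified sketch of that style-ambiguity term (cross-entropy against a uniform distribution over styles; this is an illustrative reduction, not the full training objective of Elgammal et al., 2017):

```python
import numpy as np

def style_ambiguity_loss(style_logits):
    """CAN-style generator signal: penalize images the discriminator can
    confidently assign to one known art style, rewarding style ambiguity.
    Computed as cross-entropy between a uniform target over K styles and
    the predicted style distribution."""
    k = style_logits.shape[-1]
    # numerically stable softmax over the K learned art styles
    exp = np.exp(style_logits - style_logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    uniform = np.full(k, 1.0 / k)
    return -(uniform * np.log(probs + 1e-12)).sum(axis=-1)

# An image confidently classified into one style incurs a higher loss
# than one whose style is ambiguous, so minimizing this term steers the
# generator away from established styles.
confident = style_ambiguity_loss(np.array([10.0, 0.0, 0.0, 0.0]))
ambiguous = style_ambiguity_loss(np.array([0.0, 0.0, 0.0, 0.0]))
```

Minimizing this term alone would produce noise; in CAN it is balanced against the standard adversarial loss, which keeps the output recognizable as art while nudging it away from any single learned style.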
AI, culture and context
Unlike computer science, art and design engage human emotions in their process, aiding the exploration of social issues. By their nature, both disciplines also offer great tools to spark conversation and speculate about the role of AI-assisted and AI-generated images in our culture. Womaness is a reflection of what mainstream media has established as femininity: pink as the predominant color, voluptuous silhouettes, blobs of nude hues everywhere. Unlike my chimeras, the images in this series are very similar in their genetic make-up, and specific genetic variations were made to add meaning and cultural context. The process, although similar in practice, differs from that of Chimera: instead of focusing on visual properties such as color and texture, I explored the semiotic nature of stereotypical symbols of femininity (lipstick = woman; dress = feminine, etc.). The algorithm helped me create thousands of unique images made up of genes from categories such as “lipstick” and “brassiere”. The resulting images bear a vague and uncanny resemblance to what we have defined as feminine. What we see is not the algorithm’s judgement, but a direct reflection of our visual and cultural landscape. The predominance of a very limited (and very light) gamut of skin tones confirms the well-known problems of diversity and representation (or lack thereof) in our mainstream media.
Embracing a new aesthetic
The images created by algorithms look wild and untamed; chaotic yet vaguely familiar—one could say, a version of our world from an algorithm’s perspective. There is undoubtedly creative expression, but is it only the artist’s? The forms, blurry and undefined, have a raw quality and, in the era of photoshopping, it is tempting to post-process the images to fix blemishes and polish imperfections. The decision to leave the images largely untouched goes beyond embracing imperfection: the glitches are an integral part of the image, and their presence reveals its algorithmic origins.
As a new image-making method, algorithms and neural networks open up new aesthetic realms and remind us that there’s beauty and delight to be found amidst chaos. By opening up to a new process in which we relinquish some control, we are rewarded with dreamy yet uncanny resemblances of our reality. Many of these images vaguely resemble the idea of a landscape, a person or an object, but with elements too complex to be human. While their dream-like qualities could be linked to influences like the psychedelic era of the late 60s and early 70s, there’s something at once human and mechanical about how these images are rendered. Some are mesmerizing, others are uncomfortable, and many are absolutely otherworldly, recalling human experiences as extreme as altered states and as ordinary as childhood pastimes like cloud gazing. Such imagery suggests that artificial brains are more human than they may seem: “The fact that humans report that Google’s Inceptionism looks to them like what they see when they hallucinate on LSD or other drugs suggests that the machinery ‘under the hood’ in our brains is similar in some way to deep neural networks.”
These images are also a recognition that creativity evolves with the times. By examining our current understanding of creativity, we attempt to define what makes it intrinsically human and how it is subject to technological change. AI-assisted art shines a light on how human creativity shifts with our cultural context and is constantly affected by our technologies. As with other novel technologies we have learned to embrace in the past, we are in the very early stages of understanding how big an impact AI can have on our world. Alongside its multiple applications, multiple implications unfold simultaneously. Instead of elaborating on dystopian or utopian visions of the future, let’s explore the implications of the technology and the role design plays in this context.