At the 2017 SXSW festival, a performer used Einstein’s equations to draw portraits of audience members from selfies they tweeted for the event. Each drawing looked like a series of unrelated equations until it suddenly morphed into a realistic face.
The performer was UR10, a cobot — a robot whose purpose is to collaborate with humans — created by Universal Robots. It was programmed by roboticists from Deeplocal using Universal Robots’ software interface. They deliberately arranged the equations in a seemingly random order, so that the final drawing would surprise the audience at the demonstration.
Intentionally eliciting a certain emotion from the audience is a uniquely human ability. According to Hayden Smith, Creative Director of Tangible Design at Deeplocal, the execution of the project mirrors Albert Einstein’s approach to creativity; rather than subscribing to the romanticized notion that artists stumble upon an idea by chance, Einstein believed that creativity comes from repeated action, practice, and intentionality.
Smith told ARTpublika Magazine that in Einstein’s Chalkboard Project, too, “the seemingly effortless behavior of the installation is the result of months of prototyping and fine-tuning to create an environment conducive to creating accurate, repeatable drawings.” The ability to conceptualize a project that would mirror its source material in both form and execution is also uniquely human, for now.
The goal of a collaborative robot — a robot that carries out tasks with, rather than for, humans — is to combine abilities that are uniquely robotic with ones that are uniquely human. Humans can think creatively, critically, and emotionally. Robots can do other things, such as speedily drawing replicas of people’s photos in Einstein’s handwriting with a precision that humans cannot match.
“In addition to being engaging — everyone wants to see their face drawn — faces are one of the most immediately recognizable subjects,” explains Smith when asked why the roboticists at Deeplocal worked with selfies specifically. “By drawing something that people are so intimately familiar with, we were able to highlight the technical accuracy of the build, as any flaw or distortion would be immediately apparent.”
Experts at Deeplocal not only came up with the multi-layered idea but also spent a long time creating the algorithm that made the UR10 cobot draw the photos. “In the simplest sense it was a tool used and programmed by the team to execute the vision of the project,” explains Smith. “That being said, seeing the installation in action the UR10 became more of a performer, the main attraction, and the rest of the chalkboard was the setting.”
Indeed, cobots expand the capabilities and possibilities for how art, humans, and robotics interact to create something new or to influence what we already do in novel ways, like making music. Take, for example, the group of researchers from institutions across the world who created a roadmap for how one might program and use a virtual composer assistant.
First, they used a study of people’s emotional responses to music to make a semantic map of musical sounds. Then they came up with a concept for how to use the semantic map: The Composer starts composing a piece of music. The Cobot makes suggestions, which the Composer either accepts or rejects. Over time, the Cobot learns to make better predictions about the Composer’s preferences. It is a lot like Siri and other popular software that use patterns to identify our preferences.
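For readers curious about the mechanics, here is a minimal, hypothetical sketch of that suggest/accept/learn loop in Python. It is not the researchers’ actual design: the “semantic map” is reduced to two invented coordinates (valence and arousal), and the learning rule is simply a running average of the suggestions the composer accepts.

```python
# A toy sketch of the suggest/accept/learn loop described above.
# The "semantic map" is reduced to two made-up coordinates (valence, arousal),
# and learning is a running average of accepted suggestions -- an assumption
# for illustration, not the researchers' method.
import random

class ComposerAssistant:
    def __init__(self):
        # Current estimate of the composer's preferred region of the semantic map.
        self.preference = {"valence": 0.0, "arousal": 0.0}
        self.accepted = 0

    def suggest(self):
        # Propose a musical fragment near the current preference estimate,
        # with some random exploration so new territory still gets offered.
        return {
            "valence": self.preference["valence"] + random.uniform(-0.5, 0.5),
            "arousal": self.preference["arousal"] + random.uniform(-0.5, 0.5),
        }

    def feedback(self, suggestion, accepted):
        # Accepted suggestions pull the preference estimate toward themselves;
        # rejected ones are simply ignored in this toy version.
        if accepted:
            self.accepted += 1
            for key in self.preference:
                self.preference[key] += (suggestion[key] - self.preference[key]) / self.accepted

# Simulated session: the "composer" here is just a rule that likes calm, bright music.
assistant = ComposerAssistant()
for _ in range(50):
    idea = assistant.suggest()
    likes_it = idea["valence"] > 0 and idea["arousal"] < 0.3
    assistant.feedback(idea, likes_it)

print(assistant.preference)  # drifts toward the simulated composer's taste
```

Even in this stripped-down form, the basic dynamic is visible: the more the composer accepts or rejects, the more closely the assistant’s suggestions cluster around that composer’s taste.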
When conceptualizing this project, the researchers’ underlying assumption was that music is based on emotion; people respond to music according to how it makes them feel. The human composer is the cobot’s interpreter of the emotional landscape unique to people. In a sense, the cobot reflects the human’s own emotional landscape back, which might be helpful as far as efficiency and creative inspiration go.
However, some people believe that using robots is antithetical to the very concept of creativity. A large part of creativity is the ability to conceptualize something new, including the potential to surprise even oneself. While the composer assistant has some cool potential applications, it also seems like it could be limiting. In Human Technology, Jeffrey Bardzell points out that making a formula for creativity is inherently restrictive:
“The ability to specify these relationships [between humans and computers] explicitly greatly facilitates the design of systems; yet that same explicit specificity also defines creativity a priori in cybernetic terms more friendly to computers than to the culturally diverse and rich practice of creativity.”
Even the researchers who created the concept for a virtual composer assistant have similar concerns. They are unsure, for instance, if the composer assistant would be capable of creating complex musical combinations. Furthermore, “emotional perception of music deserves particular attention... Perception is based on sensations, delivered to us by our senses... The attempt to tear the perception away from sensations is clearly untenable.”
Robots can perceive to the extent that they are programmed to perceive, but they can’t feel. “It could be argued that it is the emotional component that is decisive in the process of creating music, although it is purely subjective.” Then again, artists have always used tools to make their art; that’s what cameras, paintbrushes, and instruments really are. But unlike the UR10 in Einstein’s Chalkboard, a virtual composer assistant, which is programmed to align with the composer’s preferences, seems like more than just a tool.
Consider that the cobot could suggest an emotionally evocative arrangement the composer may not have thought of. Does that make the cobot an artist, or a crutch the composer leans on during creative blocks? Einstein believed that intention, practice, and action drive creativity; a cobot, however, has programming instead of those three components. It would also be a stretch to say that people fully understand the constructs of creativity, because we don’t.
What we do know is that when the audience was watching the UR10 cobot draw their portraits at SXSW, they were interacting with both the robot and the creative minds responsible for the presentation, even though they were only able to see one of the two. This mirrors the world we live in. As people advance technology further and further, the line between the creator and the created will be increasingly blurred.
Note: Images 1, 3, and 5 are copyright of Deeplocal.com and associated artists; images 2 and 4 are sourced from the public domain.