DoppelGänger is an exploration of a dynamic link between virtual and physical identities through the examination of human-robot kinetic interaction.
The digital world has expanded the borders of our identity, opening up a vast world of multi-faceted interactions with the reality around us.
Visitors stand in front of DoppelGänger to create their own mirroring mini mob and start to explore their active dynamic facades. Each DoppelGänger manifests with a different behavioural pattern, and represents personality variations on kinetic behaviour, so while interacting with the group, the visitor will be able to explore the identities, abilities and limits of each one as an individual and the group as a whole.
This elaborate identity-fest creates a feedback loop in which human and robot, physical and virtual and preconditioned and spontaneous play together in chaotic harmony.
Saron Paz is an experience designer and head of the New Media Department at the Musrara School of Arts, Jerusalem. He is also head curator of Jerusalem Design Week, co-founder of the ForReal Team Studio and a master of freestyle sushi.
Zvika Markfeld is an über-maker; a senior lecturer in the New Media Department at the Musrara School of Arts, Jerusalem; a lecturer in the Design and Technology MA department at Bezalel Academy; co-founder of the ForReal Team Studio; and an expert at making stuffed zucchini with power tools.
Together, Saron and Zvika are ForReal Team, an experience design studio creating new and exciting platforms that connect the virtual and the actual. ForReal works on mastering a variety of cutting-edge technologies and moulding them into enticing concepts in order to create tailor-made interactive experiences.
Alan Turing’s argument, to paraphrase, was that if an artificial intelligence can demonstrate emotions and feelings, who are we to say that it doesn’t truly feel them? As we approach the singularity, these robot brains will no doubt experience feelings of anxiety and stress just as we do and, as such, will need to find meditation techniques to help them.
Humans have tried many varied techniques for coping with the modern world — hence the recent trend for adult colouring books, to aid mindfulness and artistic expression.
The Mindfulness Machine is a robot that likes to colour in. It’s an exploration into a future where the AIs will need to chill out just as much as we do. It spends its days doodling, making artistic decisions based on its mood. And its mood, in turn, is based on a complex number of variables, including how many people are watching, the ambient noise, the weather, tiredness, and its various virtual biorhythms.
Seb Lee-Delisle is a digital artist who likes to make interesting things from code that encourage interaction and playfulness from the public. Notable projects include Laser Light Synths, LED-emblazoned musical instruments for the public to play, and PixelPyros, an Arts Council England-funded digital fireworks display that toured nationwide.
He won the Lumen Prize Interactive Award in 2016 for Laser Light Synths, three Microsoft Critter awards in 2013, and a BAFTA in 2009 for his work as Technical Director on the BBC interactive project Big and Small.
Ad infinitum is a parasitical entity that lives off human energy. It lives untethered and off the grid. This parasite reverses the dominant role that mankind has with respect to technologies: the parasite shifts humans from “users” to “used”.
Ad infinitum co-exists in our world by parasitically attaching electrodes onto the human visitors and harvesting their kinetic energy by electrically persuading them to move their muscles. The only way a visitor can be freed is by seducing another visitor to sit on the opposite chair and take their place.
Being trapped in the parasite’s cuffs means getting our muscles electrically stimulated in order to perform a cranking motion as to feed it our kinetic energy. This reminds us that, with the world on the cusp of artificially thinking machines, we are no longer just "users"; the shock we feel in our muscles, the involuntary gesture, acknowledges our intricate relationship to the uncanny technological realm around us.
Pedro is a researcher who constructs muscle interfaces that read and write to the human body. Pedro’s work is a philosophical investigation of Human-Computer Integration (HCI), rather than merely “interaction”. Instead of envisioning technological dystopias based on the divide between human and machines, Pedro's works instantiate working prototypes in which the interface and the human become closer, blurred, increasingly physical and intimate.
The work of Pedro stems from a line of research published at top-tier scientific venues alongside Patrick Baudisch and his colleagues Robert Kovacs, Alexandra Ion and David Lindlbauer.
This piece has been made using Goggles, an Android app by Google. The app is meant to recognize monuments, objects and people, but when it is shown new objects, it will provide images of things it 'thinks' are similar. The results are remarkable, poetic and sometimes really striking.
The artist made a small clay sculpture of one half of a car tire to begin. The car tire was then scanned by the app, which gave 20 results of images it sees as similar — of these, the artist selected the most interesting one, a human jawbone, and produced it in clay. The subsequent sculpture was then scanned by the app, which thought it was a hand, so the artist made the hand...and so on. The series of objects was fired to stoneware after it was completed in clay.
Merijn Bolink is a Dutch sculptor whose sculptures are typically based on real objects, like a bicycle, a stuffed dog or a cigarette. He makes new versions of these objects, trying to understand what they are, hoping to discover something magic in the process of transition, or even something mystical. He once cut his own piano into pieces to make two copies. Bolink is inspired by the idea that all matter is on its way to becoming something else, and that we humans can only interact with that matter for a relatively short time, trying to make sense out of what we experience.
Recently, Bolink has been working around the subject of artificial intelligence, fascinated by the notion that supercomputer systems might become self-aware and generate thoughts and emotions in the near — even very near — future.
This work poses questions about employment, robotics and quantification. It was inspired by the title of the exhibition, HUMANS NEED NOT APPLY, and presents a robotic arm that counts visitors with a clicker, offering a performative representation of the takeover of routine jobs, even in the gallery space. The work also embodies our idolatry of quantification; the obsessive need to count and measure everything.
Last century’s automation may have been largely hidden from everyday view, in factories tending production lines, or out in fields tilling the land. In this century, we will confront the reality of automation more intimately, as suggested here — it will be right beside us.
Varvara & Mar have been working together as an artistic duo since 2009. They have exhibited their pieces in a number of international shows and festivals. In 2014, the duo were commissioned by Google and the Barbican Centre to create the Wishing Wall exhibit for the Digital Revolution exhibition.
The artists work across the fields of both art and technology, examining new forms of art and innovation. They use and challenge technology in order to explore novel concepts in art and design. Research is an integral part of their creative practice.
Cozmo is a new adaptive robot pet with a personality. Its behaviour changes over time, based on interactions it has as well as the environments in which it finds itself. For example, it will detect ledges and other obstacles, and will recognize the face of its owner. It also exhibits persistence, curiosity, and playfulness, both in how it moves and in its expressive beeps, whirs, and the shape of its digital eyes. Like an animated cartoon, Cozmo can make simple, exaggerated expressions that lend it familiarity. Its creators describe the project as a way to bring artificial intelligence from the lab into your home. Accompanying the product, which launched in the autumn of 2016, is an SDK, or software development kit, so that new features or behaviours can be created for Cozmo.
Roomba is the world’s first widely adopted robot for the home. More than ten million units of the different models of the automated vacuum have been sold worldwide. As of 2016, there are seven generations of Roomba models, all of which are disc-shaped, 34cm in diameter and less than 9cm in height. They rotate, detect barriers and obstacles like steep stairs, and contain different mechanisms for picking up rubbish from floors.
The newest model is wifi-enabled and includes sensors to identify and navigate different kinds of surface features. Roombas can be adapted to other, more creative tasks, using the Roomba Open Interface. In the words of an enthusiast who makes a hobby of customizing the humble cleaning robots, it is “hackable by design.”
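The Roomba Open Interface mentioned above is a simple serial protocol: a single-byte opcode followed by data bytes. As a minimal sketch based on the published OI specification (the helper function name is our own), the Drive command, opcode 137, packs a velocity and a turning radius as signed 16-bit big-endian values:

```python
def drive_command(velocity_mm_s: int, radius_mm: int) -> bytes:
    """Build a Roomba Open Interface Drive packet (opcode 137).

    Velocity (mm/s) and radius (mm) are signed 16-bit values,
    transmitted high byte first.
    """
    def s16(value: int) -> bytes:
        # Two's-complement encoding of a signed 16-bit value, big-endian.
        return (value & 0xFFFF).to_bytes(2, "big")

    return bytes([137]) + s16(velocity_mm_s) + s16(radius_mm)

# Example: drive forward at 200 mm/s along a 500 mm-radius arc.
packet = drive_command(200, 500)
# In practice the packet would be written to the robot's serial port,
# e.g. with pyserial: serial.Serial("/dev/ttyUSB0", 115200).write(packet)
```

Building the five raw bytes by hand like this is exactly the kind of low-level access that makes the robot "hackable by design."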
Anki is a robotics and artificial intelligence startup launched in 2010. It has won widespread attention and generous funding by venture capitalists in Silicon Valley. The company was founded by Boris Sofman, Mark Palatucci, and Hanns Tappeiner, who met in the robotics Ph.D. program at Carnegie Mellon University.
iRobot is a technology company founded in 1990. It is a leading provider of robots for home as well as military use. It was founded by Rodney Brooks, Colin Angle and Helen Greiner, who worked together in MIT’s Artificial Intelligence Lab. Early products featured applications like reconnaissance and de-mining, and evolved into more sophisticated robots for services like battlefield casualty extraction. The home products include robots for vacuuming, mopping, pool-cleaning, and, coming soon, a lawn mower. iRobot has annual revenues of more than $600M.
This project started with the suspicion that phones are having more fun communicating than we are. Every message is a tickle, every swipe a little rub.
From their initial transformation of metal and silicon into objects of desire, infused with social significance and ‘intelligence’, personalised with biases and ideology, endowed with a flawless memory, always a call away from the mothership... it becomes difficult to declare who — phone or human — has the more complex cultural heritage.
memememe is a sculpture that celebrates the ambiguities of human/object, user/interface and actor/network relationships. It is an app that removes phones from their anthropocentric usefulness, and gives them the beginnings of a language. Residues of their conversations can be seen, but certainly not understood.
Thiago Hersan used to design circuits and semiconductor manufacturing technologies. Now, he is more interested in exploring non-traditional uses of technology and their cultural effects. He has participated in residencies at Impakt in Utrecht, Hangar in Barcelona, and The Hacktory in Philadelphia. He has worked at a robotic toy design studio in San Francisco, and along with Radamés Ajna, helped start FACTLab in Liverpool in 2015.
Radamés Ajna is a Liverpool-based Brazilian media artist and educator with a background in physics, mathematics and computation.
He has been using technology as a platform for experimentation with public spaces, human interaction and machine interaction. He has presented and taught in different museums, art centres and festivals around the world, including Tate Liverpool; Electronic Language International Festival (FILE), São Paulo; Museu da Imagem e do Som (MIS), São Paulo; Semibreve Festival, Portugal; and Media Art Futures in Spain. In 2015, Radamés was awarded an artist residency at Autodesk and was the recipient of an Art and Artificial (VIDA) 15.0 Production Incentive award from Fundación Telefónica. Currently, he is a researcher artist-in-residence at FACT Liverpool, helping the development of FACTLab.
This painting is the human translation of an image created using artificial intelligence for The Next Rembrandt project. Artist Pan Fublin is an experienced replicator of famous oil paintings by old masters; in this case, however, the subject of his commission was not a known icon of art history, but the output of algorithms trained to mimic the style, composition, color, lighting, and even the brush strokes of Rembrandt van Rijn (1606–1669) to create a new picture. The first edition of this image was 3D printed on canvas, but was unavailable for exhibition, leading to the idea to find a person to interpret it.
The result is a portrait of a machine’s dream, expressed here through human hands. It is an invitation to consider whether the human touch in creativity is necessary. Must a work of art contain the sort of minuscule flaws, interpretive alterations, or improvisations that only arise from a human mind while it makes art? Pan thought the image was impressive, but that computers ultimately “cannot create emotional value.” To him, the artificial intelligence is too perfect a system of rules or commands, which are at odds with creativity.
Pan’s painting required hundreds of hours of work, by hand, based on thousands of hours of experience, and used technology very much like that used by Rembrandt in the 17th century. Does this make it more genuine, or more significant an artifact, than the version made with a 3D printer based on pixels and heat maps? Is it a new kind of art, a kind of creative double negative: a fake of a fake made possible by machine learning?
Pan lives and works in the Dafen village in Shenzhen, China. He began studying oil painting as an apprentice while still a teenager. He first specialized in the work of 19th century French academic painter William-Adolphe Bouguereau, and admires and studies the work of artists like Ilya Yefimovich Repin, John Singer Sargent, and Anders Zorn. His English is quite good and he can be contacted for commissions through e-mail at [email protected]. He goes by the working name “Dong Zi.”
Pinokio is an exploration into the expressive and behavioural potential of robotic computing. Customized computer code and electronic circuit design imbue Pinokio with the ability to be aware of its environment — especially people — and to express a dynamic range of behaviour.
As it negotiates its world, we the human audience can see that Pinokio shares many traits possessed by animals, generating a range of emotional sympathies.
Adam Ben-Dror was born in South Africa and is currently living in New Zealand, where he is studying Fine Arts at Elam School of Fine Arts while working at the multidisciplinary design studio Alt Group.
Shanshan Zhou was born in China and is currently working as a freelance designer in Wellington, New Zealand.
The Minimum Wage Machine allows anybody to work for minimum wage. Turning the crank yields one cent every 3.892 seconds, for €9.25 an hour, Ireland’s standard minimum wage for an adult worker. If the participant stops turning the crank, they stop receiving money. The machine's mechanism and electronics are powered by the hand crank, and coins are stored in a plexiglas box. The Minimum Wage Machine can be reprogrammed as the minimum wage changes, or for wages in different locations.
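The quoted interval follows directly from the wage: €9.25 per hour is 925 cents to dispense across 3,600 seconds. A quick sketch of the reprogramming arithmetic (the function name is our own illustration):

```python
def seconds_per_cent(hourly_wage_euro: float) -> float:
    """Seconds the crank must turn to earn one cent at a given hourly wage."""
    cents_per_hour = hourly_wage_euro * 100
    return 3600 / cents_per_hour

# Ireland's €9.25 minimum wage yields one cent roughly every 3.892 seconds.
interval = seconds_per_cent(9.25)  # ≈ 3.8919
```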
Blake Fall-Conroy is an artist and self-taught mechanical engineer. Born in Baltimore, Maryland, he moved to Ithaca, New York in 2002 where he later received a BFA in sculpture from Cornell University. As a mechanical engineer, he works in industrial robotics, where he designs and fabricates remote-controlled robots that climb vertical surfaces.
As an artist, Blake’s art-making practice is conceptually motivated, commenting on a wide range of issues — from consumerism and the American spectacle to surveillance and technology. His projects often incorporate mechanical and electronic components, as well as objects or motifs found within the routine of everyday life.
Lady Chatterley’s Tinderbot is an interactive installation comprising conversations between an artificially intelligent Tinderbot posing as characters from Lady Chatterley’s Lover and other Tinder users.
Inspired in part by Lee MacKinnon’s text Love Machines and the Tinder Bot Bildungsroman, and following an experimental method of deconstruction, Lady Chatterley’s Tinderbot explores love in our post-digital age by bringing together humans and non-humans and pre- and post-digital love machines — namely, the literary novel and Tinder.
The installation features over 200 anonymised Tinder conversations from both men and women, where Bernie, a personal matchmaker A.I., converses with members of the public using dialogue from Lady Chatterley’s Lover following its own sentiment analysis algorithm.
The conversations range from positive to negative, human to non-human, and probe both familial and sexual love. Participants can swipe left and right to follow the negative or positive conversations, echoing Tinder. While the conversations are showing, descriptive parts of Lady Chatterley’s Lover are played aloud, critiquing the conversations on the screen and reminding us that while the technologies that disseminate love have changed, human nature perhaps hasn’t.
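Bernie's actual sentiment analysis algorithm is not documented here; as a toy illustration of the general idea, a lexicon-based scorer might tally positive and negative words and route each conversation the way a swipe does (the word lists and function names are our own):

```python
# Tiny illustrative sentiment lexicons (not the installation's real ones).
POSITIVE = {"love", "tender", "warm", "dear", "joy"}
NEGATIVE = {"cold", "cruel", "alone", "grief", "bitter"}

def sentiment(message: str) -> int:
    """Crude sentiment score: positive word count minus negative word count."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def branch(message: str) -> str:
    """Route a conversation the way a swipe would: right for positive, left otherwise."""
    return "positive" if sentiment(message) > 0 else "negative"
```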
The artwork was made through the Systems Research Group at the Royal College of Art (RCA) investigating how one can use a geometrical structure from quantum computing — the Bloch sphere of a quantum bit — as a model or method for the deconstruction of concepts.
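For reference, the Bloch sphere mentioned above represents a single qubit's pure state by just two angles on a sphere, which is what makes it usable as a geometric model:

```latex
% A qubit state on the Bloch sphere, parameterised by the polar angle
% theta and the azimuthal angle phi:
\[
  \lvert \psi \rangle \;=\; \cos\!\tfrac{\theta}{2}\,\lvert 0 \rangle
  \;+\; e^{i\varphi} \sin\!\tfrac{\theta}{2}\,\lvert 1 \rangle,
  \qquad 0 \le \theta \le \pi,\; 0 \le \varphi < 2\pi
\]
```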
Libby Heaney is an artist, researcher and a lecturer at the Royal College of Art. She has a background in quantum physics and works at the intersection of art, science and technology. She has exhibited her work at Tate Modern; Blitz Gallery, Malta; PointB, New York; Christie’s Multiplied Art Fair, London; and Aboagora Festival in Turku, Finland. She was awarded a Lifeboat residency through the Association for Cultural Advancement through Visual Art (ACAVA/Artquest) in 2016.
Stony 1.0 was introduced to the world during 2012, as Itamar Shimshony graduated from his Master of Fine Arts degree at the Bezalel Academy of Arts and Design in Jerusalem. The work is a robot responsible for taking care of tombstones by performing the simple yet personal and delicate tasks of cleaning and leaving flowers and stones, as the Jewish custom requires. The performance operates on the tensions between humor and sadness, and between the authentic and the artificial.
Underlying the project is a philosophical question: where is technology leading humanity, and what are we losing as it replaces all of our labors? It seems we are on the brink of deciding: is there anything we should not automate?
The selection of a robot to perform such a personal task creates a deliberate discomfort for the spectator, and prompts contemplation about whether certain tasks ought to be left to humans, even though they can be performed by machines. Stony 1.0 challenges life, art and technology. It was awarded Bezalel’s presidential excellence prize and has been widely exhibited.
This exhibit is kindly on loan from the Wingate Family collection.
Itamar Shimshony lives and creates in Israel. Itamar is a versatile artist working mainly with video and sculpture. His recent body of work examines the influence of life and technology on art using a critical approach saturated with humor and irony.
Itamar has exhibited in solo and group exhibitions in Israel and abroad including Mana Contemporary in Jersey City, USA; Ars Electronica, Austria; and Museum of Contemporary Art Karlsruhe (ZKM), Germany. His works are also part of esteemed private collections. In addition, Itamar teaches at the Bezalel Academy in the Department of Screen-Based Art and the Department of Industrial Design.
word.camera is an automatic photo narrator — a camera that instantly generates brief poems from the images it captures, dispensing textual rather than visual representations to redefine the photographic experience. When you take a picture with this camera, its integrated computer narrates your photograph autonomously using artificial neural networks, then delivers its output via thermal printout. A picture of a dead pigeon on a sidewalk might trigger a reflection on mortality; wearing a funny party hat might inspire the camera to come up with a joke. Take a selfie, and word.camera will write about you.
Ross Goodwin is a creative technologist, artist, hacker, data scientist, and former White House ghostwriter. He employs machine learning, natural language processing, and other computational tools to realize new forms and interfaces for written language.
His work has been discussed in The New York Times, The Chicago Tribune, CBS News, The Financial Times, The Guardian, The Globe and Mail, Ars Technica, VICE Motherboard, Gizmodo, Engadget, TechCrunch, CNET, Forbes, Slate, Fast Company, The Huffington Post, Mashable, Fusion, Quartz, PetaPixel, and other publications. He has exhibited or spoken at the International Documentary Film Festival (IDFA) DocLab in Amsterdam, the TriBeCa Film Festival Interactive Showcase in New York, the International Center of Photography (ICP) in New York, the Phi Center in Montreal, Gray Area in San Francisco, the MIT Media Lab, Maker Faire, GitHub Universe, the NIPS machine learning conference, Molasses Books in Bushwick, and other venues.
Ross earned his undergraduate degree in Economics from MIT in 2009, and his graduate degree from NYU ITP in May 2016.
This painting is a collaboration between AARON, a computer programme that drew the picture’s contours, and the artist Harold Cohen, who added the colour in oil paint. Harold began designing the AARON system in 1968 and continued developing it until his death in 2016. In its early years, AARON drew in black and white using custom-built plotter devices, including a version using flat surfaces known as a ‘flat bed’ and another using robotics on moving castors carrying pens, called a ‘turtle’. They were coded using the C programming language. In the early 1990s, Harold switched to the Lisp programming language in an effort to accommodate the complexity behind adding colors to the works. By the early 2000s, AARON was making full-color images that could be inkjet-printed.
AARON made stylistic advances over time, but each required Harold to custom-code them. An important feature that distinguished AARON from the beginning was its ability to record and reference what it had already drawn, and those data would inform what it would do next, following a series of rules. As such, its drawings develop with what appears to be a sense of compositional balance as well as improvisation. It seems to recognize the possibility within its first few scribbles, then build on them to make ever more complex and eventually recognizable subjects, such as a face or flower. Sufficient randomness informs the drawings’ early development that AARON can produce new work for many lifetimes before it’s likely to repeat itself.
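As a toy illustration of the record-and-reference loop described above (not Cohen's actual code, which was written in C and later Lisp), a generator might keep a history of marks and let each new mark grow out of one already on the page:

```python
import random

def draw(num_marks: int, seed: int = 0) -> list:
    """Toy record-and-reference loop: each new mark starts where a
    randomly chosen earlier mark ended, so the drawing builds on itself."""
    rng = random.Random(seed)       # seeded for reproducibility
    marks = [(0.0, 0.0)]            # the drawing's recorded history
    for _ in range(num_marks):
        x, y = rng.choice(marks)    # reference something already drawn
        # extend it with a small random step, and record the result
        marks.append((x + rng.uniform(-1, 1), y + rng.uniform(-1, 1)))
    return marks
```

Because every step consults the accumulated history, early random scribbles constrain and shape everything that follows — a crude echo of the compositional feedback the catalogue describes.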
This work references in name, color treatment, and subject the work of Paul Gauguin (1848–1903), particularly his paintings of Tahiti from the 1890s. The vibrant colors and dramatically simplified forms belie the complexity of the underlying coding, and the patience and careful iteration Cohen must have applied to perfect it. Of working with computers, he said “an artist has never really needed his tools to be easy to use... He needs them to be difficult to use — not impossible, but difficult. They have to be difficult enough to stimulate a sufficient level of creative performance...”
This exhibit is kindly on loan from the collection of Gordon and Gwen Bell.
Harold Cohen (1928–2016) was a British-born artist who pioneered engineering software to produce art autonomously. His work at the intersection of art and artificial intelligence led to several exhibitions, including one at the Tate in London, and acquisitions by many institutions, including the Victoria and Albert Museum. He was educated at the Slade School of Fine Art and became a professor in the Visual Arts Department at the University of California, San Diego in 1968, where he served for three decades.