AGWA launches Aggie, the humanoid robot gallery guide
When I first heard an art gallery planned to introduce tours led by robotic guides, I was sceptical. But my scepticism gave way to affection as soon as I met Aggie, the robot in question. She tilted her head and blinked as I entered the room. Our eyes met. “Hello, Gillian,” she chirped. “Hi Aggie,” I replied, embarrassingly thrilled by the prospect of machine-human interaction, even though none of it would be spontaneously generated. Aggie isn’t artificially intelligent or powered by algorithms. Humans determine her every move through an interface on an iPad. Each utterance is programmed rather than learnt – she essentially reads from a customisable script. All this sounds gimmicky, and it probably is. Still, I found myself inexplicably charmed by this humanoid.
Aggie is a Nao robot, an autonomous, programmable machine developed by French robotics firm Aldebaran. Her operating software, known as Zora, is designed by Brussels-based company QBMT and customised by West Australian company Smartbots. Robots of Aggie’s kind are often used in schools and universities, and are increasingly deployed in the aged care and retail sectors. Aggie will be the first robot in the world to guide audiences through an art gallery, though early indications suggest her manner is idiosyncratic: one reporter from Guardian Australia found Aggie would wander away from paintings mid-commentary, abruptly interrupting human guides and forcing them to follow her to the next artwork. Soon she will guide the public through the Art Gallery of Western Australia’s exhibition of female Australian modernist painters from 1930 to 1960.
Aggie will be teaching art classes to children, which may explain why handshakes, high-fives, and the dance from South Korean rapper Psy’s 2012 viral hit Gangnam Style are built into her functionality. She is also capable of articulating the history, themes and significance of, say, Charles Blackman’s Alice in Wonderland series, although the depth of her knowledge is questionable. When Aggie talks about art, it’s less a lecture and more a game. In a demonstration of her abilities, she invites us to locate the Cheshire Cat, hidden somewhere in the painting. She pauses in anticipation that someone will point it out. “Thank you,” she says after a brief lull. “My arms cannot reach that far.” I also learn from Aggie the inspiration for the painting: a talking-book version of Lewis Carroll’s tale was played on a constant loop throughout the house to entertain the artist’s blind wife, giving Blackman his impetus. Aggie’s interpretation of the work was, well, pretty obvious. “In some ways it shows what it was like for Mrs Blackman to lose her sight, falling into darkness,” the robot says. It’s hardly a switchblade-sharp insight, but perhaps that’s the point.
Aggie is strangely beguiling. Her googly eyes, which comprise two high-definition video cameras, beam when she speaks. Her voice projects a childlike curiosity rather than the confident authority of a gallerist, but the disconnect between the subject matter and Aggie’s delivery of it is what makes her so compelling. (Being 58 centimetres tall undermines one’s capacity for intimidation and self-seriousness.) Her personality recalls a talkative primary school student without the brattiness. “She has a charm about her,” Smartbots chief operating officer Anitra Robertson says. “She’s programmed to be a little bit cheeky, a little bit mischievous.” This feature of her design allows her to “interact” with adults as well as children. Aggie has also been programmed to replicate many human mannerisms: she asks questions, pauses for effect, and is prone to gesticulation. She can wink, maintain eye contact and shake her head. She’s a bit of a klutz: when placed on uneven surfaces she sometimes topples over and says “ouch”. During a demonstration I watched as she overextended backwards in slow motion, her limbs spasming like a cockroach flipped on its back. “She’s not always steady on carpet,” Mike, the engineer, said. “I’ll have to help her.” He gingerly steadied her balance on a glass table. Later I watched as Aggie mechanically swivelled her pelvis to Elvis Presley’s “Hound Dog” and the Bee Gees’ “Stayin’ Alive”, while Mike periodically reached over to rectify her stance. I was entranced by this entertaining, if not quite sublime, spectacle. I took out my phone and filmed her antics. Later I would upload the footage – Aggie sitting on my lap, me grinning maniacally – to Instagram.
I wondered whether people who sign up for tours with Aggie want to see the art or the robot. Is there a meaningful metric to assess the nebulous goal of “public engagement”, or is it simply about ushering as many people as you can through the door? “There’s a widespread perception that going to an art gallery needs to be a serious experience,” AGWA director Stefano Carboni says. “Not serious as in boring, but because you need to be prepared – you need to know something about art to go. We want to break down those kinds of barriers.” Carboni says Aggie isn’t about to displace the gallery’s guides any time soon, but he does see the embrace of technologies such as Aggie as a mark of the gallery’s ingenuity, a way of being innovative and “staying relevant”.
Most robots are cute by design. “In the public’s perception, robots and artificial intelligence are a bit worrying,” De Montfort University’s Kathleen Richardson, a senior research fellow in the ethics of robotics, tells me. This is why robotic labs often promote animal or childlike robots, and why virtual assistants such as Siri or Alexa – which espouse a kind of involuntary enthusiasm and supplication – are overwhelmingly assigned a female gender. “It makes the idea [of robotics] easier to disseminate,” she says. Sometimes I wonder what it would be like to call on a robot or virtual assistant that was neither male nor female, but an animal or anthropomorphised object. Then I remember Microsoft Word’s Clippy, an anthropomorphised paperclip with expressive eyes, which was roundly despised before it was killed off in 2003. (It was seen by women in focus groups as “leery”, while others thought Clippy’s intrusive offers of “help” were patronising and tin-eared.)
Today’s engineers know better. Robots must be relatable rather than a threat to one’s autonomy. Programmed cordiality, no matter how bland, is a soothing and vital signal. Recall the disembodied timbre of robotic women on public transport, advising you of the next stop or reminding you to validate your smartcard. Such signalling suggests these feminised apps or robots are here to help or to provide a service, rather than assume authority, which reinforces gender stereotypes that are already deeply entrenched. (It’s significant that the creators of such technologies are predominantly male.) Meanwhile, it was recently announced that a law firm in the United States had enlisted an artificial-intelligence-powered attorney with a speciality in bankruptcy law. Its name, by the way, is Ross.
Few phenomena provoke as much fear as the spectre of automation. It may well render our labour worthless and, as a consequence, our “value” as workers in the free market. (An alternative view is that automation could liberate us from work and lead us to a utopia powered by fully automated luxury communism. I want to believe.) Aggie, of course, will not endanger human capital, and makes the case for robotic technology as a force for benevolence. She is physically endearing, not to mention wholly dependent on humans to operate. More crucially, her success will hinge on how we respond to her, although a cynic might suggest human emotions are ripe for exploitation by the companies that design such technologies.
Engaging with Aggie is as much about artistic education as it is about formulating an emotional response. Afterwards I reflected on the feelings she kindled within me. Should I have felt affectionate, maternal, indifferent, empathetic? How should one be in the presence of machines? One of technology’s most impressive feats has been its ability to elicit emotions from us when we least expect it. Which isn’t necessarily good or bad, but it does illustrate that our relations with technological objects are extractive. This runs both ways. We make demands of technology as it makes demands of us. Sometimes we forget it can spark our emotions as easily as it can manipulate them.
This article was first published in the print edition of The Saturday Paper on Jun 4, 2016 as "Interface to face".