Art

The line between art, robotics and special effects is breaking down. All three are emerging from the ‘uncanny valley’. Academy Award-winning visual effects expert John Cox, robotics specialist Sam Kingsley and artist Shaun Gladwell discuss what this means. By Jaklyn Babington.

Art, robotics and technology

Shaun Gladwell’s virtual reality work ‘Orbital Vanitas’ (2017)
Credit: Courtesy the artist, Anna Schwartz Gallery and BADFAITH

Jaklyn Babington Maybe we can start with something that I’m particularly fascinated with. And, John, maybe you’re the best person to explain the uncanny valley – that moment when human beings become quite uncomfortable with simulations that come very close to the human form and human speech. This is something I’m sure you have to work around in the special effects industry.


John Cox This harks back to The Polar Express, Robert Zemeckis’s film where Tom Hanks was basically converted into a digital character and it freaked all the kids out. It was about 80 per cent okay, and it was off by enough that it looked really creepy. We’re going back to 2004, but they’ve come a long way from there. It’s called the uncanny valley and it’s that point where digital simulations – really specifically of humans – just don’t look right. And it was the point between a deliberate caricature and the attempt to get a 100 per cent perfect-looking human. And the bit that’s in the middle is the uncanny valley, and you don’t want to go there. Films that are around now get past it because of what’s happened with digital. I always compared it to a word-processing document. You’ve got the template and then, for the next one, you change some things and it gets better and it gets better and it gets better. [With] digital you’re always building upon what’s gone before, especially with animation, so they are very much able to learn from these mistakes. When you’re doing digital animation of a human, everyone hits the uncanny valley anyway, but they now have ways that they’ve got past that, so that, significantly, now you can’t even tell you’re looking at a digital simulation of a person.


JB That raises another question about your practice – are we still wanting to see stop motion or animatronics? Why wouldn’t digital animation take over from more of these handcrafted forms?


JC To a great extent it has. Animatronics is almost never used these days: they just go straight to digital. And I think a lot of that comes down to the stories that they’re given. People expect to see a dinosaur walk into shot, sing a song, do a tap dance, sprout wings, fly up into space, all in one shot. There’s only one way to do that, and that’s digitally. So, films like Kubo and the Two Strings and The Nightmare Before Christmas – the stop-motion animation films – are really written for that form now. They make use of that technology, and that look is what best serves the purpose of that particular film. Digital has come in and it’s rolled over everything. I think another reason is that directors who have come up through the ranks in the past 10 years are very young, and so digital to them is like breathing. There’s nothing very spectacular or special about it; it’s a tool that they use, and they understand digital a lot more than they understand practical.

In Babe, all of those animals were bred and trained to do very specific movements, so when you’re doing a shot with, let’s say, a little pig, it’s been trained to walk in from the right-hand side of the camera, stop, hit a mark, look up, look left, look right, do whatever, but that’s what it’s been trained to do and you have to shoot that. The director can’t say, “Look, that’s really good, but could you do it a little bit faster next time?” You can’t do that with animals; it always has to be planned out. That’s why there were 46 pigs in Babe – every one of them was trained to do something different.


JB From training pigs to training robots. Robots are now being used to educate children and to be companions to them as well. Sam, can you talk a little bit about some of the psychological nuances of having a robot, as opposed to, say, a human or even an animal, as a companion for children or adults?


Sam Kingsley From my experience with this robotics platform, NAO, we take it into a lot of schools, so I deal with a lot of different children interacting with it. I’ve seen children put their arm around a robot and see it as a friend and as a companion in that way. I guess it comes back to the fact that, since it is a technology we program to perform a certain function, that function can be hijacked by negative parts of who we are. If you program a robot to do something bad, it will do that unless we tell it that it is doing something bad. With that in mind, I think it’s similar to looking at how we engage, and how young people engage, with fake news. A lot of people are really worried about their kids not being able to tell the difference between what is legitimate news and what is fake news, but kids pick that up really fast and we don’t give them enough credit for that.

One of the benefits of having a robot that we can program and engage with is that it doesn’t judge you unless we program it to judge you. You can do the same thing over and over and over again and it’s not going to get bored or tired or sick of you; it’s just going to do what we tell it to do. And that’s got some fantastic uses. If we start looking at children on the autism spectrum, you could start modelling situations in which they become anxious or fearful, and model them in a way that the children can practise as many times as they like until they’re ready to go into the real world. I think it’s a really fascinating platform, but as with any technology it’s got potential for good and bad uses.
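Kingsley’s point – that a programmed robot behaves identically on the first attempt and the hundredth, with no boredom or judgement – can be sketched in a few lines. This is a hypothetical illustration only, not the NAO software or its API; the `PracticeRobot` class and its scripted scenario are invented for the example.

```python
# Hypothetical sketch: a practice scenario that replays identically every time.
class PracticeRobot:
    """Replays the same scripted scenario however many times it is asked."""

    def __init__(self, script):
        self.script = script   # the fixed sequence of prompts we programmed
        self.runs = 0          # how many times the scenario has been practised

    def run_scenario(self):
        self.runs += 1
        # The response on run 100 is identical to run 1: no boredom, no judgement.
        return list(self.script)

robot = PracticeRobot(["greet", "ask name", "wait for reply", "say goodbye"])
first = robot.run_scenario()
latest = first
for _ in range(99):
    latest = robot.run_scenario()

print(robot.runs)        # 100
print(first == latest)   # True – same behaviour every time
```

The design choice is the point: the robot has no state that accumulates frustration, so a child can rehearse an anxiety-inducing situation indefinitely and always meet the same response.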


JB Can you talk about NAO’s physical and aesthetic qualities? Sitting in an art gallery, it’s interesting that it’s chosen to look like a traditional-style robot from science fiction or cartoons from long ago. Was there a discussion about what NAO needed to look like or perhaps shouldn’t look like?


SK I wasn’t privy to that discussion, but from my understanding the reason it’s shaped the way it is is that it’s very clearly a robot but it has human elements to it, so that allows us to project human identities onto it and to relate to it quite well. One thing I’ve noticed is, if it does fall over, I see maternal and paternal instincts coming out in people and they’re suddenly like, “Oh, is it okay?” Because it has those human-like elements, it allows us to generate a connection, but at the same time we know it’s a robot, so we can pull back if need be. There are some other robotics platforms that are becoming more and more similar to how we look and, on a personal level, I think I would feel more comfortable interacting with a robot that I clearly know is a robot.


JB One of the things we talked about earlier today was about image production without the traditional frame. Shaun, could you talk about that?


Shaun Gladwell I was interested in that shift. It was one of the initial appeals of virtual reality. I was thinking, maybe we started off without a frame. Before the production of art became secular and detached itself from architecture or cave walls, we were working within the environment, as street artists do today. And there is no frame. The world is the frame, the street is the studio, and that’s incredible. I’m still attracted to that practice. But when a technology offers a shift, people start using words like “paradigm shift”. It’s a significant jump, where we have to see that it’s a completely new playing field. And maybe there’s an overestimation of these short-term effects of VR, because we’re in this consumer revolution of VR and it’s all very exciting.

But there’ll be some pretty wild long-term effects from VR, I think. And again, it’s good and bad. The technology itself is kind of benign: the doer will dictate the deed. You’re in this proscenium and the stage is a frame, or there’s this Euclidean geometry to a frame, and depth, when you try to organise an illusion within that frame. But when you have an illusion that doesn’t give you a frame, doesn’t give you a reference, you don’t have the ability to situate yourself within that simulation. That becomes interesting, because it’s a problem where you don’t have peripheral access to where you are, and that’s why I think augmented reality is amazing … I’m going to get really weird here. I’m really interested in this theory of convergence. Convergence theory. It’s good to talk about this, because it’s 500 years since Thomas More wrote Utopia, so I’m going to get really utopian, even though I recognise there are no functional utopias at the moment. We’re converging. There are organisations out there now that talk about this collective effort. Maybe even when people talk about Facebook being the biggest group of humans, above and beyond political demarcations or whatever, that’s pretty incredible. Even if you don’t like Facebook. I’m not even on Facebook, but I like the idea.


This article was first published in the print edition of The Saturday Paper on Nov 25, 2017 as "Byte futures".

Jaklyn Babington
is a senior curator at the National Gallery of Australia.