Stelarc, the famous performance artist known for his work on manipulating the human body and cybernetics, visited the Foundation for Art and Creative Technology (FACT), based in Liverpool, earlier this month to talk about his recent projects. To our good fortune we managed to land an interview with Stelarc in order to discuss the intersection between his work and the field of Physiological Computing.
We were interested in talking to Stelarc for several reasons. First of all, his performance pieces have involved alternate interfaces where real-time EMG from the leg muscles is used to drive a third arm. He has also used technology to extend the capabilities of the body, as in Exoskeleton and Muscle Machine, and has created performance art where remote viewers are allowed to activate the muscles of his arm over the internet. Whilst his work resides in a different world to physiological computing research, some themes in his pieces, such as muscle interfaces and extending the nervous system, resonate with our own interests. In addition, by occasionally turning over control of his body to anonymous people on the internet, Stelarc articulates a number of nightmarish scenarios that are associated with body technology, such as loss of agency, privacy and control. These fears may be particularly pertinent to this kind of technology; see this earlier post for more explanation.
With respect to the development of physiological computing systems, we’ve seen a number of examples of convergence where this kind of technology is used in the performing arts. For example, we’ve seen the Heart Chamber Orchestra and the Multi Modal Brain Orchestra with respect to musical performance. There is also the possibility of creating a live profile of audience reactions to events on stage, or of recording your own personal physiological reactions, which we attempted during a theatrical performance.
We sat down to chat with Stelarc with all these connections and possibilities for convergence in mind.
Question 1: Your view of the body as an obsolete entity ripe for new interfaces remains a minority view. For many, the body is an intimate and private realm whose activities should remain at their discretion and under their control. Body technologies are often viewed with some suspicion because they are inherently invasive and linked to surveillance functions or medical operations. What is the biggest barrier to body-technology hybrids? Suspicion of the technology or sanctity of the body?
Stelarc: “Of course, both of those reasons add up to a reluctance. Often we only do invasive techniques on our bodies when it’s medically necessary, when our bodies have been traumatised [..] to do this by choice would be something else. And certainly I don’t have any religious views, but others might. In terms of a secular approach to the body, up until now we’ve considered the skin as the bounding of the self and the beginning of the world, in a sense what’s contained in your body. But of course we know that’s a convenient metaphysical construct; if you’re Wittgenstein you can maybe want to speak about it in a different way, you can make an external construct about thinking being located outside of the body, for example. So, whether you’re driven as a body through the imagining of an internal agent, or whether your behaviour is largely organised by cultural and social conditioning, it’s a problem of managing one extreme or the other. The body isn’t purely reduced to a physical body. If it was, a dead body would be as good as a living one. On the other hand, there’s no need to postulate internal essences. The more and more performances I do, the less and less I think I have a mind of my own, nor any mind at all in a traditional metaphysical sense. There’s nothing inside my head apart from squishy tissue; there’s a lot of empty spaces, there’s a circulatory system and an electrical nervous system.
So I think we have to consider alternate body constructs, and of course, the way that we frame what a body is may also frame the philosophy or research that we do. So in a sense, you making your bodily monitoring public [Note: Stelarc is referring to Kiel’s body-blogging experiment here] means that you’re contributing to a social construct of what your body is and how your body performs. So it goes beyond an individual urge for self-improvement… although you may need a lot of self-improvement (laughs). I did biofeedback work in the late sixties and early seventies, and I never thought it was fully realised as a strategy; I think there are still lots of things that we can learn from externalising or visually making manifest our bodily functions.”
Question 2: In projects like Extended Arm and Ear on Arm you focus on the more extreme expressions of physiological computer interaction; for example, in Extended Arm you gave a remote user autonomy over the muscle control in your arm. Understandably, such performances are not aimed at the general public (e.g. due to the technical and ethical issues involved). In an ideal world, what type of physiological interactions would you like the general public to experience, and how do you think they would benefit from them?
Stelarc: “Well I think there’s this simplistic idea that technology is enabling. That’s the kind of consumer capitalist ideal of technology: that you purchase it, it somehow improves and enables you, like with every new gadget you’re enabled [to a greater extent]. But of course one can also construct technology as highly destabilising, in that with every new technology there are new bits of information and images generated that force us to reconstruct our paradigms of the world and our paradigms of who we are.
That’s kind of important to understand, I think: that technology is not simply enabling, it’s also destabilising, and we should see it incorporating the realm of the accident and the realm of the unexpected. Often the technology might be invented for one purpose and in fact it serves a completely different purpose. I mean, who was to anticipate electricity, or telephony, and the fundamental effects they would have on society, and so on.”
Question 3: If this technology enables our capabilities whilst simultaneously destabilising our view of the self and the body, physiological computing has the potential to divide the ego – to create divergent perspectives on the self (e.g. my blood pressure is sky high but I feel totally relaxed). Do you see this as a barrier to mainstream acceptance of this kind of technology?
Stelarc: “Well, I had an experience like that which jeopardised the possibility that I would do a zero gravity flight. Actually I did some training at Star City outside of Moscow with a group of other people, and the whole purpose was to get on a micro-gravity flight. And I had a medical examination, which everybody does, and they discovered high blood pressure. Now that was unusual for me, but what had happened was that I flew from Los Angeles to Melbourne and then a couple of days later flew to Moscow via the Emirates. I had every reason not to be in a normal bodily condition. And not only that, they had an interpreter, and when the doctor did the examination, the interpreter said ‘look, I’m very sorry but you’ve got high blood pressure and you can’t go on the micro-gravity flight tomorrow morning’. And it was a real shock because we’d been training for days, and I tried to joke with the doctor, I said ‘look, we’ve got this beautiful Russian interpreter, how do you expect my blood pressure to stay normal?’ (laughs) But unfortunately he was not moved; the interpreter was charmed, and I did get an email from her afterwards. Then I discovered later, when the group returned (I wasn’t allowed to be on the flight), that half of them admitted to taking aspirin the night before because they were alert to this [issue]. I had such a mad schedule; I arrived, did the training and didn’t consider these problems that might have impeded my inclusion on the flight.”
Question 4: What was the experience like of having your arm’s agency given over to a remote user in the Extended Arm project? From my early experiences with body blogging [Note: referring to Kiel], my friends tended to use personal information to mischievously manipulate my psychophysiological state; as remote users, how did they interact with your motor functions – were they playful, inquisitive, or perhaps something else?
Stelarc: “Well of course the system was so designed that you could only activate the muscles that had electrodes on them, obviously. And they couldn’t turn up the voltage higher than 50 volts. So they were constrained to producing the large arm movements that made up part of the performance. It was really strange to watch your arm move when you’ve neither initiated that movement nor are you yourself contracting your muscles to do it, and because you’re squirting 50 volts through your skin to the nerve endings of the muscles, that’s going to be far beyond the microvolts and millivolts that normally actuate your body internally. So you have no way of resisting that programming.”
Stephen: “You mentioned [in your FACT talk] some people did some repetitive things to make sure it [the muscle control interface] was working.”
Stelarc: “It was quite apparent actually. This was happening 6 hours a day for 2 days. So I began to characterise the remote agents by what sort of programming they were doing, and I characterised the ones who kept repeating the same movement as malicious, because it was difficult and tiring for me to do the same movement over and over again. Often people were just curiously pressing different muscle sites and seeing what would happen. And some of them were just [activating] a few things, and some of them were engrossed in multiple, more complex movements. So there was the person who really didn’t want to get involved too much, with a few presses; the more curious and more complex programming; the malicious agent; and of course I knew that some people were reluctant even to do anything at all.
But I could see the face of the person who was programming me – there was a web camera on the computer [running the muscle activation interface] so I could see your face. So it created this kind of intimacy without proximity, without skin contact, and also in a sense I saw the agent that was actuating me. On the one hand it was reassuring, and interestingly, I met some of those people in Paris some weeks later at a video festival. Of course I didn’t recognise them immediately, but they came up and said they were at the installation; I didn’t really recognise anyone, but they assured me they were some of the people [who acted as agents].”
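As an aside for readers curious about the mechanics: the constraints Stelarc describes – remote agents could only activate electrode-instrumented muscle sites, and the stimulation voltage was capped at 50 volts – amount to a simple validation layer between the internet and the body. A minimal sketch of that idea (entirely our own illustration, not Stelarc’s actual software; the site names and function are hypothetical) might look like this:

```python
# Illustrative sketch only: validating a remote stimulation command
# against hard constraints -- only instrumented muscle sites may be
# activated, and voltage is clamped to a safety cap.

# Hypothetical set of muscle sites fitted with electrodes.
INSTRUMENTED_SITES = {"deltoid", "biceps", "flexor", "extensor"}
MAX_VOLTAGE = 50.0  # hard safety cap, in volts


def validate_command(site: str, voltage: float) -> tuple:
    """Reject commands aimed at non-instrumented sites; clamp voltage."""
    if site not in INSTRUMENTED_SITES:
        raise ValueError(f"no electrode on site: {site}")
    # Clamp rather than reject, so remote agents always stay within limits.
    return site, max(0.0, min(voltage, MAX_VOLTAGE))
```

The key design point, reflected in Stelarc’s account, is that safety lives on the body’s side of the link: whatever a remote agent sends, the local system enforces the cap.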
Question 5: You refer to the workings of the body as mechanical – in the sense of mindless and automated. A number of thinkers have also used mechanical metaphors to describe behaviour and thinking. Do you think body technology and biofeedback applications have the potential to enhance human awareness and self-regulation to make us less mechanical in this regard?
Stelarc: “Here we have a problem of language, and we have to qualify what we mean by mechanical. We’re using mechanical in terms of simple machines, possibly gears and cogs – when we think of mechanical, we tend to think of it like that. But of course, the electronic systems, servo-motors and computer programming that might run a machine form a much more complex system with feedback loops. There’s Marvin Minsky’s idea of Telepresence, but Susumu Tachi, a Japanese researcher, one-ups Marvin Minsky: instead of talking about Telepresence, he talks about Tele-Existence. So, it’s not just that you feel you’re where the robot is, but if the feedback loop between you and the remote robot is adequate – you see what the robot sees, it does what you prompt it to do, and this feedback loop is fairly instantaneous, or at least has an interface that does forward masking (there is a time delay) – then effectively the space between the body and the robot collapses, and the robot becomes an end-effector of your body. And with those more qualitative systems of interaction between body and machine, we can no longer refer to them as master-slave mechanisms where you have a controller and a robot. For example, if a robot is remotely situated and the situation in its location changes – say it’s in a hazardous nuclear facility and you’re somewhere else – then that robot has to be capable of intelligent disobedience. If you command the robot to do something, and by the time it receives your signal the situation has changed and the robot senses that it’s dangerous to follow this action, it decides to intelligently disobey you. It’s like what a guide dog does: you may want to cross the street, but the guide dog will not cross if it’s not safe.
This notion of intelligent disobedience in tele-robotics and the idea of Telepresence are both important. So as we become these extended operational systems, we have to make these remote agents work much more effectively as end-effectors of our bodies [..] I see it more as an artistic gesture, but unless the system is effectively operational, it’s not interesting anyway. I’m not interested in speculating about things and coming up with ideas unless I can actualise them.”
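The guide-dog analogy can be made concrete. A minimal sketch of intelligent disobedience in a tele-operation loop (again our own illustration, with hypothetical names and thresholds – nothing here comes from Stelarc’s systems): the remote robot re-checks its local environment when a command arrives, and refuses commands that are no longer safe by the time they are received.

```python
# Illustrative sketch of "intelligent disobedience" in tele-robotics:
# the remote robot runs a local safety check at execution time, not at
# command time, and disobeys if the situation has changed.

from dataclasses import dataclass

STALE_AFTER = 2.0  # seconds; hypothetical threshold for a stale command


@dataclass
class Command:
    action: str
    issued_at: float  # operator-side timestamp, in seconds


def execute(cmd: Command, now: float, is_safe) -> str:
    """Act on a remote command only if it is fresh and still safe.

    `is_safe` is a hypothetical callable wrapping the robot's own sensors.
    """
    if not is_safe(cmd.action):
        # Local sensing overrides the operator -- the "guide dog" case.
        return f"disobeyed: {cmd.action} unsafe at execution time"
    if now - cmd.issued_at > STALE_AFTER:
        # The world may have changed since the command was issued.
        return f"deferred: {cmd.action} awaiting operator confirmation"
    return f"executed: {cmd.action}"
```

The point the sketch captures is Stelarc’s: authority in the loop is shared, so the remote machine is an end-effector rather than a pure slave device.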
—
We thank Stelarc once again for fitting us into his busy schedule; his talk and later discussion with us at Physiological Computing were immensely enjoyable and stimulating.
—
Stelarc is an artist and Chair in Performance Art, Brunel University West London, and Senior Research Fellow at the MARCS Auditory Labs, University of Western Sydney. In 1997 he was appointed Honorary Professor of Art and Robotics at Carnegie Mellon University. In 2003 he was awarded an Honorary Doctorate by Monash University. Presently he is participating in the Thinking Head research project, and he is surgically constructing and growing from stem cells an extra ear on his arm that will be electronically augmented and internet enabled. His artwork is represented by Scott Livesey Galleries, Melbourne. In 2010 he was awarded the Hybrid Arts Prize at Ars Electronica.