There’s a celebration of all things uncanny (Unheimlich) at Spike Island, Bristol, on Friday 21 November, when I’ll be performing a live score to Anna Franceschini’s film from 2010, shot in the Pianola Museum, Amsterdam.
On the night, I’ll also be performing a live set, one which includes a tentative first outing of a new automaton I’m building for stage use. The machine moves a lipstick and a mirror as it traces a fragment of a daily ritual. This is the first in a series of machines I’ve been making for Trace, a project exploring new, poetic approaches to music and robotics.
Trace is undoubtedly one of the hardest robotic projects I’ve attempted. I’m very grateful for the input of fellow technologists Jens Meisner and David Haylock as I’ve been working on this project. I’d also like to thank Victoria Tillotson and colleagues at the Pervasive Media Studio, Bristol, and the Motion Capture Lab, Falmouth AIR, for giving me their time and support and for hosting me during an ongoing residency. Thanks too to Spike Island for allowing me to show work in progress and Arts Council England for funding this work – research I’ve found both challenging and fascinating.
The Spike Island event, Unheimlich Manoeuvres, starts at 20:00 on Friday 21 November. Hákarl & Friends are also on the bill and Anna Franceschini’s films will be screened all night in the galleries. Tickets £8 (£6).
Here’s a brief update on the research, for those of you into the technical details:
In Trace, I’m taking hairbrushes, bowls, mp3 players, lipsticks, books and other discarded objects and reanimating them, so they retrace the actions of their former owners. The finished works, now in development, will be old-school automata, animated with servos, cams, cranks and other mechanisms. They don’t have caricatured motion – the kind you’d see on my ventriloquial sidekick Hugo. Each piece is a high-fidelity replay of a human action, recorded using 21st-century motion capture techniques. You can read more about the thinking behind this project here.
Trace is based around found objects, which I’m mechanising. I’m deliberately avoiding mechanisms which immediately resemble human arms, legs and spines – there’s no conventional six-degree-of-freedom robot arm, for instance. My aim is to create motion that’s humanlike, even though there’s nothing obviously anatomical about the mechanism that produces it. If I can pull this off, my hunch (i.e. hope!) is that it will create a disturbing cognitive dissonance – a sense of the uncanny, as the effect will seem humanlike yet inhuman.
As well as using servos and stepper motors, I’ve gone back to the textbooks in search of alternative mechanisms that might fit the non-humanoid brief. It’s been a fascinating and humbling process, viewing Geneva drives, lantern wheels and other elegant mechanisms for creating complex motions. Many of these were devised by imaginative engineers who were working before the electric age.
Increasingly, as this project has progressed, I’ve become interested in mechanical cams. A cam can be thought of as a motion storage device from the pre-electric age. Gifted makers, such as the eighteenth-century watchmaker Pierre Jaquet-Droz, used cams to store and play back detailed motion with remarkable fidelity. One of Jaquet-Droz’s most sublime machines, The Writer, could scribe complete words, using letters encoded in cams.
In my experience, a device is all the more deliciously unsettling when you can see the mechanism – the guts of the machine – as it works (interestingly, one translation of ‘Unheimlich’ is unhidden – an idea I discuss in this article for The Wire). And I’m really struck by how a cam reveals future motion, in a way a sequence of motor signals, controlled by a software algorithm, cannot. Unlike motor commands, cams aren’t easily editable (unless you have a hacksaw) and they can typically store only a few seconds of motion. But I’m enjoying the challenge of working within these constraints – they’re forcing me to focus on animating just one salient fragment of each event.
Rapidly prototyping cams
I’m trying to incorporate cams in at least some of these works, and to apply some 21st-century tech to make this job a little easier. With David Haylock, resident technologist at Pervasive Media Studio, I’ve been creating software that automates a few stages of the cam design process. Working together, we’ve written a suite of simple OpenFrameworks tools which help us to rapidly prototype cams.
We can now take displacement data from a binary mocap file and wrap it around an axis to make a simple cam design. I’ve also written software which lets you run these designs to move virtual cam followers (see video below). David has also created a routine to take cam design data in OpenFrameworks and turn it into a .dxf file, so the design can be sent straight to the laser cutter. More work is needed on these tools, and on stitching them together. I still need to write a smoothing function, for instance, which irons out the motion to fit the resolution of the follower. But this has been a fascinating and helpful line of enquiry – we’ll be sticking our code on GitHub when we think it’s ready.
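For the technically curious, the core of the wrapping step is simple to sketch. This is a simplified Python illustration of the idea, not our actual OpenFrameworks code: each displacement sample becomes a polar point on the cam’s edge, so one revolution of the cam replays the whole trace, and a crude moving average stands in for the smoothing function I’ve yet to write.

```python
import math

def smooth(displacements, window=5):
    """Moving average -- irons out jitter the follower can't physically track."""
    half = window // 2
    out = []
    for i in range(len(displacements)):
        lo, hi = max(0, i - half), min(len(displacements), i + half + 1)
        out.append(sum(displacements[lo:hi]) / (hi - lo))
    return out

def displacement_to_cam_profile(displacements, base_radius=30.0, scale=1.0):
    """Wrap a 1-D displacement trace around an axis: each sample becomes a
    point on the cam's edge, with follower lift encoded as extra radius.
    One full revolution of the cam replays the whole trace."""
    n = len(displacements)
    profile = []
    for i, d in enumerate(displacements):
        theta = 2.0 * math.pi * i / n      # angle around the cam axis
        r = base_radius + scale * d        # lift added to the base circle
        profile.append((r * math.cos(theta), r * math.sin(theta)))
    return profile

# e.g. a fragment of motion sampled as vertical displacement (mm)
trace = [0.0, 2.0, 5.0, 9.0, 6.0, 3.0, 1.0, 0.5]
points = displacement_to_cam_profile(smooth(trace), base_radius=30.0)
```

The resulting outline points are what would then be exported for the laser cutter.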
Cam prototyping – early work in OpenFrameworks (Sarah Angliss and David Haylock)
Capturing people’s actions, as they go about everyday tasks, remains the hardest problem to crack in this project. It’s one I can only partially address in a project of this scope. In an ideal world, a motion capture process would be so unobtrusive that volunteers wouldn’t be hampered by it in any way as they demonstrated how they used hairbrushes, lipsticks, soup spoons and so on. Sadly, the reality is a long way from the ideal.
Earlier this year, I went to the Motion Capture Lab at Falmouth AIR. There, I was fortunate to work with the animator Jens Meisner, who is also the resident technician in the motion capture studio. Jens and I identified several actions and recorded them in the studio at very high resolution. This high-fidelity data will be used to make my first automaton.
Unfortunately, this isn’t a technique I can replicate outside the studio, as multiple, expensive IR cameras are needed, along with analysis software and reflectors (see photo, right, of me in one of Falmouth’s mocap suits).
Back in the Pervasive Media Studio, David and I have experimented with mocap using the gamers’ system Kinect. This wasn’t too successful as people’s limbs and hands were often obscured by the objects they were moving. Using cheap IMUs, I can get an approximate idea of the tilt of objects (and a first approximation of velocity, if they’re moving slowly). I’ve also trialled visual mocap algorithms, using optical flow, coloured target objects and so on. In the next stage of the project, I’ll attempt to track objects by triangulating data from these three techniques.
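The tilt estimate from an IMU, incidentally, boils down to a couple of lines of trigonometry. Here’s the standard textbook calculation from an accelerometer reading, sketched in Python rather than lifted from my own code – it’s only trustworthy when the object moves slowly enough that gravity dominates the reading:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (radians) from a single accelerometer reading.
    Assumes the object is near-stationary, so the measured acceleration is
    (almost) pure gravity -- a fast-moving object breaks this assumption."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return pitch, roll

# object lying flat: gravity falls entirely on the z axis
pitch, roll = tilt_from_accel(0.0, 0.0, 9.81)
```

This is why the IMUs only give me a first approximation: once the object accelerates, the gravity vector is contaminated by the motion itself.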
Example of data collected in Falmouth AIR mocap lab, with thanks to Jens Meisner
Two constraints of the project do make the mocap a little easier: 1) I’m trying to trace objects, not people – and there’s scope to rig up the objects with sensors or visual targets in advance. 2) I’m not doing this in real-time so I can clean up any data dropouts or obvious jitters in the movement.
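That second point is worth a quick illustration. Because everything is replayed offline, a dropout can simply be bridged by interpolating between the good samples either side of the gap. Here’s a minimal Python sketch of the idea (not my production code):

```python
def fill_dropouts(samples):
    """Bridge missing samples (None) in an offline mocap trace by linear
    interpolation between the nearest good samples on either side."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                      # find the end of the gap
            left = out[i - 1] if i > 0 else None
            right = out[j] if j < len(out) else None
            if left is None:
                left = right                # gap at the very start of the trace
            if right is None:
                right = left                # gap at the very end of the trace
            gap = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / gap       # fractional position within the gap
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

cleaned = fill_dropouts([1.0, None, None, 4.0, 5.0])
```

Nothing clever – but it’s a luxury a real-time system doesn’t have.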
As you can see from the above, Trace is a simple idea on paper which actually requires some complex technical feats – it touches on many emerging areas of motion tracking, psychology and robotics. It’s required us to seek solutions to some knotty problems, in the hope of getting the desired artistic effect. I’m still not sure how this project will ultimately develop – or even how the first prototype will turn out – but it’s been great to have the luxury of time to think about these issues. I have a feeling the problems we’ve encountered will continue to perplex me and my colleagues in the months that follow. And the work we’re doing here will influence my robotics technique for some years to come.