Wednesday 24 August 2011

Grounding Research: Inter<>face

So, I've been avoiding this topic so far, even though it's a big passion of mine.  Sure, my closest friends are sick of hearing this rant, but I think it's time to touch on it now..

The act of interacting with our devices (whatever they may be) is a constant learning experience - usually on the part of the user.  Rarely will a device adapt its behaviour for you (though this is beginning to change with advances in artificial intelligence and neural networks).  My problem, in this case, is that the language of interaction we are forced to learn is sometimes not the best alternative available these days - just the one that has been accepted as standard over time.

Take my passion - music, for example.  For thousands of years it was a strictly kinaesthetic playing and learning experience; then it was refined into a written language (by the church).  The written system makes sense, though, with pitch running vertically and time horizontally - packing a lot of data into a compact, visual form.

[Image: John Cage example score]
This persisted (probably due to the reach of the church, if nothing else) for many, many years until the advent of modernism and visionaries like John Cage - whose revolutionary music required an equally revolutionary way of scoring it.  The conventional system simply did not provide the right 'words' to describe the ideas he wished to convey, and so he set about re-imagining it.

Then along came computer music and, soon after, the need to sequence sound in time and pitch.  Advances in computing power allowed the text-based representations of audio in early tracker programs to be replaced by the ubiquitous MIDI piano roll - again a simple, grid-based pitch-up/time-forward system, and this time even more constricted to the grid.  Amplitude and pitch are sorely restricted in this system too, with a maximum of 128 discrete steps being recorded by even the best MIDI device.
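To make that 128-step restriction concrete, here's a minimal sketch (my own illustration in Python, not anything from a particular sequencer) of how a raw MIDI note-on message packs pitch and velocity into 7-bit values - anything finer than one of 128 steps simply cannot be expressed:

# Minimal sketch of why MIDI quantises so coarsely: a note-on message is
# three bytes, and pitch and velocity each get only 7 bits (0-127).

def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """Build a raw MIDI note-on message. Values outside 0-127 are clamped,
    so any finer nuance is lost to the 7-bit resolution."""
    status = 0x90 | (channel & 0x0F)       # note-on, channels 0-15
    pitch = max(0, min(127, pitch))        # 128 discrete pitches
    velocity = max(0, min(127, velocity))  # 128 discrete loudness steps
    return bytes([status, pitch, velocity])

# A crescendo from silence to full volume can only ever take 128 values:
msg = note_on(channel=0, pitch=60, velocity=100)  # middle C, fairly loud
print(msg.hex())  # '903c64'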

Cycle forward to 2011, today, and we are still fighting our 30-year-old interface for the creation of music... this piano roll that has somehow become the de facto standard of computer music needs to be scrapped, rethought and reinterpreted.

Gestural input interfaces have been around for a while - though Leon Theremin's device, dating back to the early twentieth century, was probably a little ahead of its intended audience.  Roland introduced the D-Beam controller into its devices in 1998, and Korg's Kaoss Pad arrived a year later, in 1999... but these devices still only manipulated more parameters of the grid.

The time for the reinterpretation of interface has come just now, with consumer electronics coupled with open design and community interest.  Johnny Lee's seminal Wiimote hack provided a low-cost sensory input device, paving the way for much dance-controlled music and movement-based composition.  A plethora of sensors and actuators is now available, and cheap to procure, for custom-designed interfaces to the world of computer music - see the sketch below.
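As a rough illustration (my own sketch, not any particular project's code), hooking a cheap sensor into the computer-music world can be as simple as scaling its readings into a MIDI control-change message.  The read_tilt() function here is hypothetical - it stands in for whatever Wiimote, accelerometer or distance sensor you happen to wire up:

# Rough sketch: turn a sensor reading into a MIDI control-change message.
# read_tilt() is a hypothetical stand-in for your own sensor-reading code.

def read_tilt() -> float:
    """Hypothetical sensor read; returns a value in the range -1.0 .. 1.0."""
    return 0.25

def control_change(channel: int, controller: int, value_0_to_1: float) -> bytes:
    """Scale a normalised 0.0-1.0 value into a 7-bit MIDI CC message."""
    status = 0xB0 | (channel & 0x0F)
    value = max(0, min(127, int(round(value_0_to_1 * 127))))
    return bytes([status, controller & 0x7F, value])

# Map tilt (-1..1) onto filter cutoff (CC 74), rescaled to 0..1 first:
tilt = read_tilt()
msg = control_change(channel=0, controller=74, value_0_to_1=(tilt + 1) / 2)
print(msg.hex())  # 'b04a4f'

Of course the point of the rant is that even this kind of mapping still lands back on the 128-step grid - but at least the gesture itself is no longer trapped in the piano roll.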

Andy Huntington is one of the new visionaries; his TapTap and BeatBox projects are great examples of musically interacting with machines...

But surely the nicest re-imagining of the musical interface so far has to be the Reactable - what a beautiful, creative and collaborative way to make music! And how intuitive and non-linear these interactions can be - removing all the non-essential functions certainly streamlines the interface in this case, and makes the interaction more inspiring and rewarding.  This, in turn, feeds back into itself, allowing the user to enjoy the creative process more.... 

...which is what I dream of when I sit down in front of the drab, utilitarian grey of the Logic desktop!


</rant>
   -&c
