An exploration of real-time animated mesh deformation using musical instruments as controllers. How would it feel to control your avatar and interact with the virtual realm through a high-dimensional, dynamic control device like a violin?
This video (above) was made at the National Center for Supercomputing Applications at UIUC in February 2010. Ben Smith gives a brief demonstration of the animation's capabilities in response to various musical inputs (apologies for the bursts of noise in the audio track!).
The dynamic object on the screen, the purple mass, is controlled by the violinist. It thus becomes an "avatar" in the gaming sense: a virtual embodiment of the musician in the virtual domain. It is modeled as a series of interconnected springs, and the sound of the violin imparts forces on those springs depending on the particulars of the sound being produced. These mappings are clearest around 1:30 in the video above.
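A minimal sketch of the underlying spring model described above. The `Spring` class, its parameters, and the default stiffness value are all illustrative assumptions; the original system's implementation details are not published here.

```python
import math

class Spring:
    """One spring connecting two point masses (indices into a node list).

    Hypothetical structure: the real avatar's mesh topology and
    constants are not specified in the post.
    """
    def __init__(self, a, b, rest_length, stiffness=10.0):
        self.a, self.b = a, b
        self.rest_length = rest_length
        self.stiffness = stiffness

def spring_force(spring, nodes):
    """Hooke's-law force on node `a`; node `b` receives the negation."""
    ax, ay, az = nodes[spring.a]
    bx, by, bz = nodes[spring.b]
    dx, dy, dz = bx - ax, by - ay, bz - az
    dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-9
    # Force magnitude is proportional to displacement from rest length,
    # directed along the spring axis.
    mag = spring.stiffness * (dist - spring.rest_length)
    return (mag * dx / dist, mag * dy / dist, mag * dz / dist)
```

Audio-derived forces would then be summed with these internal spring forces each frame before integrating node positions.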
The mapping from sound to image is largely deterministic. Higher frequencies in the audio spectrum release the tension on the springs (causing the avatar to swell), while lower frequencies tighten them (shrinking the avatar). Pitches pull the avatar in different directions (mapped to the 12 vertices of an icosahedron): C is straight down, F# is up, A is to the left, etc. Louder sounds bring the avatar closer to the screen and quieter ones move it away. However, the precise result depends on the system's current state when the inputs arrive, producing a dynamic flow of image responding to the nuance of the musician's craft.
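The mapping described above can be sketched as follows. The vertex ordering, the spectral-centroid proxy for "higher frequencies", and the 4000 Hz normalization are all illustrative assumptions; the source only states that pitch classes pull toward icosahedron vertices (C down, F# up, A left), that high frequencies loosen and low frequencies tighten the springs, and that loudness controls depth.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def icosahedron_vertices():
    """The 12 vertices of an icosahedron: cyclic permutations of (0, +-1, +-PHI),
    normalized to unit direction vectors."""
    verts = []
    for a in (1.0, -1.0):
        for b in (PHI, -PHI):
            verts += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]
    return [tuple(c / math.hypot(*v) for c in v) for v in verts]

def sound_to_control(pitch_class, centroid_hz, rms):
    """Map one analyzed audio frame to avatar control values.

    pitch_class: 0..11 (0 = C), centroid_hz: spectral centroid in Hz,
    rms: loudness normalized to 0..1.  The vertex-to-pitch assignment
    and the 4000 Hz scale factor are guesses, not the original mapping.
    """
    # Each pitch class pulls the avatar toward one icosahedron vertex.
    direction = icosahedron_vertices()[pitch_class]
    # Higher spectral centroid -> looser springs (avatar swells),
    # lower centroid -> tighter springs (avatar shrinks).
    tension = max(0.0, 1.0 - centroid_hz / 4000.0)
    # Louder -> closer to the camera.
    depth = rms
    return direction, tension, depth
```

Since the result feeds a stateful spring simulation, the same audio frame produces different motion depending on the avatar's current configuration, which matches the state-dependent behavior the paragraph describes.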
This second video was made in my living room as an initial pitch of the idea of controlling a virtual representation through acoustic sound. Two different avatar shapes are shown: the first is developed further in the more recent video (top), while the second remains in the idea bin.