
29 May 2011, by Roland Cahen

NIME 2011 workshop Oslo

Audio-graphic Modeling and Interaction Workshop @ NIME2011

See here for event details

(Introduction to the workshop by Roland Cahen)

 

One or two questions about audio(-)graphic modeling and interaction

Why invent a neologism (audiographic) when the already glorious concept of audio-visual seems so adequate?
Relations between audio and visuals were questioned long ago in the arts, cinema, perception, cognition, synaesthesia... But what about the relation between sound and images in everyday life?
Does it even make sense?

Why discuss modelling and interaction, when it seems so obvious that they are two sides of the same coin?

Virtual reality and digital interaction are now commonplace, yet still expanding.

The reality vs. representation paradigm seems comparable to other pairs of concepts such as physical vs. abstract, or theory vs. praxis, as in Karl Marx.

But in fact this paradigm shifts slightly once we become conscious that simulations and digital objects are no longer representations, but objects in themselves.
The question of dematerialization and re-materialization is modifying our comprehension and our experience of the world, if not the world itself. Images can now be real objects. Vision can easily fuse with words; sounds can now be watched, just as images can somehow be represented in sounds.

In our physical experience of the everyday world, it is the actions of physical objects that most often produce synchronized visual and sound effects. But our ways of representing these events are inherited from the past (the history of representation): we usually create flows of images on one side and sequences of sounds on the other, in separate workflows, and link them together afterwards using complex synchronization systems. But it could be done another way!

There seem to be three main sets of approaches.
The classical audio-visual parallel approaches, where an image event triggers a sound event or vice versa: this is developed in works such as Michel Chion's Audio-Vision for the cinema, and is mainly used in audio-visual production and in interactive applications based on audio files and samples.
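To make the parallel approach concrete, here is a minimal sketch (my own illustration, not taken from any of the works cited): a toy visual simulation in which the image event, a ball touching the floor, simply triggers playback of a prerecorded sample. The scene and all parameter values are invented for the example.

```python
# Parallel audio-visual approach: an image event triggers a sound event.
# Everything here (the bouncing-ball scene, rates, envelope) is
# illustrative only.
import numpy as np

SR = 44100  # audio sample rate in Hz

def load_sample(duration=0.2, freq=880.0):
    """Stand-in for a prerecorded audio file: a short decaying sine."""
    t = np.arange(int(SR * duration)) / SR
    return np.sin(2 * np.pi * freq * t) * np.exp(-30 * t)

impact_sample = load_sample()

def game_loop(n_frames=120, fps=60):
    """Toy visual simulation: a ball falls under gravity; each bounce
    is the image event that queues the sound event."""
    y, vy = 1.0, 0.0
    trigger_frames = []
    for frame in range(n_frames):
        vy -= 9.81 / fps
        y += vy / fps
        if y <= 0.0:                      # image event: ball hits the floor
            y, vy = 0.0, -0.6 * vy        # visual side: bounce
            trigger_frames.append(frame)  # sound side: play the sample
    return trigger_frames

if __name__ == "__main__":
    print("sample triggered at frames:", game_loop())
```

Note that the sound knows nothing about the physics: it is a fixed sample, merely synchronized to the visual event.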

The physical approaches, where sound and image events are produced by a common physical cause, use physical models to drive both the sound and the image effects. This is mainly used for simulation. The method is present in many works nowadays, such as Claude Cadoz and Annie Luciani's mass-spring systems, or the work of Perry R. Cook and Kees van den Doel... In these works, sound synthesis is intimately linked to physical interactions. This very consistent approach becomes tricky when simulating phenomena such as crowds, large clusters or other abstract sound and visual effects, which may be one reason why these authors mix their models with non-physical ones.
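As a toy illustration of the physical approach (far simpler than the mass-spring networks of Cadoz and Luciani, and with parameter values invented for the example), the sketch below integrates a single damped mass-spring oscillator at audio rate; the same physical state yields both the audio signal and the subsampled visual position.

```python
# Physical approach: one model, two outputs. A damped mass-spring
# oscillator (m*x'' = -k*x - c*x') is integrated at audio rate; its
# displacement is the audio signal, and the same trajectory,
# subsampled, drives the visuals. All values are illustrative.
import numpy as np

SR = 44100   # audio rate in Hz
FPS = 60     # visual frame rate

def simulate(duration=1.0, k=4.0e4, m=0.001, c=0.02, x0=1e-3):
    """Semi-implicit Euler integration (~1 kHz tone decaying in ~0.1 s)."""
    n = int(SR * duration)
    x, v = x0, 0.0
    audio = np.empty(n)
    dt = 1.0 / SR
    for i in range(n):
        a = (-k * x - c * v) / m
        v += a * dt
        x += v * dt
        audio[i] = x                 # audio: displacement at audio rate
    frames = audio[:: SR // FPS]     # visuals: same state, subsampled
    return audio, frames

if __name__ == "__main__":
    audio, frames = simulate()
    print(f"{audio.size} audio samples and {frames.size} visual frames "
          f"from a single physical model")
```

Here sound and image cannot drift apart, because both are read off the same simulated state; the cost is that every sounding object must be simulated.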

This is what the third set of approaches does, sitting somewhere between the other two and balancing physical and abstract information. James F. O'Brien, for instance, derives sound from physical collision and visualization information “by analyzing the surface motions of objects that are animated using a deformable body simulator, and isolating vibrational components that correspond to audible frequencies.” Nicolas Tsingos and George Drettakis, in the Crossmod project, used physical information such as impacts to control modal synthesis and prerecorded samples... The Topophonie project, presented today by Diemo Schwarz, uses concatenative synthesis based on a sound corpus within a new architecture introducing audio levels of detail and statistical profiles, presented by Christian Jacquemin and Hui Ding, in what I would call an impressionist approach.
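The audio level-of-detail idea can be illustrated with a short sketch. This is emphatically not the Topophonie implementation: the threshold, the grain synthesis and the noise-based statistical profile are all invented for the example; only the principle, detailed rendering near the listener and a cheap statistical stand-in far away, is the point.

```python
# Audio level of detail (illustration only): nearby clusters of sound
# events are rendered one grain at a time; distant clusters collapse
# into a single statistical profile of matching energy.
import numpy as np

SR = 44100

def grain(freq, dur=0.05):
    """One synthesized grain: a short Hann-windowed sine."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def render_cluster(n_events, distance, lod_threshold=10.0, dur=0.5):
    """Choose the rendering detail from the listener's distance."""
    rng = np.random.default_rng(0)
    out = np.zeros(int(SR * dur))
    if distance < lod_threshold:
        # high detail: every event gets its own grain at a random onset
        for _ in range(n_events):
            g = grain(rng.uniform(400, 1200))
            start = rng.integers(0, out.size - g.size)
            out[start:start + g.size] += g
    else:
        # low detail: shaped noise whose energy grows with the event
        # count, standing in for the whole cluster
        out = rng.standard_normal(out.size) * np.sqrt(n_events) * 0.01
    return out / max(1.0, distance)   # naive distance attenuation

if __name__ == "__main__":
    near = render_cluster(50, distance=2.0)
    far = render_cluster(50, distance=40.0)
    print(f"near RMS: {near.std():.4f}, far RMS: {far.std():.4f}")
```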

The coming talks will present different methods for articulating visual space and sound temporalities.
(This is one of the main conceptual and practical bottlenecks in our understanding of the world and its representation; maybe too large a question for today.)

I would like to finish this brief introduction by saying that the traditional way of linking sound and images through the concept of audio-visual relations was perfectly adequate for sound illustration and simple dual-mode interaction. But what happens when multimodal interaction deals with all kinds of temporalities and behaviours, and when active representation becomes part of the real world?

Is a fragile synchronisation link sufficient?

How can we now build more comprehensive audio-graphic architectures?

These questions are some of those we wanted to share with you today.
Thanks a lot
Roland Cahen

(Oslo, 29 May 2011)

What is Topophonie?

The Topophonie project deals with sound navigation through flows and masses of spatialized audiographic events.

The Topophonie research project proposes lines of research and innovative developments for sound and visual navigation in spaces composed of multiple, disseminated sound and visual elements.

read more...

Who is involved?

The project team brings together researchers specialized in sound and in visualization, designers, artists, and companies from the relevant application domains. The partners are: Ensci-les Ateliers, Limsi, Ircam, Navidis, Orbe and User Studio.

learn more...

Support

Topophonie was selected under the 2009 CONTINT (interactive content) call for projects issued by the Agence Nationale de la Recherche, and receives development funding on that basis. The project is also certified by the Cap Digital competitiveness cluster.

ANR (Agence Nationale de la Recherche) · Cap Digital

In pictures: Flickr
In videos: Vimeo