
August 23, 2011, by Diemo Schwarz, Roland Cahen, Hui Ding

DAFx 2011 Workshop Programme Announced

Versatile Sound Models for Interaction in Audio-Graphic Virtual Environments: Control of Audio-graphic Sound Synthesis

Workshop @ Conference on Digital Audio Effects DAFx-11 http://dafx11.ircam.fr
Friday September 23, 2011 at Ircam, Paris

Detailed information about the workshop and the programme can be found here:
http://www.topophonie.fr/event/3
http://dafx11.ircam.fr/?page_id=224

Interactive 3D virtual environments are becoming more widespread in areas such as games, serious games and gamification, architecture and urbanism, information visualization and sonification, and interactive artistic digital media. Measured against current requirements, the limitations of sound generation in existing environments are increasingly obvious.

This workshop will look at recent advances and future prospects in sound modeling, representation, transformation and synthesis for interactive audio-graphic scene design.

Several approaches to extending sound generation in 3D virtual environments have been developed in recent years, such as sampling, modal synthesis, additive synthesis, corpus-based synthesis, granular synthesis, description-based synthesis, physical modeling... These techniques differ considerably in their methods and results, but they can also complement each other towards the common goal of versatile and comprehensible virtual scenes, covering a wide range of object types, of interactions between objects, and of interactions with them.
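
By way of illustration, the minimal sketch below shows one of these approaches, granular synthesis, in a few lines of Python: short windowed grains are read from random positions of a source signal and overlap-added into an output stream. The function name, parameters and NumPy-based implementation are assumptions made for this example only, not part of any platform discussed at the workshop.

```python
# A minimal sketch of granular synthesis from a monophonic NumPy source signal.
# All names and default values are illustrative assumptions.
import numpy as np

def granular_stream(source, sr=44100, duration=2.0, grain_ms=50, density=100, seed=0):
    """Overlap-add short Hann-windowed grains taken from random source positions."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * duration) + grain_len)
    n_grains = int(density * duration)              # grains per second * seconds
    for onset in rng.integers(0, int(sr * duration), n_grains):
        start = rng.integers(0, len(source) - grain_len)
        out[onset:onset + grain_len] += source[start:start + grain_len] * window
    peak = np.max(np.abs(out))
    out = out[:int(sr * duration)]
    return out / peak if peak > 0 else out

# Example: a noisy 'wind-like' texture granulated from one second of filtered noise
texture = granular_stream(np.random.default_rng(1).normal(0, 0.3, 44100))
```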

The purpose of this workshop is to survey these different approaches, to present current work in the field, and to discuss their differences, commonalities and complementarities.

The workshop is free for attendees of the DAFx conference, and open to non-attendees by invitation. Registration for the DAFx conference is available here: http://dafx11.ircam.fr

Program Chairs

Roland Cahen, ENSCI-les Ateliers
Diemo Schwarz, IRCAM
Hui Ding, LIMSI-CNRS & University Paris Sud 11

Program Committee

Nicolas Tsingos (Dolby Laboratories)
Lonce Wyse (National University of Singapore)
Andrea Valle (University of Turin)
Hendrik Purwins (Universitat Pompeu Fabra)
Thomas Grill (Institut für Elektronische Musik IEM, Graz)
Charles Verron (McGill University, Montreal)
Cécile Le Prado (Conservatoire National des Arts et Métiers CNAM)
Annie Luciani (Ingénierie de la Création Artistique ICA, ACROE)
Christian Jacquemin (LIMSI)

Topics in detail

What alternatives to traditional sample triggering exist for producing comprehensive, flexible, expressive and realistic sounds in virtual environments? How can we produce rich interaction with scene objects, for example with physically informed models for contact and friction sounds? How can audio-graphic scenes be edited and structured beyond mapping one event to one sound? There is no standardized architecture, representation and language for auditory scenes and objects comparable to what OpenGL provides for graphics. The workshop will address higher-level questions of architecture and modeling of interactive audio-graphic scenes, down to the detailed questions of sound modeling, representation, transformation and synthesis. These questions cannot be detached from implementation issues: novel and hybrid synthesis methods, comparison and improvement of existing platforms, software architecture, plug-in systems, standards, formats, etc.
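
As a concrete illustration of the physically informed contact models mentioned above, here is a minimal sketch that renders an impact as a sum of exponentially decaying sinusoidal modes. The modal frequencies, decay rates and gains are made-up example values, and the function name is hypothetical.

```python
# A minimal, hedged sketch of a physically informed impact sound: the struck
# object is modeled as a small set of resonant modes, each an exponentially
# decaying sinusoid. Frequencies, decays and gains below are illustrative only.
import numpy as np

def impact_sound(modes, sr=44100, duration=1.0, strike_gain=1.0):
    """modes: list of (frequency_hz, decay_per_second, amplitude) tuples."""
    t = np.arange(int(sr * duration)) / sr
    y = np.zeros_like(t)
    for freq, decay, amp in modes:
        y += strike_gain * amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)
    return y / max(np.max(np.abs(y)), 1e-9)

# Example: a bright, metallic-sounding object (hypothetical modal data)
metal_like = [(523.0, 3.0, 1.0), (1247.0, 5.0, 0.6), (2093.0, 9.0, 0.3)]
signal = impact_sound(metal_like)
```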

New possibilities regarding the use of audio descriptors and dynamic access to audio databases will also be discussed.
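
To make descriptor-driven access to an audio database concrete, the following sketch, in the spirit of corpus-based concatenative synthesis, selects the database unit whose descriptors lie closest to a target vector. The descriptor set (pitch, loudness, brightness), the weighting scheme and the toy database are assumptions made for this example only.

```python
# A minimal sketch of descriptor-driven selection from an audio database.
# The descriptor set and the database layout are hypothetical.
import numpy as np

# Each row describes one sound unit: [pitch_hz, loudness_db, brightness_hz]
database = np.array([
    [220.0, -12.0, 1500.0],
    [440.0,  -6.0, 3000.0],
    [880.0,  -9.0, 5000.0],
])

def select_unit(target, weights=(1.0, 1.0, 1.0)):
    """Return the index of the unit closest to the target descriptor vector."""
    diff = (database - np.asarray(target)) * np.asarray(weights)
    return int(np.argmin(np.linalg.norm(diff, axis=1)))

# Example: ask for a loud, fairly bright unit around 400 Hz
best = select_unit([400.0, -6.0, 2800.0])   # -> index 1 in this toy database
```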

Beyond these main questions, the workshop will cover other recent advances in audio-graphic scene modeling such as:

  • audio-graphic object rendering, and physically and geometrically driven sound rendering,
  • interactive sound texture synthesis, based on signal models or physically informed,
  • joint representation of sound and graphic spaces and objects,
  • sound rendering for audio-graphic scenes:
    • level of detail, which is a very advanced concept in graphics but is rarely treated in audio,
    • representation of space and distance,
    • masking and occlusion of sources,
    • clustering of sources (a minimal sketch follows this list),
  • audio-graphic interface design,
  • sound and graphic localization,
  • cross- and bi-modal perceptual evaluations,
  • interactive audio-graphic arts,
  • industrial audio-graphic data:
    • architectural acoustics,
    • sound maps,
    • urban soundscapes...
  • platforms and tools for audio-graphic scene modeling and rendering.
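
As announced in the list above, here is a minimal sketch of source clustering for audio rendering: sources within a given radius are merged into one representative emitter at their gain-weighted centroid, a point where level-of-detail decisions could then be applied. The greedy grouping strategy, names and values are illustrative assumptions, not a standard API.

```python
# A minimal sketch of distance-based source clustering for audio rendering.
# Sources closer than `radius` are merged into one representative emitter.
import numpy as np

def cluster_sources(positions, gains, radius=5.0):
    """positions: (N, 3) array-like; gains: (N,) linear amplitudes.
    Returns (cluster_positions, cluster_gains)."""
    positions = np.asarray(positions, dtype=float)
    gains = np.asarray(gains, dtype=float)
    unassigned = list(range(len(gains)))
    out_pos, out_gain = [], []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed]
        for i in list(unassigned):
            if np.linalg.norm(positions[i] - positions[seed]) <= radius:
                members.append(i)
                unassigned.remove(i)
        w = gains[members]
        out_pos.append(np.average(positions[members], axis=0, weights=w))
        out_gain.append(np.sqrt(np.sum(w ** 2)))   # incoherent power sum
        # (level of detail: distant clusters could switch to cheaper synthesis)
    return np.array(out_pos), np.array(out_gain)

# Example: three nearby droplets and one distant bird become two emitters
pos = [[0, 0, 0], [1, 0, 0], [0, 2, 0], [40, 0, 0]]
emitters, levels = cluster_sources(pos, gains=[0.2, 0.2, 0.2, 0.5], radius=5.0)
```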

These areas are interdisciplinary in nature and closely interrelated; advances in each will benefit the others. This workshop will provide an opportunity to exchange the latest developments and to point out current challenges and new directions.

What is Topophonie?

The Topophonie project addresses sonic navigation through flows and masses of spatialized audio-graphic events.

It proposes lines of research and innovative developments for sonic and visual navigation through spaces composed of multiple, scattered sound and visual elements.


Who is involved?

The project team is composed of researchers specializing in sound and visualization, designers, artists, and companies active in the relevant application areas. The partners are ENSCI-les Ateliers, LIMSI, IRCAM, Navidis, Orbe and User Studio.


Support

Topophonie was selected under the 2009 CONTINT (interactive content) call for projects issued by the Agence Nationale de la Recherche and receives development funding on that basis. The project is also certified by the Cap Digital competitiveness cluster.

