SCIENTIFIC PROGRAMME

The I3DA 2023 International Conference hosts three Plenary Lectures, Special Sessions, Oral Sessions, Poster Sessions, and sound demos.

Here is the final programme. See you soon!

PLENARY LECTURES

We are pleased to announce the three Plenary Lectures of I3DA 2023. As plenary speakers, we are honored to host:

Angelo Farina
Professor | Department of Engineering and Architecture, University of Parma; Parma, Italy.

Navigating a virtual acoustics world: real-time rendering systems vs precomputed impulse responses

Huseyin Hacihabiboglu
Professor | Middle East Technical University (METU), Ankara, Turkey.

6DoF Audio: Moving beyond the constraints of 3D sound

Convincingly transporting the user to a remote, virtual location using digital and communication technologies has been the holy grail of multimedia research since at least the second half of the 20th century. Today, we are closer than ever to that idealised future, with computational and network infrastructure that can fulfil the requirements, terminals with audiovisual capabilities that achieve unprecedented visual realism, and, more importantly, users eager to consume such content. Along with volumetric visual content, 6DoF audio is one of the most critical components in making navigable immersive environments possible. While 6DoF audio has many components in common with legacy 3D audio technologies, it also has distinct scientific and technological challenges that are bound to make it an interesting research topic for many years to come. We will consider some of these challenges, such as the utilisation of knowledge on and modelling of human movement, sound recording requirements, methods for sound field processing and interpolation, and considerations for presenting 6DoF audio as well as its coding and compression. We will also attempt to outline future opportunities, especially in creating navigable multimedia content.

Filippo Maria Fazi
Professor | University of Southampton, UK.

Listener-adaptive reproduction of spatial audio with loudspeaker arrays

This lecture will cover the fundamentals of several loudspeaker-based methods for spatial audio delivery that were developed at the Institute of Sound and Vibration Research and rely on listener tracking. The oldest of these techniques, Virtual Microphone Array Panning (VMAP), is a pressure-matching approach in which the array of virtual control points moves together with the listener and the audio reproduction DSP parameters are adapted accordingly. The second technique, Compensated Amplitude Panning (CAP), relies on the first-order Taylor expansion of the sound field close to the listener's position. Its extension, Higher Order Stereophony (HOS), is based on a higher-order Taylor expansion of the sound field and can be regarded as a generalisation of traditional stereophony. Finally, listener-position-adaptive Cross-Talk Cancellation (CTC) combines acoustic beamforming, cross-talk cancellation, and computer vision to adapt the audio reproduction algorithm parameters to the listener's position, thus overcoming the narrow sweet-spot issue of conventional CTC systems.
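To make the idea behind CAP and HOS concrete, here is a minimal sketch in our own notation (the symbols below are not taken from the lecture abstract): the reproduced sound field is expanded around the tracked listener position and matched, term by term, to the field of the intended virtual source.

```latex
% Sketch only; notation is ours, not the lecture's.
% First-order Taylor expansion of the reproduced pressure field about the
% tracked listener position \mathbf{x}_0 at angular frequency \omega:
\begin{equation}
  p(\mathbf{x},\omega) \;\approx\; p(\mathbf{x}_0,\omega)
  + \nabla p(\mathbf{x}_0,\omega) \cdot \left(\mathbf{x}-\mathbf{x}_0\right)
\end{equation}
% CAP selects the loudspeaker gains so that the pressure and the pressure
% gradient at \mathbf{x}_0 match those of the target (virtual-source) field;
% HOS retains higher-order terms of the same expansion, which is why it can
% be viewed as a generalisation of traditional stereophony.
```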

Social Events

Social Dinner – 6 Sept 2023 @ 20:00

Donatello Restaurant in Bologna 

The restaurant is located in the heart of downtown Bologna at

via Augusto Righi, 8 – 40126 Bologna (BO)

The amazing theremin player Vincenzo Vasi will give a music performance at our dinner party!

We are planning a special visit to the Museo Pelagalli for attendees.

List of Sessions

Technological advances in eXtended Reality (XR) hint at an unprecedented explosion of applications in which immersive content takes centre stage. Such an explosion requires that vision and hearing, the two primary sensory modalities relevant for spatial perception, are appropriately catered for. 6DoF immersive audio provides a truly lifelike auditory experience: listeners not only hear sounds from all directions but also perceive their distance and location in space and, crucially and unlike with its predecessor 3D audio, can navigate the sound scene without spatial constraints.

Building upon the success of its first edition at I3DA 2021, this special session will bring together researchers from academia and industry to present, discuss, and explore the latest developments in 6DoF immersive audio, including advances in the capture, rendering, coding, compression, and perception of 6DoF audio content. Topics of interest for this special session include but are not limited to:

–   6DoF sound capture and rendering techniques
–   Spatial audio processing algorithms
–   Coding and compression of 6DoF audio content
–   Audio-visual integration in extended reality with specific emphasis on 6DoF applications
–   Perception and subjective assessment of 6DoF audio
–   Applications of 6DoF immersive audio in gaming, film, and other industries

Chairpersons:

Huseyin Hacihabiboglu is a Professor of Signal Processing at METU in Ankara, Turkey. He received the B.Sc. degree from METU in 2000 and the M.Sc. degree from the University of Bristol in 2001, both in electrical and electronic engineering, and the Ph.D. degree in computer science from Queen’s University Belfast in 2004. He held research positions at the University of Surrey and King’s College London. His research interests include immersive audio, room acoustics, psychoacoustics of spatial hearing, microphone arrays, and game audio. He has several patents on spatial audio and microphone arrays and is one of the co-founders of sonixpace, an audio technology start-up based in Ankara. He is a member of the IEEE SPS, the UKRI Peer Review College, the AES, the Turkish Acoustics Society, the EAA, and the ASA. He is the official representative of METU in Moving Picture Audio and Data Coding by Artificial Intelligence (MPAI), where he contributed to the development of the MPAI-CAE (Context-based Audio Enhancement) standard. Between 2017 and 2021, he was an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing. He is currently an Associate Editor of the IEEE Signal Processing Letters.


Zoran Cvetkovic received the Dipl. Ing. and Mag. degrees from the University of Belgrade, Yugoslavia, the M.Phil. degree from Columbia University, and the Ph.D. degree in electrical engineering from the University of California, Berkeley. He is currently a Professor of Signal Processing at King’s College London. He held research positions with EPFL, Lausanne, Switzerland, in 1996 and with Harvard University, Cambridge, MA, USA, during 2002–2004. Between 1997 and 2002, he was a member of the technical staff of AT&T Shannon Laboratory. His research interests include signal processing, ranging from theoretical aspects of signal analysis to applications in audio and speech technology and neuroscience. He was an Associate Editor of the IEEE Transactions on Signal Processing.

Common approaches to room acoustic simulation solve computationally demanding problems that are limited by available resources or must be simplified for real-time rendering. The session will address questions of simulation complexity in light of the interplay between the wish for a physically accurate simulation and the wish for a perceptually convincing one.

Co-chairs:


Bernhard U. Seeber received the Dipl.-Ing. degree in electrical engineering and information technology and the Dr.-Ing. degree with distinction from the Technical University of Munich (TUM), Germany, in 1999 and 2003, respectively. He then was a postdoc at the Department of Psychology at UC Berkeley, USA. In 2007 he joined the MRC Institute of Hearing Research, Nottingham, UK, to lead the Spatial Hearing lab. Since 2012, he has been the head of the Audio Information Processing lab and Professor in the Department of Electrical and Computer Engineering at TUM. His research foci are signal processing for hearing aids and cochlear implants, virtual acoustics, spatial hearing, auditory modeling, and acoustic nondestructive testing. Prof. Seeber is a member of the German Acoustical Society (DEGA), the Association for Electrical, Electronic & Information Technologies (VDE), the Acoustical Society of America (ASA), the Association for Research in Otolaryngology (ARO), and the Bernstein Network for Computational Neuroscience. He heads the technical committee on hearing acoustics in the Society for Information Technology (ITG/VDE) and was a member of the executive board of the DEGA from 2016 to 2022. He received the Lothar-Cremer award of the DEGA, the doctoral thesis award of the ITG, and the ITG publication award.

Stephan D. Ewert studied physics and received his Ph.D. degree in 2002 from the Carl von Ossietzky Universität Oldenburg, Germany. During his Ph.D. project, he spent three months as a visiting scientist at the Research Laboratory of Electronics at the Massachusetts Institute of Technology (MIT), Cambridge, MA. From 2003 to 2005, he was Assistant Professor at the Centre of Applied Hearing Research at the Technical University of Denmark (DTU), Lyngby, Denmark. He re-joined Medizinische Physik at the Universität Oldenburg in 2005, where he has been the head of the Psychoacoustic and Auditory Modeling Group since 2008. Dr. Ewert’s interest in hearing and audio engineering began with developing loudspeakers during his early undergraduate years. His field of expertise is psychoacoustics and acoustics, with a strong emphasis on perceptual models of hearing and virtual acoustics. Dr. Ewert has published various papers on spectro-temporal processing, binaural hearing, and speech intelligibility. More recently, he has also focused on the perceptual consequences of hearing loss, hearing-aid algorithms, instrumental audio quality prediction, and room acoustics simulation.

The application of digital technologies to cultural heritage has led to important methodological changes in the protection, enhancement, and reconstruction of monuments, ancient theatres, and performative spaces of the past. Within this context, the study of sound in archaeological contexts includes many subject areas, ranging from physics and acoustics to soundscape archaeology and aural architecture, passing through digital sound studies and digital approaches to historical acoustemology. By exploring the potential impact of digital techniques in the fields of aural architecture, sonic heritage, archaeoacoustics, heritage acoustics, soundscape archaeology, archaeology of sound, and archaeomusicology, this special session aims to investigate how digital methods and virtual acoustic reconstructions can improve our knowledge of sounds and sound behavior in ancient places, performative spaces, and architectural structures, as well as in the built and natural environments of the past, while promoting their modern reuse and sonic preservation. Moreover, the session aims to analyse the relationship between acoustics, architecture, and environment (and how sound interacts with that environment), as well as the methods concerning anechoic recordings of music, sounds, and voices to be used in the auralisation of ancient places and performative spaces.

Some crucial issues that papers for this special session could address are:

1) whether the location of monuments, ancient theatres, and performative spaces may tell us about sound and auditory culture in antiquity;

2) whether the acoustic qualities of architectural structures may have caused an evolution of ancient buildings and structures, in which there was an auditory experience of the space and built environment;

3) whether the spatial configuration of monuments, ancient theatres, and performative spaces contributed to shaping their intangible acoustic aspects (soundscapes, voicescapes, dancescapes);

4) whether the links between form and function of ancient structures and buildings can shed light on the active properties of aural architecture and on performances in strengthening cultural and social identity;

5) whether digital architectural reconstruction, immersive audio-visual modes, and auralisation could enable us to understand the way sounds and voices were experienced;

6) whether digital models of instruments or the reconstruction of ancient instruments can enhance our knowledge of their evolution in relation to the perception of instrument sound and the physical acoustics of monuments, ancient theatres, and performative spaces of the past.

Keywords: Aural Architecture; Sonic Heritage; Archaeoacoustics; Heritage Acoustics; Soundscape Archaeology; Archaeology of Sound; Archaeomusicology; Auditory Archaeology; Virtual Musical Instruments; Sound Studies; Historical Acoustemology; Ecoacoustics; Psychoacoustics; Sensory Studies; Sonic Built Environment; Sonic Natural Environment; Auralisation; Performances; Music; Voice; Sound; Dance; Performative Spaces; Ancient Theatres; Linear Theatres; Greek Theatres; Greek-Roman Theatres; Bouleuteria; Odeia; Roman Theatres; Temples; Tombs; Stadia; Caves; Grottoes.

Chairperson: Angela Bellia, National Research Council of Italy, Institute of Heritage Science

Bio: Angela Bellia is a researcher at the CNR Institute of Heritage Science. Her work concerns soundscape archaeology, aural architecture, sonic heritage, archaeoacoustics, archaeology of performances, sound studies, and archaeomusicology. After research stays at the Institut für Archäologie in Zürich and at the École des Hautes Études en Sciences Sociales in Paris, she carried out her research at the Institute of Fine Arts of New York University, devoting her attention to the reconstruction of the performative dimension of an ancient Greek polis. Thanks to the Marie Skłodowska-Curie Actions Programme – Individual Fellowships, she carried out research on the archaeology of sound and archaeoacoustics as new approaches to the study of intangible cultural heritage. She has been the Chair of the Italy Chapter and of the Events & Network Working Group of the Marie Curie Alumni Association (MCAA), and she received the “MCAA Outstanding Contributor Award”. At present, she is the Chair of the Archaeomusicology Interest Group of the Archaeological Institute of America and the Principal Investigator of the project AURAL: Exploring the Potential of Immersive Virtual Reality for Experiencing Archaeological Soundscapes. Apart from a number of volumes and contributions in journals and edited volumes, she edited the special issues From Digitalisation and Virtual Reconstruction of Ancient Musical Instruments to Sound Heritage Simulation and Preservation (http://www.archcalc.cnr.it/journal/id.php?id=oai:www.archcalc.cnr.it/journal/A_C_oai_Archive.xml:1143) and Sonic Heritage: Sound and Multisensory Interactions in Immersive Virtual Reality and Cultural Heritage (https://www.mdpi.com/journal/heritage/special_issues/sonic_heritage) (with Eva Pietroni). Her articles include Towards a Digital Approach to the Listening to Ancient Places (https://doi.org/10.3390/heritage4030139) and Rediscovering the Intangible Heritage of Past Performative Spaces: Interaction between Acoustics, Performance, and Architecture (https://doi.org/10.3390/heritage6010016) (with Antonella Bevilacqua). Angela Bellia is also editor-in-chief of TELESTES: An International Journal of Archaeomusicology and Archaeology of Sound (http://www.libraweb.net/promoriv.php?chiave=147).

To see her work: https://nationalacademies.academia.edu/AngelaBellia

We are pleased to invite you to submit a paper to a special session entitled « Tools, Technologies, and Formats ».

The topics of interest for the session include but are not limited to:

  • Software tools (such as standalone applications, plugins, web-based solutions, etc.) for the production and rendering of immersive and 3D audio.
  • Hardware technologies for 3D audio capture or rendering and AR/VR, such as microphone arrays, loudspeaker arrays, head-tracking devices, wearable augmented reality displays and interfaces, etc.
  • Formats (object based, scene based, channel based), standards, codecs, and transmission protocols for the delivery of spatial audio.
  • Design and implementation of interactive audio frameworks and applications.

Co-chairpersons:


Thibaut Carpentier is an R&D engineer at IRCAM (Institut de Recherche et de Coordination Acoustique/Musique), Paris [https://www.ircam.fr].
After studying acoustics at the Ecole Centrale and signal processing at the ENST Telecom Paris, he joined CNRS (French National Centre for Scientific Research) and the Acoustics & Cognition Research Group [https://www.stms-lab.fr/team/espaces-acoustiques-et-cognitifs] in 2008.
His research is focused on spatial audio, room acoustics, artificial reverberation, and computer tools for 3D composition and mixing. In recent years, he has been responsible for the development of Ircam Spat [https://www.stms-lab.fr/shop/product/spat/], he created the 3D mixing workstation Panoramix [https://www.ircam.fr/recherche/equipes-recherche/eac/panoramix/], and he contributed to the conception and implementation of a 350-loudspeaker array for holophonic sound reproduction in Ircam’s concert hall [https://www.ircam.fr/lircam/le-batiment#].

 

Jean-Marc Jot recently founded Virtuel Works to help accelerate the development and deployment of audio, voice and music computing technologies that will power future immersive experiences.  Previously, he headed the development of novel sound processing technologies, platforms and standards for virtual and augmented reality, gaming, broadcast, cinema, and music creation – with Magic Leap, Creative Labs, DTS/Xperi, and iZotope/SoundWide.  Before relocating to California in the late 90s, he conducted research at IRCAM in Paris, where he created the Spat software library for immersive music creation and performance.  He is a Fellow of the Audio Engineering Society and author of numerous publications and patents on digital audio signal processing (sites.google.com/site/jmmjot/).

Abstract:

Recent audio technologies facilitate the simulation of the acoustical signatures of historic structures. Together with immersive visual technologies, they make it possible to provide new perspectives and deeper insights into the role of architecture in human behavior. This panel will discuss the implications of using immersive technologies to reanimate cultural heritage sites, with particular emphasis on archaeology, musicology, anthropology, and sound studies.

Keywords:

Data acquisition, analysis, modeling, architectural acoustics, cultural histories, resonant properties, performance spaces, performance practice, spatial audio, virtual reality, cultural studies, musicology, history of art and architecture, cognitive science, audio engineering, archaeology, anthropology, music, rituals, sound and space, cognition, perception, psychology.

Session co-chairs: Jonathan Berger, Stanford University and Cobi van Tonder, University of Bologna

Bios: Jonathan Berger is the Denning Family Provostial Professor in Music at Stanford University. Berger is a composer in a wide range of genres, including opera, orchestral, chamber, and electroacoustic music. He is also an active researcher, with expertise in computational music theory, music perception and cognition, psychoacoustics, and sonification. He has published over 70 academic articles in a wide variety of fields relating to music, science, and technology, including relevant work in digital audio processing in Neuron, Frontiers in Psychology, and the Journal of the Audio Engineering Society. Among his awards and commissions are the Guggenheim Fellowship, the Rome Prize, fellowships from the National Endowment for the Arts, and commissions from the Lincoln Center Chamber Music Society, the 92nd Street Y, the Spoleto Festival, the Kronos Quartet, and others. Berger is the Principal Investigator of a major grant from the Templeton Religion Trust’s Art Seeking Understanding initiative, to study the interplay of architectural acoustics and musical and ritual sound.

Cobi van Tonder is a practice-led researcher and interdisciplinary artist. She composes and performs microtonal drone music, synthesized music, and sound pieces based on field recordings, infrasound, and virtual acoustics. Her current project, ACOUSTIC ATLAS – Cultivating the Capacity to Listen, developed during a Marie Skłodowska-Curie Fellowship, enables remote listening in the browser through virtual acoustic technology. Cobi completed a Ph.D. in Music Composition at Trinity College, Dublin, an MFA Art Practice degree at Stanford, USA; and a BHons in Music in History and Society (Musicology) at WITS, Johannesburg, South Africa. Cobi is currently a research fellow at the Department of Architecture, Alma Mater Studiorum, University of Bologna.

Recent technological developments are having a fundamental impact on the field of Human-Computer Interaction. Technologies such as holography, head-mounted displays, full-dome immersive video projection, kinesthetic communication (haptic technology), transparent monitors, three-dimensional (3D) sound, and electronic sensors facilitate sophisticated and interactive environments using augmented and virtual reality. The aim of this augmentation is participant immersion, the ultimate goal of an effective virtual or augmented experience. It is a common belief that aurality constitutes an essential part of VR and AR and adds detail and a visceral sense to the immersive experience. Aurality encompasses the synthesis, spatialisation, and reception of sound in a virtual world.

In this session, we welcome research papers on all individual aspects of aurality and sound spatialisation, as well as novel engineering efforts and applications that bridge VR and AR development with immersive sound and auralisation.

Chairperson:

Athanasios G. Malamos received a B.Sc. in Physics from the University of Crete (1992) and an M.Sc. and a Ph.D. from the Technical University of Crete in 1995 and 2000, respectively. From 2002 until 2019 he was with the Technological Educational Institute of Crete, Department of Informatics Engineering (formerly the Department of Applied Informatics and Multimedia). Since May 2019 he has been a professor in the Electrical and Computer Engineering Department of the Hellenic Mediterranean University. He is an active member of the Web3D community: he is a member of the Web3D Consortium spatial sound and semantics group and represents the consortium in the W3C Audio Working Group. His laboratory team has contributed to the spatial sound, Humanoid, and Physics libraries, and to streaming in X3D platforms. Dr. Malamos has been honored with the ACM Recognition of Service Award and recognized by the Web3D Consortium for his efforts in the organization of Web3D conferences. His research interests include online multimedia services, Web3D, and Virtual and Augmented Reality.

The 3D measurement, simulation, and auralization of the sound field inside and outside cars are covered in this session. Researchers are invited to submit scientific contributions on the following topics, including but not limited to:

  • Sound synthesis for electric cars and Active Vehicle Alert Systems (AVAS).
  • Implementation and virtualization of Active Noise Control (ANC) systems.
  • 3-DoF and 6-DoF auralization of car sound systems.
  • Algorithms and practical implementations of 3D audio techniques for personalized listening experiences.
  • 3D audio technology to provide the driver with precise directional information (Augmented Reality).

Chairpersons:

Angelo Farina graduated in Civil Engineering in 1982 at the University of Bologna, where he also obtained a PhD in Technical Physics in 1987 (I° cycle). He was a University Researcher for the disciplinary group “Technical Physics” from 11/1/1986, Associate Professor in Environmental Technical Physics (ING-IND/11) from 11/1/1998, and Extraordinary Professor in Environmental Technical Physics (ING-IND/11) from 1/5/2005; since 1/5/2008 he has been Full Professor of Environmental Technical Physics. He has conducted extensive and in-depth research in almost all fields of acoustics, particularly with digital signal processing techniques and numerical prediction models. Since the early 1990s he has been among the pioneers of the MLS (Maximum Length Sequence) technique for acoustic and vibration measurements, and he developed his own original method for simulating sound propagation in halls and in the open, based on pyramid tracing (Pyramid Tracing); this algorithm has been implemented in the Ramsete commercial program, of which Prof. Farina is co-author (www.ramsete.com). He later developed innovative acoustic measurement techniques, in particular for the in situ evaluation of the acoustic properties of materials, for the three-dimensional characterization of the sound field, and for the measurement of the transfer function of non-linear and non-time-invariant systems. These new methods resulted in a series of plugins for processing sound signals, known under the name Aurora (www.aurora-plugins.com). In the last three years he has dedicated himself in particular to measuring the impulse response of theatres using microphone arrays, and to the three-dimensional reproduction of the sound field in special listening rooms, making use of advanced signal processing techniques made possible by efficient real-time multichannel convolution algorithms. He is the author of over 300 scientific publications, about three quarters of them in English, presented at important international conferences or published in international journals. The complete list can be consulted at: http://pcfarina.eng.unipr.it/Public/Papers/list_pub.htm

In 2008 he was awarded the prestigious “AES fellowship award” by the Audio Engineering Society.

Daniel Pinardi received the M.S. degree (cum laude) in mechanical engineering from the University of Parma, Italy, in July 2016, with a thesis on loudspeaker modeling, and the Ph.D. degree in industrial engineering from the University of Parma, in March 2020, with a thesis on the design of microphone, hydrophone, and camera arrays for spatial audio recording.

He has been a Research Assistant of Prof. Angelo Farina at the University of Parma since 2016, specializing mainly in spatial audio, the design of transducer arrays, acoustic simulations, and 3D auralization applied to the automotive field and underwater acoustics.

This special session focuses on the use of acoustic parameters in creative work, including the perception of spatial qualities of sound in space, auralisation as musical material, convolution reverb, 3D impulse responses, field recordings, soundscape, conceptual art based on sound and space, resonance, infrasound, meditation via sound, technological environments, fluid architectures, and immersive VR/XR/AR art projects.

We invite artists who work with sound and space as media in a wide array of contexts where spatial audio or immersive audio technologies enable artistic work. Practitioners in sound art, composition, sound studies, music production, sound design, VR, AR, XR, theatre, architecture, computer-generated art, and philosophy will present research on ‘sound and space’ as artistic material. Hybrid projects that support multidisciplinary approaches are especially welcome.

We welcome papers about (but not limited to):

  • Sound as invisible architecture
  • Spatiotemporal perspectives in 3D sound
  • Sonic environments, architecture and technological spaces
  • Auralisation
  • Electroacoustic composition
  • AI and listening
  • Environmental sound ecology
  • Field recordings, soundscape
  • Meditation through sound, (deep listening), sound walks
  • Conceptual art based on sound and space
  • Vibration, resonance, infrasound
  • Algorithms and the creation of the spatial experience
  • History of spatial audio in sound art practice
  • Wellness

Keywords: Deep Listening, Spatial Audio, Reverb, Sound and Space, Landscape, Soundscape, Soundwalks, Binaural, Multichannel, Diffusion, Ambisonics, Wave Field Synthesis, VR, AR, XR, 3D models, Twins, architecture, virtual architecture, abstract space, imaginary space, imaginary landscape, AI-generated, AI, sounding, Sound propagation, Echolocation, Infrasound, HRTF, Human Ears, Listening, Urban Space, Environmental, Sonic Ecology, Film, Theatre, Dance, Sound Design, Instrument Design, Human Computer Interaction, Internet Art, Webaudio, Browser Art, Interface Design, DSP, Robotics, Interactive Art, Simulation, Digital Art, Electroacoustic Music, Computer Music.

Co-chairs:


Cobi van Tonder is a practice-led researcher and interdisciplinary artist. She composes and performs microtonal drone music, synthesized music, and sound pieces based on field recordings, infrasound, and virtual acoustics. Her current project, ACOUSTIC ATLAS – Cultivating the Capacity to Listen, developed during a Marie Skłodowska-Curie Fellowship, enables remote listening in the browser through virtual acoustic technology. Cobi completed a Ph.D. in Music Composition at Trinity College, Dublin, an MFA Art Practice degree at Stanford, USA; and a BHons in Music in History and Society (Musicology) at WITS, Johannesburg, South Africa. Cobi is currently a research fellow at the Department of Architecture, Alma Mater Studiorum, University of Bologna.

Giulia Vismara is a researcher and an electroacoustic composer. She currently teaches History of Electroacoustic Music at the Conservatory in Castelfranco, Italy. She is a postdoctoral researcher at Antwerp’s Royal Conservatory and Academy of Fine Arts, a co-founder of the SSH! Study Center at Iuav, University of Venice, as well as a member of RISME digitali, a research group committed to the study of the use of electronic and digital technologies in musical and sound creation.
Space is the key to her work, the matrix that shapes the music she composes. Her works range from electroacoustic music to sound installation, music for theatre, performance and video art. www.giuliavismara.com

This session covers applied research on new technologies, modelling, signal processing, audio reproduction, and virtual soundscape experiences. It aims to collect and discuss projects and case studies characterized by experiences of auralisation and acoustic reconstruction of spaces and buildings of particular historical and/or architectural value.

Researchers are invited to present practical implementations of auralisation strategies in room acoustics, sound design, and virtual reality, with special interest in the acoustic heritage of places like theatres and auditoria, where new acoustic scenarios and aural experiences for music and other performing arts can be created.

Sergio Luzzi is Adjunct Professor at the University of Florence (Italy), Honorary Visiting Professor at USURT University of Ekaterinburg (Russia), and Scientific Director of the OSHNET SCHOOL for high certified education in Occupational Hygiene (Izmir, Turkey). He is also president-elect of the International Institute of Sound and Vibration; member of the Executive Council of the European Acoustics Association (2008-2019); ad hoc expert of the EU Commission for the URBACT and Horizon 2020 programs (2018-ongoing); coordinator of the noise awareness campaigns INAD ITALIA (2009-ongoing) and INAD IN EUROPE (2017); coordinator of the international student competition office for the International Year of Sound (2020+); as well as director of educational multimedia programs and films for acoustics learning and noise awareness.

Luzzi serves as General Secretary of AIA – the Acoustical Society of Italy (2021-ongoing), Vice-President of AIDII – the Italian Occupational Hygienists Association, and President of PESCAS – the Association of Experts for the Protection and Culture of the Environment and Health (2020-ongoing). He is a member of the scientific and organizing committees of many international congresses, conferences, and symposia, and was General Chairman of ICSV22, the International Congress on Sound and Vibration, held in Florence in 2015. Sergio Luzzi is President and Technical Director of “Vie en.ro.se. Ingegneria”, a certified consultancy and design company founded in 1992 and a leader in applied acoustics, civil and environmental engineering, and architecture, with wide and long-lasting experience in environmental, building, and room acoustics, occupational and environmental hygiene, sustainable development, and management systems. He has significant international experience as leader or partner in EU-funded projects and in national and international consultancies, including forensic matters, in collaboration with universities and research institutes from several countries. Sergio Luzzi is the author of 7 books, editor of congress proceedings, guest editor of special issues of scientific journals, and author or co-author of more than 200 scientific papers in the fields of environmental acoustics, building acoustics, forensic acoustics, and industrial and occupational noise control.

The topics of interest include but are not limited to methods for synthesizing or reproducing sound fields over an extended area by means of loudspeaker arrays. Prominent examples of such methods are wave field synthesis and ambisonics. We will also be pleased to receive contributions on transaural audio presentation.
Chair:
Jens Ahrens has been an Associate Professor within the Division of Applied Acoustics at Chalmers since 2016. He received a Diplom (equivalent to an M.Sc.) in Electrical Engineering and Audio Engineering jointly from Graz University of Technology and the University of Music and Performing Arts, Graz, Austria, in 2005. He completed his Doctoral Degree (Dr.-Ing.) at the Technische Universität Berlin, Germany, in 2010.
From 2011 to 2013, he was a Postdoctoral Researcher at Microsoft Research in Redmond, Washington, USA, and in the fall and winter terms of 2015/16, he was a Visiting Scholar at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, California, USA. He is an Associate Editor of the IEEE Signal Processing Letters, of the IEEE/ACM Transactions on Audio, Speech, and Language Processing, and of the EURASIP Journal on Audio, Speech, and Music Processing.

Filippo Maria Fazi is Professor of Acoustics and Signal Processing at the Institute of Sound and Vibration Research of the University of Southampton, where he is Head of the Acoustics Group and leads the Virtual Acoustics and Audio Engineering Team. His research interests include acoustics, audio technologies, electroacoustics, and digital signal processing, with special focus on acoustical inverse problems, multi-channel systems (including Ambisonics and Wave Field Synthesis), virtual acoustics, and microphone arrays. He is the author of more than 150 scientific publications and several patents. Prof. Fazi was awarded a research fellowship by the Royal Academy of Engineering (2010) and the Tyndall Medal by the Institute of Acoustics (2018). He is a fellow of the Audio Engineering Society, a member of the Institute of Acoustics, and the co-founder and chief scientist of AudioScenic, a start-up company that commercialises loudspeaker array technologies developed by Prof. Fazi and his team.

We are pleased to invite you to submit a paper to the special session entitled «Application of 3D Audio to room acoustic design».

The topics of interest for the session include but are not limited to:

  • Applications of 3D sound in design and architectural projects.
  • 3D audio measurements for room acoustic design.
  • 3D modeling for sound perception research in virtual reality environments.
  • Software tools and facilities for production and rendering of immersive and 3D audio.
  • Integration of immersive and 3D audio to design tools used in architectural practices.

Co-chairs:

Dr. Louena Shtrepi has been an assistant professor at Politecnico di Torino, in the Department of Energy “Galileo Ferraris”, since 2018. She holds a university degree in architecture from both Politecnico di Torino and Politecnico di Milano; as part of her master’s degree, she also obtained the Alta Scuola Politecnica diploma in 2010. She received her PhD degree in 2015 in Metrology: Measuring Science and Techniques, for which she was awarded the Newman Medal (Newman Student Award Fund and Acoustical Society of America) for excellence in the study of acoustics and its application to architecture.

Her research and teaching interests focus on applied acoustics, more specifically room acoustics and building acoustics. Since 2012, she has worked on acoustic material properties, acoustic simulations, measurement uncertainty, and metamaterial design and applications. Furthermore, her research aims to raise awareness about acoustic issues and solutions from the early stages of the design process by actively involving architects and designers. These aspects have been studied in depth in multidisciplinary investigations that also involve subjective perceptual testing to promote listening in the architectural design process. Her research results have been published in highly rated journals and recognized with several awards at different conferences.

She teaches in graduate courses on Engineering of Sound Systems in the MSc program in Cinema and Media Engineering, Food Space Design (Light, Sound, Clima for food spaces), and Exhibit Design: Light and Sound in Design and Visual Communication. She frequently collaborates within the framework of the agreement between Politecnico di Torino and the School of Electronic Music of the Conservatorio Giuseppe Verdi di Torino. She is a member of AIA (Associazione Italiana di Acustica), ASA (Acoustical Society of America), FTI (Associazione della Fisica Tecnica Italiana), and ATI (Associazione Termotecnica Italiana).

Arianna Astolfi

Arianna Astolfi is an associate professor of building physics at the Department of Energy of the Politecnico di Torino, where she teaches building physics and applied acoustics and is responsible for the Applied Acoustics Laboratory.

She has been vice-President of the European Acoustics Association since 2022 and co-chair of the EAA Technical Committee on Room and Building Acoustics since 2017. She has also been a member of the National Council of the Italian Acoustical Association since 2014. She is general chair of Forum Acusticum 2023, to be held in Torino in September 2023.

Arianna is Associate Editor of “Applied Acoustics” and a member of the editorial boards of “Acoustics” and “Building Acoustics”. Her main research interests include classroom acoustics, speech intelligibility, and voice monitoring, but she also works on sound diffusion, acoustical characterization of materials, sound insulation, and soundscape. She is the author of more than 90 peer-reviewed journal papers and hundreds of conference papers, holds three patents, and has created two start-ups incubated in the I3P incubator of the Politecnico di Torino.

Arianna serves on the UNI committee that is developing technical standards on acoustic requirements for indoor environments such as schools, offices, and hospitals.

The measurement, recording, and simulation of auditorium and theater acoustics are covered in this session. We are looking for cutting-edge technologies and innovative scientific approaches for acoustic measurement and modeling.

Keywords: historical theatre, modern auditorium, acoustic measurements and simulations, acoustic modeling, data analysis, computer science.

Chairperson: Ruoran Yan

Ruoran Yan is a PhD student in Architecture and Design Cultures at the Alma Mater Studiorum University of Bologna (37th cycle). Her doctoral research project, ‘A study of interactive architectural acoustics for genius loci regeneration driven by cultural heritage’, is in progress. During her postgraduate studies she specialized in the energy efficiency of existing buildings and in the current situation and restoration of ancient villages in China, and she has experience in interactive architectural design based on multimedia. Her research interests include theatre acoustic simulation, deep-learning-based acoustic models, and interactive design.

This special session is focused on nonlinear acoustics applied to 3D audio systems.

Every audio system can be affected by nonlinearities introduced by the sound reproduction and acquisition chain.

In nonlinear systems, the output signal contains additional frequency components that are not present in the original signal. This aspect becomes important in immersive audio systems, which require high-fidelity reproduction and a realistic identification of the environment.
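As a minimal numerical sketch of this effect (a hypothetical memoryless polynomial nonlinearity, not a model of any particular system discussed in the session), the following Python snippet passes a pure tone through a distorting reproduction chain and lists the spectral components that appear at the output:

```python
import numpy as np

# Assumed parameters, for illustration only.
fs = 48000                              # sample rate in Hz
f0 = 1000                               # frequency of the input tone in Hz
t = np.arange(fs) / fs                  # one second of time samples

x = np.sin(2 * np.pi * f0 * t)          # linear input: energy only at 1 kHz
y = x + 0.1 * x**2 + 0.05 * x**3        # hypothetical nonlinear reproduction chain

# Normalised magnitude spectrum of the distorted output.
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)

# Components above a small threshold: expect 0 Hz, 1 kHz, 2 kHz and 3 kHz,
# i.e. harmonics that were not contained in the original signal.
print(freqs[spectrum > 1e-3])
```

Real reproduction and acquisition chains are, of course, frequency dependent and have memory, which is one reason why block-oriented structures such as Hammerstein and Volterra models are widely used for nonlinear audio system identification.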

We are pleased to invite you to submit scientific contributions on the following topics, including but not limited to:

– nonlinear audio system identification

– nonlinear 3D auralization

– acoustic measurements for nonlinear systems

– nonlinear sound propagation

– nonlinearities in 3D audio systems

Chairs:  Stefania Cecchi & Valeria Bruschi

Stefania Cecchi was born in Amandola, Italy, in 1979. She received a Laurea degree (with honors) in electronic engineering from the University of Ancona (now Università Politecnica delle Marche, Italy) in 2004 and a Ph.D. degree in electronic engineering from the Università Politecnica delle Marche (Ancona, Italy) in 2007. She was a postdoc researcher at the DII (Department of Information Engineering) of the same university from February 2008 to October 2015 and an Assistant Professor from November 2015 to October 2018, and she has been an Associate Professor in the same department since November 2018. She is the author or coauthor of numerous international papers. Her current research interests are in the area of digital signal processing, including adaptive DSP algorithms and circuits, and speech and audio processing. Prof. Cecchi is a member of the AES, the IEEE, and the Italian Acoustical Association (AIA).

Valeria Bruschi received the M.Sc. degree (cum laude) in electronic engineering in 2018 from the Università Politecnica delle Marche, Ancona, Italy, with a thesis on immersive sound reproduction in real environments. She is expected to obtain her PhD by June 2023 from the Department of Information Engineering of the same university, where she currently holds a postdoc fellowship. The main topic of her PhD is innovative systems for enhancing immersive audio rendering. Her current research interests are mainly focused on digital audio signal processing (DSP), with particular attention to adaptive DSP algorithms for linear/nonlinear system identification, audio equalization, and immersive audio systems. She is a Student Member of the Audio Engineering Society (AES).

A wide range of sound-absorbing and sound-insulating elements are currently available on the market. They are generally used to adjust the acoustic features of a room, thanks to the progress made in the research and development of materials and the introduction of new manufacturing technologies. With the growing need to protect the environment, acoustic performance is only one of the required characteristics, together with environmental compatibility, longevity, and affordable cost.

The session “Acoustic characterisation of materials and systems” aims to collect advances in the description of sound-absorbing and sound-insulating materials. Application fields include the building, industrial, and tertiary sectors. Works featuring experimental, numerical, or theoretical applications of novel materials and metamaterials are welcome.

– sound absorption
– surface acoustic properties
– metamaterial
– porous material
– environmental compatibility
– life cycle assessment
– experiment
– simulation
– model

Chairperson:

Edoardo Piana is an associate professor and the founder and head of the Applied Acoustics Laboratory (ISO 9001 certified) at the University of Brescia. His main research fields are: acoustic insulation of composite materials, acoustic absorption of innovative materials, acoustic metamaterials, environmental acoustics, noise from high-voltage power lines and electrical substations, and sound propagation in ducts.
He has authored numerous articles in international and national scientific journals and participates actively in the main congresses in the sector. He is the scientific referent for the University of Brescia in the cooperation agreements for the development of projects in the field of vibroacoustics with the Royal Institute of Technology (KTH) in Stockholm and the Katholieke Universiteit Leuven.
A member of ISO/TC 43/SC 2/WG 32, “Determination of acoustical parameters of materials”, and of the UNI technical commission “Acoustics and vibrations”, he collaborates with various companies on projects with high scientific content.
At his university, he teaches Applied Acoustics and Technical Physics courses and has supervised numerous degree and doctoral theses.

This session concerns the study of the acoustics of sacred spaces such as churches, mosques, and places of worship located outdoors or in confined spaces (even caves). The purpose of the session is to discuss the effects of acoustics in these environments and how the acoustics influence perception of and participation in religious services. Experiences relating to the acoustic correction of these environments can also be presented.

Gino Iannace is an associate professor at the University of Campania Luigi Vanvitelli and is qualified as full professor in 9C2 (ING-IND 11). He holds a PhD in environmental acoustics and is active in all aspects of acoustics, with particular interests in environmental acoustics and the acoustic properties of materials. Iannace has carried out research through collaboration on national and international projects and has participated in numerous national and international conferences. He is a reviewer for multiple journals and an editor of the International Journal of Environmental Research and Public Health. Iannace has more than 30 years of professional experience in the acoustics sector, carrying out action plans, acoustic zoning, noise maps, and various acoustic impact assessments.

The 3D measurement, recording, reconstruction, and simulation of soundscapes in outdoor scenarios are covered in this session. Researchers are invited to present practical implementations of auralization strategies in soundscape design, with special interest in urban areas, such as parks, squares, and open spaces, as well as natural areas.

Keywords: soundscape, measurements and simulations, acoustic modeling, auralization

Chair:

Francesco Asdrubali has been full professor of Building Physics and Building Energy Systems at the University of Roma Tre since 2015. Previously he served as Assistant Professor and Associate Professor at the University of Perugia.

Graduated in Civil Engineering in 1990, he obtained a PhD in the thermophysical properties of materials in 1995 and served as Director of CIRIAF, an inter-university research center in the field of environment and pollution based at the University of Perugia, from 2004 to 2013.

His teaching activities include the courses of Environmental and Room Acoustics, Applied Thermodynamics and Heat Transfer, and Energy Systems and Environment at the Universities of Perugia and Roma Tre.

His scientific research covers a wide range of topics, such as the acoustical properties of materials, environmental noise and soundscape, sustainable mobility, renewable and alternative energies, heat transfer, energy and buildings, and Life Cycle Assessment.

Francesco Asdrubali is the author of more than 400 scientific papers, most of them with international diffusion. He is a member of the Editorial Board of various international journals and Editor-in-Chief of two journals in the field of acoustics: Building Acoustics and Noise Mapping.

His international activities include the coordination of various national and EU-funded projects (LIFE, Intelligent Energy Europe, FP7, Horizon 2020) and participation in a COST Action on innovative acoustic materials.

He has been a member of the Scientific and Organizing Committees and a Session Organizer of several international conferences, such as ICSV16 (2009), the ICAE International Congress on Applied Energy (2011), ECOS (2012), EAA Euroregio (2013), ICSV22 (2015), Euronoise (2015), ICSV23 (2016), ICSV24 (2017), Internoise (2017), Building Simulation (2019), ICSV26 (2019), and Forum Acusticum (2023).