18 September 2025

Webinar: A new era for audio AI data

Our biggest SDK update ever: Dynamic scenes, large-scale data workflows and better AI outcomes

On February 26th, Treble will host a dedicated webinar showcasing major advancements in the Treble SDK. To accommodate both European and US audiences, we will run two sessions on the same day.

- European Session - 9:00 GMT / 10:00 CET

- USA Session - 18:00 GMT / 10:00 PST / 13:00 EST

On the agenda for this webinar:

  • Audio that moves - finally
    Synthetic audio has traditionally been static, unable to represent the fluid nature of sound in real life. People move. Devices move. Conversations shift direction. Acoustic conditions change continuously. In this webinar, Treble will introduce world-first dynamic scene modeling, enabling simulation of moving sound sources, moving listeners, and time-varying spatial relationships. This unlocks more realistic training data and enables virtual prototyping of audio devices operating in dynamic environments - a fundamental expansion of what synthetic audio can represent.
  • Curated audio data, directly accessible
    Treble is evolving from a simulation SDK into a full Audio AI Data Platform. We will show how teams can directly access large-scale, curated audio and acoustics datasets and work with them alongside simulation and data generation. The session will demonstrate how data can be processed and tailored to specific use cases, with metadata treated as a first-class object that can be customized and enriched. Attendees will see how this enables faster iteration, better collaboration, and training datasets designed for real-world conditions. 
  • Research spotlight: why better data wins
    Treble will present new research showing measurable improvements in downstream ML performance when training on Treble data, including gains in monaural and multi-channel speech enhancement, speech recognition, direction-of-arrival & distance estimation, and blind room estimation. We will discuss why, as model architectures converge, data quality and diversity become decisive for audio AI performance.
  • New data-centric ML workflows
    High-quality data only matters if teams can use it efficiently. We will introduce new data-centric workflows for handling and processing large audio datasets at scale, improving collaboration between data, simulation, and ML teams, and integrating Treble seamlessly with existing training infrastructure. Attendees will see how these workflows reduce friction and improve reproducibility in real ML pipelines.
  • Own-voice modeling and enhanced device handling
    We will showcase new capabilities for modeling challenging device-level audio scenarios, including devices with microphones and speakers and complex near-field and own-voice interactions. The session will demonstrate how detailed geometry and acoustic configurations can be handled efficiently, improving both training data quality and realism in virtual device prototyping.
  • Automated scene generation
    We will introduce new tools for automated virtual audio scene generation, enabling teams to construct nuanced audio scenes at scale by controlling SNR, speech overlap, turn-taking, and environmental complexity, while incorporating real speech data where needed. We will show how this integrates with Treble’s 3D world capabilities, including room databases, programmatic environment generation, and CAD and OpenUSD-based workflows.
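To make the SNR control mentioned above concrete, the sketch below mixes a noise signal into a speech signal at a chosen signal-to-noise ratio. This is a generic, self-contained illustration of the underlying technique using synthetic arrays; it is not Treble SDK code, and the function name `mix_at_snr` is our own.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise ratio equals `snr_db`, then mix."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Gain that brings the noise to the power implied by the target SNR.
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

# Synthetic stand-ins for simulated room audio (1 s at 16 kHz).
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mixture = mix_at_snr(speech, noise, snr_db=10.0)
```

A scene-generation tool applies this kind of scaling per source, alongside controls for overlap and turn-taking, to sweep a dataset across acoustic conditions systematically.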

Industry guests

Industry leaders in audio technology and voice AI will join us to discuss how Treble elevates their products and accelerates their research and development efforts.

ai-coustics

ai-coustics will showcase why speech enhancement models require training data that closely matches real-world acoustic conditions, and how high-quality synthetic data makes it possible to introduce controlled variability in speaker positions, noise sources, and room characteristics. They will show how this enables systematic scene construction, such as placing multiple speakers at different distances from a microphone to train more specialized and robust models.

Jabra GN

In the audio industry, many measurements are required for product development, testing, and recordings across different scenarios, and these are often costly and time-consuming. Jabra GN's approach is to use Treble software to replace selected measurements, enable deeper investigation of existing products, support the design of new ones, and generate synthetic data to improve audio algorithms. As an initial step, Treble was validated through simulations of rooms with varying sizes and reverberation times, as well as simulations of videobars and speakerphones, which showed very strong agreement with real measurements. Treble is now used to further develop products and audio algorithms, and the presentation therefore covers room and device simulations across scenarios, microphone array design, and synthetic data generation.

Hugging Face

Hugging Face will present an overview of their audio ecosystem, including education resources, leaderboards, and Transformer-based ASR and TTS models, and explain why high-quality data is critical for building reliable audio systems. They will highlight the Treble10 collaboration through the public datasets released on Hugging Face, and demonstrate how Treble enables training and evaluation in realistic acoustic conditions with reverb, noise, and movement, while new Treble SDK features simplify development workflows through Hugging Face tooling.

Meet the speakers

CEO - Treble Technologies

Dr. Finnur Pind

Dr. Finnur Pind is the CEO and co-founder of Treble, a company revolutionizing sound simulation and acoustic design through cutting-edge technology. With a background in acoustical engineering and years of experience driving innovation, Finnur has been at the forefront of developing tools that enable designers, engineers, and architects to create better-sounding environments and products. Under his leadership, Treble has become a global leader in cloud-based sound solutions, pushing the boundaries of precision and collaboration in the industry.

Senior Product Manager for the Treble SDK - Treble Technologies

Dr. Daniel Gert Nielsen

Dr. Daniel Gert Nielsen is a specialist in numerical vibro-acoustics, with a PhD focused on loudspeaker modeling and optimization. His expertise spans acoustic simulation for communication devices and synthetic data generation for machine learning applications. With a strong background in numerical methods and audio technology, he plays a key role in shaping advanced acoustic modeling solutions at Treble.

Senior Audio Research Engineer - Treble Technologies

Dr. Georg Götz

Dr. Georg Götz is a leading expert in acoustics and audio signal processing, with a PhD specializing in data-driven room acoustics modeling. His research focuses on large-scale data acquisition, multi-exponential sound energy decay, and 6DoF late reverberation rendering, contributing to more accurate and immersive spatial audio simulations. With extensive experience bridging theoretical research and real-world applications, Dr. Götz plays a key role in advancing next-generation acoustic modeling at Treble.

Senior Simulation Specialist

Dr. Sarabeth Mullins

Dr. Sarabeth Mullins holds a PhD in acoustics focused on the acoustics of Notre-Dame de Paris, with expertise in room acoustics, human-acoustic interaction, and acoustic simulations using both wave-based and geometric methods to address real-world challenges.

Audio ML Engineer - Hugging Face

Dr. Eric Bezzam

Eric Bezzam is an audio ML engineer at Hugging Face. He received his PhD from EPFL, and previously worked at Snips, Sonos, DSP Concepts, and Fraunhofer IDMT. He was one of the main developers of pyroomacoustics.

Head of Research - ai-coustics

Dr. Tim Janke

Tim Janke is Head of Research at ai-coustics. He holds a PhD in Machine Learning from TU Darmstadt and has published at NeurIPS. With more than ten deep learning papers, he is an expert in generative audio and voice AI, driving breakthroughs that are shaping the future of audio technology at ai-coustics.

Simulation Specialist - GN

Dr. Yauheni Belahurau

Dr. Yauheni Belahurau works as a Simulation Specialist at GN in the Audio Technology department. Prior to joining GN, he completed his PhD at the Technical University of Denmark (DTU), at the Centre for Acoustic-Mechanical Microsystems, followed by a postdoctoral fellowship. His research and professional interests include the design of acoustic metamaterials and transducers, computational optimization, and advanced numerical methods for vibroacoustic problems. His work focuses on leveraging simulation-driven approaches to support product development and the optimization of audio systems.
