17 October 2023

On-demand: Treble SDK Launch Webinar

Register your interest in the Treble SDK

Please fill in this form to register your interest in our Treble SDK and join our waiting list for a free trial. We will get back to you as soon as possible to discuss your specific needs.

The Treble SDK

The Treble SDK offers an easy-to-use programmatic interface to Treble’s cloud-based sound simulation engine. Because it is an SDK, the range of audio and acoustics applications it can serve is virtually unlimited, bounded only by the imagination of its users. Treble’s powerful simulation engine allows for accurate simulations of sound in complex environments, including the modeling of audio devices, complex materials, and acoustic phenomena such as diffraction and phase. The SDK is packed with powerful pre- and post-processing functionality to facilitate efficient, simulation-based workflows that were previously impossible. It lets engineers generate vast amounts of hyper-realistic synthetic audio data to train audio machine learning models, engage in virtual prototyping of audio products, conduct efficient parametric studies, and much more, all through an easy-to-use Python-based interface.
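To give a concrete flavor of what such a Python-based workflow could look like, here is a hypothetical sketch. The module, class, and method names (`treble_sdk`, `Client`, `create_simulation`, and so on) are illustrative placeholders, not the actual Treble SDK API.

```python
# Hypothetical sketch only: module and method names below are placeholders,
# not the actual Treble SDK API. The intent is to illustrate the kind of
# Python-based, cloud-backed workflow described above.
from treble_sdk import Client  # placeholder import

client = Client(api_key="YOUR_API_KEY")             # authenticate against the cloud engine

room = client.import_geometry("meeting_room.obj")   # load a 3D room model
room.assign_material(surface="walls", material="painted_concrete")

sim = client.create_simulation(
    geometry=room,
    sources=[{"position": (1.0, 2.0, 1.5)}],        # loudspeaker position in metres
    receivers=[{"position": (3.0, 1.0, 1.2)}],      # microphone position in metres
    method="hybrid",                                 # wave-based at low frequencies, geometrical above
)

result = sim.run()                                   # executed in the cloud
ir = result.impulse_response(source=0, receiver=0)   # retrieve a simulated room impulse response
ir.save("meeting_room_ir.wav")
```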

Synthetic Audio Data Generation

Create custom, high-quality synthetic datasets of sound in space. Say goodbye to collecting data through measurements or from simplistic simulations and instead create much larger datasets of rich, realistic audio scenes with ease. Train and test models for speech enhancement, echo cancellation, source localization, noise suppression, dereverberation, blind room estimation, and any other audio machine learning task.
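As a minimal sketch of one common downstream step, simulated room impulse responses (RIRs) can be convolved with dry speech and mixed with noise to produce training examples. The file names, sample rate, and target SNR below are illustrative assumptions.

```python
# Minimal sketch: turn a simulated RIR into a noisy, reverberant training example.
# Assumes mono WAV files; file names and the 10 dB SNR target are illustrative.
import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf  # libsndfile-backed 'soundfile' package

speech, fs = sf.read("dry_speech.wav")        # anechoic speech recording
rir, fs_rir = sf.read("simulated_rir.wav")    # RIR exported from an acoustic simulation
assert fs == fs_rir, "speech and RIR must share a sample rate"

# Wet signal as heard at the receiver position.
reverberant = fftconvolve(speech, rir)[: len(speech)]

# Mix in noise at a target SNR to create a noisy training example.
noise = np.random.randn(len(reverberant))
snr_db = 10.0
noise *= np.linalg.norm(reverberant) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
noisy = reverberant + noise

sf.write("training_example.wav", noisy, fs)
```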

Advanced Room Acoustic Simulations

Run state-of-the-art, full-bandwidth wave-based acoustic simulations or hybrid wave/geometrical simulations at scale. Capture real-world acoustic behavior, including details like phase and diffraction, thanks to Treble’s proprietary acoustic simulation technology.
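One classical quantity often used to reason about why hybrid wave/geometrical approaches make sense is the Schroeder frequency, which estimates where a room transitions from modal, wave-dominated behavior to statistically dense reflections. The short sketch below uses illustrative room values and is not tied to any specific Treble simulation.

```python
# Schroeder frequency: f_s = 2000 * sqrt(RT60 / V).
# Below roughly this frequency, individual room modes dominate and wave-based
# simulation matters most; above it, geometrical approximations are typically adequate.
import math

volume = 120.0   # room volume in cubic metres (illustrative)
rt60 = 0.6       # reverberation time in seconds (illustrative)

schroeder_hz = 2000.0 * math.sqrt(rt60 / volume)
print(f"Schroeder frequency: {schroeder_hz:.0f} Hz")
```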

Audio Device Virtual Prototyping

Replace physical prototypes with digital twins. Analyze microphone array performance and optimize designs using an entirely virtual workflow. Optimize loudspeaker designs and study the interaction between loudspeakers and space.
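As a small, self-contained example of the kind of analysis a virtual prototyping workflow supports, the sketch below computes the free-field directivity of a uniform linear microphone array with delay-and-sum beamforming. The array geometry, analysis frequency, and steering angle are illustrative assumptions, not a specific product design.

```python
# Delay-and-sum beam pattern of a uniform linear microphone array (free field).
import numpy as np

c = 343.0         # speed of sound in m/s
n_mics = 6        # number of microphones
spacing = 0.02    # element spacing in metres
freq = 2000.0     # analysis frequency in Hz
steer_deg = 0.0   # look direction (broadside)

mic_x = np.arange(n_mics) * spacing
angles = np.radians(np.linspace(-90, 90, 361))

k = 2 * np.pi * freq / c
# Steering weights compensate the propagation delay toward the look direction.
weights = np.exp(1j * k * mic_x * np.sin(np.radians(steer_deg))) / n_mics
# Array response: weighted sum of plane-wave phases across incidence angles.
response = np.abs(weights @ np.exp(-1j * k * np.outer(mic_x, np.sin(angles))))
response_db = 20 * np.log10(np.maximum(response, 1e-6))

print(f"peak response {response_db.max():.1f} dB at "
      f"{np.degrees(angles[response_db.argmax()]):.0f} degrees")
```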

Parametric Studies

Easily set up parametric studies, sweeping across parameters of interest such as room geometries, sound source/receiver configurations, material configurations, etc. Run optimization projects and perform uncertainty quantification.
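A parametric study of this kind typically reduces to iterating over a grid of parameter combinations and submitting one simulation per combination. The sketch below shows that structure; `run_simulation` is a hypothetical placeholder for whatever simulation call your workflow uses, and the swept values are illustrative.

```python
# Minimal parameter-sweep scaffold: one simulation configuration per combination.
from itertools import product

room_lengths = [4.0, 6.0, 8.0]        # metres
absorption_coeffs = [0.1, 0.3, 0.5]   # wall absorption coefficient
source_heights = [1.2, 1.5]           # metres

jobs = []
for length, alpha, height in product(room_lengths, absorption_coeffs, source_heights):
    config = {
        "room": {"length": length, "width": 5.0, "height": 2.8},
        "wall_absorption": alpha,
        "source": {"position": (1.0, 2.0, height)},
    }
    jobs.append(config)
    # run_simulation(config)  # placeholder: submit to the simulation backend

print(f"{len(jobs)} simulation configurations prepared for the sweep")
```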

Spatial Audio

Accurately simulate ultra-high-order ambisonic spatial room impulse responses at scale for perceptually authentic spatial audio experiences. Extract parameters to drive real-time audio rendering engines and enable physics-driven audio in your virtual and augmented worlds.
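One reason ultra-high-order ambisonics is demanding at scale is that an order-N signal carries (N + 1)² channels. The short sketch below illustrates that channel count and a first-order encoding in the ACN/SN3D convention; the source angles are illustrative, and the convention is one common choice rather than anything Treble-specific.

```python
# Ambisonic channel count grows quadratically with order: (N + 1)^2 channels.
import numpy as np

for order in (1, 4, 8, 16):
    print(f"order {order:2d}: {(order + 1) ** 2:3d} ambisonic channels")

# First-order encoding gains (ACN channel order W, Y, Z, X; SN3D normalisation)
# for a plane wave arriving from the given azimuth/elevation.
azimuth, elevation = np.radians(30.0), np.radians(10.0)
gains = np.array([
    1.0,                                  # W
    np.sin(azimuth) * np.cos(elevation),  # Y
    np.sin(elevation),                    # Z
    np.cos(azimuth) * np.cos(elevation),  # X
])
print("first-order encoding gains:", np.round(gains, 3))
```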

Power your Solution with Treble

Integrate Treble’s simulation engine into your own software or hardware products, e.g., loudspeaker modeling software or virtual-world engines.
