17 October 2023

On-demand: Treble SDK Launch Webinar

Register your interest in the Treble SDK

Please fill in this form to register your interest in our Treble SDK and join our waiting list for a free trial. We will get back to you as soon as possible to discuss your specific needs.

The Treble SDK

The Treble SDK offers an easy-to-use programmatic interface to Treble’s cloud-based sound simulation engine. Because it is an SDK, its potential audio and acoustics applications are virtually unlimited, bounded only by the imagination of its users. Treble’s powerful simulation engine allows for accurate simulations of sound in complex environments, including the modeling of audio devices, complex materials, and complex acoustic phenomena such as diffraction and phase. The SDK is packed with powerful pre- and post-processing functionality that streamlines day-to-day work and enables simulation-based workflows that were previously impossible. It lets engineers generate vast amounts of hyper-realistic synthetic audio data to train audio machine learning models, engage in virtual prototyping of audio products, conduct efficient parametric studies, and much more, all through an easy-to-use Python-based interface.
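
As a rough illustration of what such a Python-based workflow can look like, the sketch below defines a room, places a source and a receiver, runs a cloud simulation, and downloads the result. It is a hypothetical sketch only: the module name treble_sdk and every call in it are assumed placeholders for illustration, not the actual Treble SDK API.

# Hypothetical sketch only: the module name and all calls below are assumptions,
# not the actual Treble SDK API.
import treble_sdk as tsdk  # hypothetical import name

client = tsdk.Client(api_key="YOUR_API_KEY")         # authenticate against the cloud engine
room = client.import_geometry("meeting_room.obj")    # load a 3D model of the space
room.assign_material(surface="walls", material="painted_concrete")

simulation = client.create_simulation(
    geometry=room,
    sources=[tsdk.Source(position=(1.0, 2.0, 1.5))],
    receivers=[tsdk.Receiver(position=(3.5, 2.0, 1.2))],
    frequency_range=(20, 20_000),                     # full audible bandwidth
)
simulation.run()                                      # executes on Treble's cloud
impulse_response = simulation.download_results()      # e.g. simulated room impulse responses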

Synthetic Audio Data Generation

Create custom, high-quality synthetic datasets of sound in space. Say goodbye to collecting data through measurements or simple simulations and instead create much larger datasets of rich, realistic audio scenes with ease. Train and test speech enhancement, echo cancellation, source localization, noise suppression, de-reverberation, blind room estimation, and any other audio machine learning model.
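
As a generic illustration of how simulated room impulse responses are typically turned into training examples, the sketch below convolves clean speech with a simulated impulse response and mixes in noise at a target signal-to-noise ratio. It uses only NumPy, SciPy, and the soundfile package; the file names are placeholders, and the impulse response would in practice come from a simulation tool such as the Treble SDK.

# Generic data-generation sketch (file names are placeholders, mono audio assumed).
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

speech, sr = sf.read("clean_speech.wav")   # dry (anechoic) speech signal
rir, _ = sf.read("simulated_rir.wav")      # simulated room impulse response

# "Place" the talker in the simulated room by convolving the speech with the RIR.
reverberant = fftconvolve(speech, rir)[: len(speech)]

# Mix in noise at a chosen signal-to-noise ratio (here 10 dB).
noise = np.random.randn(len(reverberant))
snr_db = 10.0
scale = np.std(reverberant) / (np.std(noise) * 10 ** (snr_db / 20))
noisy = reverberant + scale * noise

sf.write("training_example.wav", noisy, sr)  # input for e.g. de-reverberation or noise-suppression models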

Advanced Room Acoustic Simulations

Run state-of-the-art, full-bandwidth wave-based acoustic simulations or hybrid wave-based/geometrical simulations at scale. Capture real-world acoustic behavior, including details such as phase and diffraction, thanks to Treble’s proprietary acoustic simulation technology.

Audio Device Virtual Prototyping

Replace physical prototypes with digital twins. Analyze microphone array performance and optimize designs using an entirely virtual workflow. Optimize loudspeaker designs and study the interaction between loudspeakers and space.
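
One common analysis step in such a virtual workflow is evaluating the beam pattern of a microphone array. The sketch below computes the free-field response of a uniform linear array with simple delay-and-sum weights; it is a generic NumPy illustration, and in a Treble-based workflow the per-microphone responses would instead come from simulation, including the device body and the surrounding space.

# Generic beam-pattern sketch for a 6-element uniform linear array (ideal free field).
import numpy as np

c = 343.0                   # speed of sound (m/s)
f = 2000.0                  # analysis frequency (Hz)
n_mics = 6
spacing = 0.03              # 3 cm element spacing
positions = np.arange(n_mics) * spacing

angles = np.deg2rad(np.linspace(-90, 90, 361))
# Plane-wave steering vectors for each arrival angle.
delays = positions[None, :] * np.sin(angles)[:, None] / c
steering = np.exp(-2j * np.pi * f * delays)

weights = np.ones(n_mics) / n_mics                 # delay-and-sum, steered broadside
pattern = np.abs(steering @ weights)               # array response vs. angle
pattern_db = 20 * np.log10(np.maximum(pattern, 1e-12) / pattern.max())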

Parametric Studies

Easily set up parametric studies, sweeping across parameters of interest such as room geometries, sound source/receiver configurations, material configurations, etc. Run optimization projects and perform uncertainty quantification.
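
The sketch below shows the shape of such a sweep: every combination of room and absorption configuration is enumerated and evaluated. As a self-contained stand-in for a full simulation, each grid point is scored with the Sabine reverberation formula, RT60 ≈ 0.161·V / (S·α), where V is the room volume, S its surface area, and α the average absorption coefficient; in an SDK workflow each point would instead be submitted as a cloud simulation. The parameter values are arbitrary examples.

# Generic parametric-sweep sketch; the Sabine formula stands in for a full simulation.
from itertools import product

rooms = [(40.0, 70.0), (80.0, 110.0), (160.0, 180.0)]  # (volume m^3, surface area m^2)
absorptions = [0.1, 0.3, 0.5]                           # average absorption coefficient

results = []
for (volume, area), alpha in product(rooms, absorptions):
    rt60 = 0.161 * volume / (area * alpha)   # Sabine estimate in seconds
    results.append({"volume": volume, "absorption": alpha, "rt60": rt60})

for row in results:
    print(f"V={row['volume']:>6.1f} m^3  alpha={row['absorption']:.1f}  RT60={row['rt60']:.2f} s")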

Spatial Audio

Accurately simulate ultra-high-order Ambisonics spatial room impulse responses at scale for perceptually authentic spatial audio experiences. Extract parameters to drive real-time audio rendering engines and enable physics-driven audio in your virtual and augmented worlds.

Power your Solution with Treble

Integrate Treble’s simulation engine into third-party software or hardware, e.g., loudspeaker modeling software or virtual world engines.
