12 August 2025

Introducing FFASR Leaderboard with Hugging Face

In this session, Eric Bezzam from Hugging Face will introduce the leaderboard, walk through its structure, and demonstrate how it can be used to evaluate and compare models under more realistic acoustic conditions.

Treble will then take you behind the scenes, exploring the data that powers the leaderboard and the levels of acoustic complexity required to move from controlled datasets to real-world performance. If you are building or evaluating speech and audio systems, this session will give you a clearer view of what current benchmarks miss and how that is changing.

Go Beyond Near Field Data

Join us on June 11th for the launch of the Far Field ASR Leaderboard (FFASR) on Hugging Face, a new benchmark designed to move audio AI evaluation beyond near-field assumptions.

June 11th, 2026
- US Time slot: 9:00am PDT / 12:00pm EDT
- EU Time slot: 9:00am UTC / 10:00am CET

Sign up to secure your place.

On the agenda for this webinar:

1. Hugging Face: Introducing the Far Field ASR Leaderboard (FFASR)

Eric Bezzam presents the leaderboard, including its structure, key features, and how it can be used to evaluate and compare ASR models under more realistic conditions.
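ASR leaderboards typically rank models by word error rate (WER). The exact metric and weighting FFASR uses are not described here, but as background, a minimal WER computation (word-level edit distance divided by reference length) looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between the processed prefix of ref
    # and the first j words of hyp (classic single-row Levenshtein).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # value of d[i-1][j-1] for the inner loop
        d[0] = i             # distance between ref[:i] and empty hyp
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(
                d[j] + 1,            # deletion
                d[j - 1] + 1,        # insertion
                prev + (r != h),     # substitution (free if words match)
            )
            prev = cur
    return d[-1] / len(ref)

print(wer("a b c", "a x c"))  # one substitution / three reference words → 0.333…
```

In practice, leaderboard pipelines also normalize text (casing, punctuation, number formatting) before scoring, since normalization choices can shift WER noticeably.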

2. Treble: Why Far Field Data Matters

An inside look at the data powering FFASR and why far-field audio is critical for building and evaluating robust audio AI systems. We will cover how this data is created and the level of acoustic complexity required to reflect real-world performance.
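As a rough illustration of what "far-field" means in data terms (this is not Treble's actual pipeline, which relies on physics-based acoustic simulation), a near-field signal can be pushed toward far-field conditions by convolving it with a room impulse response (RIR) and adding a noise floor. The sketch below uses a synthetic exponential-decay RIR as a stand-in for a measured or simulated one; the signal, `rt60`, and `snr_db` values are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000  # sample rate in Hz

# Stand-in for a "dry" near-field recording: one second of noise.
dry = rng.standard_normal(fs)

# Synthetic RIR: unit direct path followed by an exponentially
# decaying noise tail, parameterized by an assumed RT60.
rt60 = 0.5  # reverberation time in seconds (assumed)
t = np.arange(int(rt60 * fs)) / fs
rir = rng.standard_normal(t.size) * np.exp(-6.9 * t / rt60)
rir[0] = 1.0  # direct-path impulse

# Far-field version: reverberate, then add noise at a target SNR.
wet = np.convolve(dry, rir)[: dry.size]
snr_db = 10.0  # assumed signal-to-noise ratio
noise = rng.standard_normal(wet.size)
noise *= np.linalg.norm(wet) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
far_field = wet + noise
```

Real far-field data adds layers this sketch ignores: frequency-dependent absorption, scattering, source directivity, microphone arrays, and moving talkers, which is exactly the acoustic complexity the session covers.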

3. Q&A

About Hugging Face

Hugging Face has become a central platform for the machine learning community, setting the standard for open, collaborative development in AI. Known for its model hub, evaluation tools, and commitment to open benchmarks, Hugging Face plays a key role in shaping how models are shared, tested, and improved across the ecosystem. The introduction of the Far Field ASR Leaderboard continues this direction, bringing more realistic evaluation practices into the open.

Meet the speakers

Audio ML Engineer - Hugging Face

Dr. Eric Bezzam

Eric Bezzam is an audio ML engineer at Hugging Face. He received his PhD from EPFL, and previously worked at Snips, Sonos, DSP Concepts, and Fraunhofer IDMT. He was one of the main developers of pyroomacoustics.

Senior Product Manager for the Treble SDK

Dr. Daniel Gert Nielsen

Dr. Daniel Gert Nielsen is a specialist in numerical vibro-acoustics, with a PhD focused on loudspeaker modeling and optimization. His expertise spans acoustic simulation for communication devices and synthetic data generation for machine learning applications. With a strong background in numerical methods and audio technology, he plays a key role in shaping advanced acoustic modeling solutions at Treble.

