DEV Community

beefed.ai

Posted on • Originally published at beefed.ai

Microphone Array Selection for Large Boardrooms

  • How Good is Good Enough? Speech-intelligibility targets and design criteria
  • Which Array Topology Actually Works in Large Boardrooms?
  • Where to Place Mics and Why the Room Changes Everything
  • Why DSP, Beamforming, and Echo Cancellation Demand Hands-on Tuning
  • Practical Application: Field checklist and step-by-step tuning protocol

Poor remote intelligibility in a large boardroom almost always traces back to the microphones and the room — not the network. Get the array topology, placement, and DSP right and remote participants will hear consonants, overlap, and nuance; get any of those wrong and meetings become a guessing game.

Large-boardroom audio problems usually present as specific symptoms: remote attendees asking participants to repeat themselves, far-end audio that "washes out" consonants, double-talk breakup, or AEC (acoustic echo cancellation) artifacts during interruptions. These symptoms trace back to three root causes integrators confront every day: the room’s acoustics (reverberation and noise), the mic topology and placement, and how the DSP/beamformer/AEC chain is configured and sequenced.

How Good is Good Enough? Speech-intelligibility targets and design criteria

Target metrics decide design choices. Use objective measures early — subjective impressions will lie.

  • Aim for STI/STIPA targets rather than vague “it sounds OK.” The IEC 60268-16 STI model maps intelligibility to a 0–1 scale; practical categories are: bad 0–0.3, poor 0.3–0.45, fair 0.45–0.6, good 0.6–0.75, and excellent >0.75. For corporate boardrooms plan for good to excellent where possible: a pragmatic target is STIPA ≥ 0.6 for reliable remote participation and STIPA ≥ 0.75 for rooms that need broadcast‑quality speech.

  • Control reverberation: specify RT60 design targets in the RFP. Small-to-medium meeting rooms should typically be in the 0.4–0.6 s band; video-conferencing–optimized rooms benefit from tighter targets (≈0.3–0.4 s) for the highest perceived far-end clarity. The Teams audio test guidance used for conferencing validation commonly works with reverberation in the 0.4–0.8 s range during stress testing, and vendors use a ~0.4 s RT60 when claiming STIPA ratings.

  • Early-energy clarity (C50) correlates with consonant audibility. A C50 above +3 dB is a realistic engineering goal for speech; professional video-conferencing spaces aim higher (C50 around +6 dB in some published recommendations) when feasible. Measure C50 averaged across the 500 Hz–4 kHz speech bands during the survey.

  • Background noise and SNR: define a steady-state background-noise limit (A-weighted) in the spec. Typical conferencing test conditions use 30–40 dBA ambient as a baseline; a lower noise floor yields both better STI and more stable AEC operation. Quote the required test conditions explicitly in any acceptance test plan.

Important: Require vendor STIPA results that list the test conditions (RT60, ambient noise, talker SPL, mic mounting height). A STIPA number without test conditions is not actionable.
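The targets above are easy to encode as survey helpers. A minimal sketch (the function names are illustrative, not from any measurement library): one helper maps an STI/STIPA value onto the IEC 60268-16 qualification bands quoted earlier, the other computes C50 from a measured impulse response by splitting energy at 50 ms.

```python
import math

def sti_band(sti):
    """Map an STI/STIPA value onto the IEC 60268-16 qualification bands."""
    bands = [(0.30, "bad"), (0.45, "poor"), (0.60, "fair"), (0.75, "good")]
    for upper, label in bands:
        if sti < upper:
            return label
    return "excellent"

def c50_db(ir, fs):
    """Clarity C50: energy in the first 50 ms of the impulse response
    over the energy arriving after 50 ms, expressed in dB."""
    split = int(0.050 * fs)  # 50 ms in samples
    early = sum(x * x for x in ir[:split])
    late = sum(x * x for x in ir[split:])
    return 10.0 * math.log10(early / late)
```

Run `c50_db` per octave band (500 Hz–4 kHz) and average, per the survey guidance above; a result above +3 dB meets the engineering goal stated earlier.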

Which Array Topology Actually Works in Large Boardrooms?

Topology choice (ceiling, table, boundary, lavalier, distributed) determines directivity, integration effort, and DSP demands. The following table summarizes the practical trade-offs you will weigh.

| Type | Typical use-case | Key pros | Key cons | DSP / integration notes |
|---|---|---|---|---|
| Ceiling beamforming arrays (beamforming microphones) | Large tables, clean aesthetics, fixed room layouts | Covers whole table area; unobtrusive; dynamic/steerable beams; Dante/AES67 ready on many models | Requires careful height/zone planning for long tables; some loss of near-field directness; ceiling noise sources (HVAC) matter | Onboard beamforming + per‑channel or per‑beam processing common; manufacturers publish coverage maps — validate with their STIPA test conditions |
| Table / linear arrays (multi‑element table mics) | Medium rooms, simple retrofit where ceiling work is hard | Close to talkers; predictable directivity; easier to wire to local DSP | Visible hardware; PD/maintenance on the table; can pick up table noise | Often paired with automixers and single AEC channel per array; coverage radius limited — plan spacing |
| Boundary (PZM) pickups | Small-to-medium tables, hybrid rooms | Low visual impact; good hemispherical pickup; 6 dB pressure advantage near boundary | Picks up table thumps and surface noise; less selective in reverberant rooms | Good when talkers remain seated and distance is small; combine with gating/mixing to reduce noise |
| Lavalier / close-talk | Presenters and high-stakes panels | Best SNR and direct voice capture; minimal reverb pickup | Management (battery/microphone hygiene); not practical for every participant | Use for presenters; design AEC to exclude lavalier-to-loudspeaker loops; supports voice‑lift with minimal echo |
| Distributed omni network (many small mics) | Large or irregular rooms | High spatial resolution; redundancy | Complex cabling; high channel counts; more DSP needed | Requires well‑designed mixing logic and per‑mic AEC strategy |

Examples: Sennheiser’s TeamConnect ceiling arrays advertise automatic, adaptive beamforming for roomwide coverage; Shure’s MXA line emphasizes steerable/automated coverage with integrated DSP; Yealink and other vendors publish STIPA/coverage figures tied to controlled RT60/noise test conditions — always confirm manufacturer test conditions against your room baseline.

Contrarian field insight: ceiling arrays are not a universal win. In long, narrow boardrooms multiple ceiling arrays are needed to avoid low direct-to-reverb ratio at the ends; a centrally mounted ceiling array intended for a 10‑seat table often under‑performs at the far seat unless it has sufficient elements and the DSP is configured for multiple, overlapping coverage lobes.

Where to Place Mics and Why the Room Changes Everything

Physical placement is not guesswork — it’s engineering. Document decisions with coverage maps and acceptance-test coordinates.

  • Height and spacing rules:

    • Use manufacturer coverage tools and CAD templates first. Shure and Sennheiser provide software and datasheets indicating the effective coverage area for each model; typical ceiling arrays are specified to cover spaces like a 30×30 ft area under specific RT60 and noise conditions.
    • For ceiling arrays, plan to place units so that each active seat lies within at least one beam’s optimal pickup radius. Large boards often require multiple arrays spaced along the table every 3–6 m depending on ceiling height and array aperture.
    • For table and boundary mics, maintain talker-to-mic nominal distances below ~1.0–1.5 m to preserve gain-before-feedback and SNR; boundary mics gain ~6 dB from the boundary effect but are sensitive to table contact noise.
  • Avoid acoustic pitfalls:

    • Don’t place arrays directly above HVAC diffusers, projectors, or loudspeaker clusters. Mechanical noise and direct speaker radiation reduce the AEC’s ability to converge and introduce pumping artifacts.
    • Avoid placing mics directly under or too close to in-room loudspeakers; where unavoidable, use physical baffling, directivity steering, and per-channel AEC strategies.
  • Line-of-sight is metaphorical: early reflections matter more than visual sightline. Aim to manage major early reflections (first 50 ms) by adding targeted absorption/diffusion so the microphone “hears” a higher direct‑to‑reverberant ratio — this lifts C50 and STI measurably. Measure RT60 and C50 at the planned talker positions before final DSP tuning.
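A quick back-of-envelope check for the direct-to-reverberant question above is the critical distance: the distance at which direct and reverberant energy are equal. Beyond it, the mic hears mostly the room, not the talker. A sketch using the standard approximation d_c ≈ 0.057·√(Q·V/RT60), with Q the directivity factor (Q = 1 for an omni element; beamforming raises effective Q):

```python
import math

def critical_distance(volume_m3, rt60_s, q=1.0):
    """Distance (m) at which direct and reverberant energy are equal.
    Standard approximation: d_c ~= 0.057 * sqrt(Q * V / RT60).
    q is the directivity factor of the source/mic combination."""
    return 0.057 * math.sqrt(q * volume_m3 / rt60_s)
```

For example, a 12 × 6 × 3 m boardroom (216 m³) at RT60 = 0.5 s gives a critical distance of roughly 1.2 m for an omni pickup — a concrete reason why the far seats of a long table need either more directivity (higher Q via beamforming) or another array.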

Why DSP, Beamforming, and Echo Cancellation Demand Hands-on Tuning

Beamforming and AEC are powerful, but they interact and require deliberate configuration.

  • Beamforming basics and trade-offs: arrays form directional lobes by delaying and weighting individual elements (delay‑and‑sum is the simplest practical implementation). A wider aperture and more elements reduce beamwidth (narrower beam) at higher frequencies, but aperture and element spacing set the frequency range where the beam behaves as intended (aperture-to-wavelength relation). Array geometry also determines sidelobe behavior, which affects how much reflected energy leaks into the beam. Use aperture math when planning element counts vs. beamwidth.

  • Adaptive vs. fixed/steerable beams:

    • Adaptive (automatic) beamforming tracks active talkers and can simplify coverage in dynamic meetings; validate its behavior with multiple simultaneous speakers.
    • Steerable coverage uses preset, explicit lobes/zones for deterministic routing (voice lift, AV switching). Prefer steerable zones when you need predictable matrix outputs for voice-lift or camera‑vision systems.
  • AEC realities and best-practice tuning:

    • The adaptive filter tail length is a critical parameter. In practice a tail length beyond ~150–250 ms has diminishing returns and can degrade adaptive stability; many industry AEC solutions default to ~200 ms as a practical compromise between modeling the echo path and stable convergence. Measure and tune tail length per room size and system latency.
    • AEC is far more robust when microphone inputs present a healthy speech signal (peaks ~-6 to -3 dBFS) and when a clean reference (the output feeding the far‑end loudspeakers) is available to the processor. QSC’s AEC guidance and vendor papers stress correct input levels and the importance of a reliable double‑talk detector.
    • Per‑channel AEC versus post‑mix AEC: performing AEC on each microphone channel before mixing (per‑channel) yields better echo suppression in multi‑mic arrays and preserves mixing fidelity; a single post‑mix AEC can work but often leaves residual echoes because multiple echo paths combine into a more complex impulse response. Modern ceiling arrays and DSPs support per‑beam or per‑channel AEC for cleaner double‑talk performance.
  • Measure what matters: track ERLE (echo return loss enhancement) and subjective double‑talk behavior. A practical AEC goal is substantial attenuation during far‑end only speech (ERLE > ~40 dB is commonly cited as “very good” in lab conditions), but verify performance under realistic talker and noise conditions — vendor lab ERLE figures rarely reflect real rooms.
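The aperture math mentioned above reduces to two rules of thumb for a uniform line array: broadside -3 dB beamwidth scales as roughly 0.886·λ/D (valid when aperture D is large relative to wavelength λ), and element spacing must stay at or below λ/2 to avoid grating lobes. A sketch (constants and function names are illustrative):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 C

def beamwidth_deg(aperture_m, freq_hz):
    """Approximate -3 dB broadside beamwidth of a uniform line array:
    theta ~= 0.886 * lambda / D (radians), valid when D >> lambda."""
    lam = SPEED_OF_SOUND / freq_hz
    return math.degrees(0.886 * lam / aperture_m)

def max_unaliased_freq(spacing_m):
    """Highest frequency before grating lobes appear: d <= lambda/2,
    i.e. f_max = c / (2 * d)."""
    return SPEED_OF_SOUND / (2.0 * spacing_m)
```

A 0.6 m aperture gives roughly a 14–15° beam at 2 kHz but a far wider one at 500 Hz, and 4 cm element spacing keeps the array alias-free up to about 4.3 kHz — numbers worth checking against vendor element counts before trusting a coverage map.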
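ERLE itself is a simple power ratio, which makes it easy to compute from logged far-end-only captures rather than trusting vendor lab figures. A minimal sketch, assuming you can record the raw mic (echo) signal and the post-AEC residual over the same window:

```python
import math

def erle_db(mic_signal, residual_signal):
    """Echo Return Loss Enhancement during far-end-only speech:
    mean power of the mic (echo) signal over mean power of the
    AEC residual, in dB. Higher is better; ~40 dB is 'very good'."""
    p_mic = sum(x * x for x in mic_signal) / len(mic_signal)
    p_res = sum(x * x for x in residual_signal) / len(residual_signal)
    return 10.0 * math.log10(p_mic / p_res)
```

Log this at several loudspeaker levels and seat positions during acceptance; a single lab-style number under one condition hides convergence problems.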

Practical Application: Field checklist and step-by-step tuning protocol

This is the working protocol used on acceptance visits. Use it as an executable checklist in your project plan.

  1. Pre‑install survey (document everything)

    • Measure RT60 (500/1k/2k/4k bands), C50, and ambient LAeq at each planned seating position. Record HVAC and projector noise spectra. Use the measured values to set target STIPA test conditions.
    • Produce a coverage sketch (top view + ceiling grid) showing proposed mic locations, loudspeaker locations, and cable routes. Include PoE budget assumptions (802.3af/at/bt).
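For the RT60 measurement in the survey step, the standard approach is Schroeder backward integration of a measured impulse response followed by a T20 fit (time from -5 dB to -25 dB on the decay curve, extrapolated by 3). A self-contained sketch, assuming a normalized impulse response list and sample rate:

```python
import math

def rt60_t20(ir, fs):
    """Estimate RT60 from an impulse response: Schroeder backward
    integration, then a T20 fit (-5 dB to -25 dB, extrapolated x3)."""
    energy = [x * x for x in ir]
    total = sum(energy)
    # Schroeder decay curve: remaining energy at each sample, in dB
    decay, running = [], total
    for e in energy:
        decay.append(10.0 * math.log10(running / total))
        running -= e
    # First crossings of -5 dB and -25 dB, converted to seconds
    t5 = next(i for i, d in enumerate(decay) if d <= -5.0) / fs
    t25 = next(i for i, d in enumerate(decay) if d <= -25.0) / fs
    return 3.0 * (t25 - t5)
```

In practice you would band-filter the impulse response first (500 Hz–4 kHz octaves, as listed above) and report RT60 per band.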
  2. Procurement / RFP requirements (must-haves for vendor responses)

    • STIPA test report produced by the vendor for a room of similar volume and RT60 with stated test conditions (RT60, ambient noise, talker SPL) and measurement positions.
    • Supported network and control protocols: require Dante/AES67 outputs, 802.1X support, and management API/remote monitoring. Ask for documented QoS / PTP recommendations for network switches (or specify Dante best‑practices).
    • Power: specify PoE class (e.g., IEEE 802.3af Class 3 or 802.3at if device requires it) and total PoE budget.
    • Security and lifecycle: firmware update policy, remote-management tool, and CVE/patch disclosure schedule.
    • Physical: plenum rating, mounting accessories, acoustic grilles, and warranty / calibration service.
  3. Install and baseline configuration

    • Follow manufacturer CAD templates for mounting; avoid HVAC diffusers and speakers in the immediate element footprint. Confirm actual microphone height vs. design.
    • Configure the audio network: place Dante/AES67 devices onto a dedicated AV VLAN, enable QoS for audio flows, and ensure PTP stability or Dante clocking as documented by Audinate.
    • Macro DSP ordering: set input gains first, then routing, then AEC, then NR/AGC, then EQ. This ordering prevents chasing artifacts introduced by later stages.
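The PoE budget assumption from the survey step can be sanity-checked against the switch at install time. A minimal sketch using the standard IEEE per-class maximum draws at the powered device (the 20% headroom figure is an assumption, not a standard):

```python
# Maximum power at the powered device per IEEE PoE class (watts)
POE_CLASS_W = {
    "802.3af": 12.95,
    "802.3at": 25.5,
    "802.3bt-type3": 51.0,
    "802.3bt-type4": 71.3,
}

def poe_budget_ok(devices, switch_budget_w, headroom=0.20):
    """devices: list of (name, poe_class) tuples. Returns (ok, total_draw_w).
    Reserves `headroom` fraction of the budget for safety margin."""
    draw = sum(POE_CLASS_W[cls] for _, cls in devices)
    return draw * (1.0 + headroom) <= switch_budget_w, draw
```

Three 802.3at ceiling arrays draw 76.5 W worst-case; with 20% headroom that is still comfortable on a 370 W switch, but tight on a 120 W one once cameras and control panels join the same VLAN.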
  4. DSP tuning step-by-step

    • Set microphone analog/digital gains so speech peaks at roughly -6 to -3 dBFS on the DSP meters; ensure meters show consistent speech energy across coverage areas. QSC and other AEC guidance recommends healthy input levels for reliable modeling.
    • Select AEC reference(s): route the actual loudspeaker mix that the far end hears as the AEC reference. For multi‑mic systems, prefer per‑channel AEC or one AEC per array with a shared reference where supported.
    • Initial AEC settings: start with a moderate tail (~150–250 ms), conservative adaptation speed, and minimal NLP aggressiveness; evaluate double‑talk and then iterate toward more aggressive suppression only if artifacts remain acceptable. Record ERLE and subjective double‑talk scores.
    • Enable noise reduction and comfort‑noise features; tune NR to lower steady sources (HVAC) while preserving consonants and sibilance. Use narrow notches for tonal projector or fan noise rather than broad cuts.
    • Apply subtle EQ to improve speech mid‑band clarity rather than broadband boosts; confirm with STIPA tests and listen tests. Document all EQ presets as part of the handover.
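The gain-staging target in the first tuning step (-6 to -3 dBFS speech peaks) is worth verifying numerically from captured samples rather than by eye on a meter. A sketch, assuming samples normalized to the -1.0..1.0 range:

```python
import math

def peak_dbfs(samples):
    """Peak level of normalized samples (-1.0..1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak) if peak > 0 else float("-inf")

def gain_in_window(samples, lo=-6.0, hi=-3.0):
    """True if speech peaks land in the recommended -6..-3 dBFS window."""
    return lo <= peak_dbfs(samples) <= hi
```

Run it on a short capture from each coverage zone with a talker at normal level; zones that fail low usually indicate a seat outside the planned pickup radius, not a gain knob problem.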
  5. Acceptance testing (executable)

    • Perform STIPA at each acceptance seat under the following conditions (examples adopted from vendor test practice):
      • Test condition: talker at “presenter” position at 62–65 dB SPL, ambient noise at operational level (e.g., 30–40 dBA), and RT60 as measured. Record STIPA at a minimum of five representative positions.
      • Success criteria (example): STIPA ≥ 0.6 at all seating positions; STIPA ≥ 0.75 for high-end rooms. Require vendors to provide the raw measurement files and the test conditions.
    • Perform double‑talk tests with real far‑end and near‑end participants; confirm no audible echo or collapse during interruptions and that AEC does not clip near‑end speech. Log ERLE snapshots and subjective pass/fail.
    • Document AEC convergence time, any residual echo artifacts, and NR side effects. Retain DSP presets as immutable deliverables for future maintenance.
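The STIPA success criterion above is an all-positions gate, so the acceptance script should report which seats fail, not just a pass/fail. A minimal sketch of that check (names are illustrative):

```python
def stipa_acceptance(readings, target=0.60):
    """readings: dict mapping seat position -> measured STIPA.
    Passes only if every position meets the target; returns the
    failing positions so the report shows where to re-measure."""
    failures = {pos: val for pos, val in readings.items() if val < target}
    return len(failures) == 0, failures
```

Raise `target` to 0.75 for the high-end-room criterion, and attach the raw measurement files alongside the dict values, as required of vendors above.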
  6. Handover and operations

    • Deliver a concise operations document with: STIPA and RT60 results, DSP presets exported, microphone and PoE map, and a short troubleshooting guide for common site issues (HVAC spikes, firmware rollback steps).

Practical sample acceptance checklist (compact)

- Pre-install survey report attached (RT60, C50, ambient LAeq)
- Delivered hardware: model, firmware, PoE class
- STIPA: measured at N positions; all >= 0.60 (attach logs)
- AEC: ERLE during Far‑End only >= 40 dB (attach logs)
- Double‑talk test: subjective pass (no echo, reasonable artifacts)
- Network: Dante/AES67 validated; PTP stable; QoS set
- Documentation: DSP presets, CAD, test logs, support contacts

Final engineer’s note

Microphone arrays and DSP are only as good as the acoustic baseline and the acceptance test that validates them. Require objective metrics in the RFP, demand measurement logs with test conditions, and make STI/STIPA and measured AEC behavior non‑negotiable acceptance items. When STIPA, RT60, and documented AEC performance are all in the green, the far end will stop asking people to repeat themselves and the room will do the job the hardware was bought to perform.

Sources:
IEC 60268-16 - Standard defining the STI/STIPA methodology and typical application guidance.

STI and STIPA (Rational Acoustics) - Practical interpretation of STI bands and real‑world measurement notes.

Beamforming Microphones: Speech Intelligibility (Biamp blog) - Explanation of STI and field trade-offs when using beamforming arrays.

Shure — Understanding the MXA920 (white paper) - Practical details on steerable coverage, per‑channel DSP, and per‑channel AEC benefits for ceiling arrays.

Sennheiser TeamConnect product resources - Product documentation and datasheet details for a widely‑used ceiling beamforming array (coverage, capsule count, mounting guidance).

Q-SYS Acoustic Echo Cancellation White Paper (QSC) - Deep dive on AEC behavior, tail length, ERLE, double‑talk handling and recommended tuning practices.

Microsoft Teams Rooms certified systems and peripherals (Microsoft Learn) - Guidance on Teams certification and test conditions used in vendor validation and certification.

beamwidth2ap (MathWorks documentation) - Aperture/beamwidth relationships used to size arrays and understand frequency/beam trade‑offs.

Yealink CM20 (product page / datasheet example) - Example vendor STIPA/coverage claims and explicit test conditions used in vendor datasheets (useful RFP comparison model).

Frequency range and microphone-distribution FAQ (GFaI / BeBeC) - Engineering notes on array aperture, element distribution and practical design trade‑offs.

Assessing the Acoustic Characteristics of Rooms (tutorial, PMC/NCBI) - Background on C50, early reflections, and clarity metrics used in speech acoustics.

Audinate — Dante, AES67 and ST 2110 white paper - Guidance on AoIP interoperability, Dante best‑practices, and AES67 considerations for audio networks.
