Hi Gijsbert Huijsen,
The behavior you're observing, where Azure Speech Recognition returns unrelated insurance-related text in response to background noise or chatter, is a result of the service attempting to interpret unclear or unintelligible audio using learned language patterns. This can produce transcriptions of phrases that were never actually spoken, which we understand can be confusing for users. The service does not currently offer a built-in option to return an empty result for unrecognized input, but there are several ways to improve accuracy.

First, we recommend reviewing the acoustic environment: keep it as quiet as possible, use high-quality microphones, and position them so they pick up as little external noise as possible.

Second, you can tune recognition behavior through SpeechConfig properties. Increasing the Initial Silence Timeout lets the service wait longer for actual speech before it gives up and returns a result, and adjusting the Segmentation Silence Timeout changes how pauses between phrases are handled, reducing the risk that background sounds are segmented and transcribed as speech on their own.

Finally, we suggest requesting word-level confidence scores or retrieving the NBest alternatives from the detailed recognition output, so your application can detect and filter out low-confidence transcriptions.
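To illustrate the timeout adjustments, here is a minimal sketch using the Python Speech SDK (azure-cognitiveservices-speech). The subscription key, region, and millisecond values are placeholders, and the property names should be verified against the SDK version you are using:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials - replace with your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# Wait longer for real speech before the service gives up and returns a result
# (value is in milliseconds).
speech_config.set_property(
    speechsdk.PropertyId.SpeechServiceConnection_InitialSilenceTimeoutMs, "10000")

# Require a longer pause before a phrase is segmented, so short bursts of
# background noise are less likely to be cut off and transcribed on their own.
speech_config.set_property(
    speechsdk.PropertyId.Speech_SegmentationSilenceTimeoutMs, "1000")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once_async().get()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
```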
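For confidence-based filtering, the sketch below requests the detailed output format and reads the phrase-level Confidence value from the NBest list in the JSON result. The 0.5 cutoff is only an illustrative threshold to tune against your own audio, and the JSON field names should be checked against the response your resource actually returns:

```python
import json
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# Request the detailed output format so the result includes the NBest list
# with a confidence score for each alternative.
speech_config.output_format = speechsdk.OutputFormat.Detailed

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once_async().get()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    detailed = json.loads(
        result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult))
    best = detailed.get("NBest", [{}])[0]
    confidence = best.get("Confidence", 0.0)

    # Treat low-confidence results as likely noise; 0.5 is an example threshold.
    if confidence < 0.5:
        print("Discarding low-confidence result:", result.text)
    else:
        print(f"Recognized ({confidence:.2f}):", result.text)
```

With this in place, your application can choose to ignore or flag transcriptions that fall below the threshold instead of passing them on to users.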