Lab Notes

Various personal weekend projects

Dec 17, 2023

Listening to flight details as aircraft pass overhead

Here are some notes on how I use my RTL-SDR ADS-B receiver, adsb2mqtt with an MQTT broker, and text-to-speech (TTS) to play details of nearby passing aircraft.

I live near a busy airport. I can see aircraft flying low enough to guess the airline and model. Wouldn't it be cool to have my computer tell me instead of guessing? It would also be nice to know the departure/arrival country and city.

Publishing MQTT messages with ADS-B details

As explained in Experimenting with RTL-SDR on NetBSD 10, I run the FlightAware-maintained dump1090-fa on a NetBSD 10 machine in an outdoor enclosure close to my 1090 MHz ADS-B antenna.

I run adsb2mqtt in a Docker container (on a Debian 12 server) that connects to dump1090-fa on the NetBSD machine, port 30003. The adsb2mqtt repo has details on the parameters that control which aircraft are processed (i.e., the ones you can see from your own location).
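
For reference, port 30003 is dump1090's SBS/BaseStation output: plain newline-delimited CSV, one line per decoded message. Here's a quick Python sketch to peek at what adsb2mqtt consumes ("netbsd-host" stands in for my NetBSD machine's hostname):

import socket

# Connect to dump1090-fa's SBS/BaseStation output (plain CSV over TCP).
with socket.create_connection(("netbsd-host", 30003)) as sock:
    buf = b""
    while data := sock.recv(4096):
        buf += data
        # Lines are newline-delimited; keep any partial line in the buffer.
        *lines, buf = buf.split(b"\n")
        for line in lines:
            # Fields include message type, ICAO hex code, callsign,
            # altitude, speed, and lat/lon, depending on message type.
            print(line.decode("ascii", errors="replace"))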

This setup has been stable for several months since I switched from running everything on an RPi3 (yes, .NET 7 works on a Raspberry Pi), but you can use anything to host dump1090-fa or run adsb2mqtt with .NET 7.

Ultimately, I'm now able to process ADS-B messages and publish them to my MQTT broker -- any MQTT client can subscribe to messages for locally passing aircraft.

Enhancing with flight and airline information

With an MQTT client subscribed to the ADS-B messages, I receive altitude and distance, latitude/longitude, speed, heading, and flight and aircraft (ICAO) numbers.
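
Subscribing is a few lines of Paho. A minimal sketch -- the broker host and topic filter are placeholders for my setup (check the adsb2mqtt repo for its actual topic layout):

import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # One JSON payload per aircraft update from adsb2mqtt.
    flight = json.loads(msg.payload)
    print(msg.topic, flight)

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("mqtt.host", 1883)
client.subscribe("ADSB/#")  # placeholder topic filter
client.loop_forever()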

To get more details such as departure/arrival city, aircraft type, and airline information, we need a database of that information. I use a combination of FlightAware's JSON files that ship with dump1090-fa (dump1090/public_html/db) and FlightAware's AeroAPI. AeroAPI has a free tier based on the number of requests, which is more than enough for my use.

I have this implemented in a Python class, AdsbSpeech, that I use in a Paho MQTT Python client (again, running in Docker). Its flight parameter is the JSON payload from the adsb2mqtt message.
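
The real class isn't worth reproducing in full, but here's a rough sketch of its shape. The payload field names and AeroAPI response handling are illustrative (the /flights endpoint and x-apikey header are from FlightAware's AeroAPI docs), and the dump1090-fa db lookup for aircraft type is omitted:

import requests

AEROAPI = "https://aeroapi.flightaware.com/aeroapi"

class AdsbSpeech:
    def __init__(self, api_key):
        self.session = requests.Session()
        # AeroAPI authenticates with an x-apikey header.
        self.session.headers["x-apikey"] = api_key

    def route(self, ident):
        # Departure/arrival cities for a flight ident such as "UAL123".
        r = self.session.get(f"{AEROAPI}/flights/{ident}")
        r.raise_for_status()
        flights = r.json().get("flights", [])
        if not flights:
            return None
        first = flights[0]
        return first["origin"].get("city"), first["destination"].get("city")

    def sentence(self, flight):
        # `flight` is the adsb2mqtt JSON payload; field names illustrative.
        ident = flight["flight"].strip()
        text = f"{ident}, {flight['altitude']} feet, heading {flight['track']} degrees."
        cities = self.route(ident)
        if cities:
            text += f" From {cities[0]} to {cities[1]}."
        return text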

With this, I have Python code that can generate a text sentence describing the flight. Using this in a Python MQTT client subscribed to adsb2mqtt messages, I can publish back an MQTT message so any MQTT client that can do TTS can speak that sentence.

As an optimization, I publish that message at most every 30 seconds, since ADS-B updates come in quickly. For an airliner passing over at ~5,000 ft MSL, that's about 2 or 3 messages total before it leaves my 2.3-nautical-mile radius filter.
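
The throttle itself is just a timestamp per aircraft, something like:

import time

INTERVAL = 30      # seconds between announcements per aircraft
last_spoken = {}   # ICAO hex code -> time of last published sentence

def should_publish(icao):
    # Let a sentence through only if this aircraft hasn't been
    # announced within the last INTERVAL seconds.
    now = time.monotonic()
    if now - last_spoken.get(icao, float("-inf")) >= INTERVAL:
        last_spoken[icao] = now
        return True
    return False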

Using Azure for TTS

Now that I can generate a sentence describing a passing flight, any TTS solution works. If you have Home Assistant Assist set up, that would probably work well (though I haven't tried it).

I use Azure Speech service to generate a wave file. It's cheap and easy to use, and I like their voices. Here's an abbreviated Python code snippet that generates a temporary wave file:

import os
from tempfile import NamedTemporaryFile
from azure.cognitiveservices.speech import SpeechConfig, SpeechSynthesizer
from azure.cognitiveservices.speech.audio import AudioOutputConfig

# The Azure Speech key and region come from the environment.
speech_config = SpeechConfig(subscription=os.environ['COGS_KEY'], region=os.environ['COGS_REGION'])

def create_wave(text):
    # Reserve a unique filename under the directory the Flask app serves.
    filename = f"{NamedTemporaryFile(dir='/var/www/wav').name}.wav"
    audio_config = AudioOutputConfig(filename=filename)
    # SSML: Jenny's newscast style, sped up 25%.
    ssml = f'''<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
        xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
        <voice name="en-US-JennyNeural">
        <mstts:express-as style="newscast">
            <prosody rate="+25.00%">
            {text}
            </prosody>
        </mstts:express-as>
        </voice>
        </speak>'''
    synthesizer = SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
    # Block until synthesis completes so the file is fully written.
    _ = synthesizer.speak_ssml_async(ssml).get()
    return filename
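
Calling it is then a one-liner (the sentence here is made up for illustration):

wav_file = create_wave("Delta flight 123, an Airbus A321 from Seattle to Boston, at 4,700 feet.")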

One more optimization: I generate a single wave file and share it with all the clients that happen to be running. For example, I have a client on the upstairs balcony, one in WSL2 on my Win11 laptop, and even one on my Android phone running Termux with termux-api.

I create the TTS wave file in a Python Flask app (again, in Docker) and publish the wave filename. That keeps the MQTT clients really simple: none of them needs keys to my Azure account. Here's a snippet of a shell script I run in WSL2 on my laptop that plays the wave for passing aircraft (a sketch of the Flask side follows the script):

#!/bin/sh

# Reconnect loop: if mosquitto_sub drops the connection, start over.
while true
do
    mosquitto_sub -v -t "ADSB/speech/wave" -u mosquitto_sub -P "$(cat ~/.mosquitto_sub)" -h mqtt.host -p 8883 | while read msg
    do
        # With -v each line is "topic payload"; the payload is the wave filename.
        wave=$(echo "$msg" | awk '{print $2}')
        # POST the filename to the Flask app and pipe the wave bytes to PulseAudio.
        curl --silent --output - -X POST -H "Content-Type: text/plain" --data "${wave}" "http://myserver/speech/wave" | paplay
    done
done

In this case, it's just about getting audio to work on Linux (no small feat).
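
And for completeness, a minimal sketch of what the Flask side could look like (my actual app also does the Azure TTS and the MQTT publishing; the route and directory match the snippets above):

import os
from flask import Flask, request, send_file, abort

app = Flask(__name__)
WAV_DIR = "/var/www/wav"

@app.route("/speech/wave", methods=["POST"])
def speech_wave():
    # The client POSTs the wave filename as text/plain and gets the bytes back.
    # basename() keeps a misbehaving client from escaping the wave directory.
    name = os.path.basename(request.get_data(as_text=True).strip())
    path = os.path.join(WAV_DIR, name)
    if not os.path.isfile(path):
        abort(404)
    return send_file(path, mimetype="audio/wav")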

Summary

  1. dump1090-fa feeds adsb2mqtt raw ADS-B data; adsb2mqtt generates MQTT messages for nearby flights, filtered by distance from my home.
  2. An MQTT client uses the AdsbSpeech class to enhance the data and generate a sentence for TTS.
  3. Another MQTT client uses Azure Speech to generate a wave file, shares it via HTTP, and publishes the filename to MQTT.
  4. Various devices can use simple client tools like mosquitto_sub and curl to retrieve and play the wave file.

It's a simple pleasure to sit on my balcony and enjoy some plane spotting, now with details played back from my computer.