Generating sound in modern Web Audio API

#retrocomputing #javascript #demoscene #web-audio

Written by Anders Marzi Tornblad

Published on dev.to

This is part 1 of the JavaScript MOD Player series.

As part of my effort to remake the old Artsy demo in more modern JavaScript, I decided to start with the music player. The original demo used the SoundTracker MOD format, but my remake from 2013 uses an MP3 file, which was a bit of a cheat. Playing an MP3 requires almost zero code, but it's not very interesting, and it requires loading several megabytes of data. So I decided to try to make a MOD player from scratch, using only the Web Audio API.

The current state of Web Audio

The Web Audio API has gone through a lot of changes since it was first introduced in 2011. Originally, it was quite messy, and required a lot of boilerplate code to get anything done. The API has since been cleaned up, and now it's quite easy to use. Unfortunately, there are still a lot of blogs and tutorials out there that show old, deprecated code. For example, the ScriptProcessorNode, which let you generate audio data procedurally, has been deprecated since 2014, but it's still used in many examples. Today, the AudioWorklet interface is the recommended way to generate sound in JavaScript.

Sine of times

The first step of creating a MOD player is to make some noise. One way to do this is to generate a sine wave. A sine wave is the simplest waveform, and requires very little code to produce. I will create an audio worklet that produces a sine wave of a constant frequency.

The first step is to create an AudioContext, which is the main interface for the Web Audio API. I also need to load an AudioWorklet (that I'll create later), connect it to the context, and send a message to the worklet to start generating sound. The context is not allowed to play any sound until the user has interacted with the page, so I need to wait for a click event before resuming the context.

// Create the audio context
const audio = new AudioContext();

// Load an audio worklet
await audio.audioWorklet.addModule('player-worklet.js');

// Create a player
const player = new AudioWorkletNode(audio, 'player-worklet');

// Connect the player to the audio context
player.connect(audio.destination);

// Toggle playback when the user clicks
window.addEventListener('click', () => {
    audio.resume();
    player.port.postMessage({
        type: 'play'
    });
});
player.js

Because the script above uses top-level await (audio.audioWorklet.addModule returns a promise), it must be loaded from the HTML page as an ES module. If you are not writing frontend JavaScript as ES modules yet, you really should start.

<!DOCTYPE html>
<html>
    <head>
        <title>JS Mod Player by Anders Marzi Tornblad</title>
    </head>
    <body>
        <script type="module" src="player.js"></script>
    </body>
</html>
index.html

Creating the worklet code is a bit more involved, but it's not too complicated. I start by creating a class that extends AudioWorkletProcessor, which is the base class for all audio worklets.

The worklet runs in a separate thread, so it can't access any variables in the main script. Because of that, each worklet instance has a port property, which is a MessagePort that is used to communicate with the main script. The worklet can send messages to the main script using port.postMessage, and it can receive messages from the main script using port.onmessage. I normally put some code in the constructor to connect the port.onmessage event handler to an onmessage method in the worklet class.

The process method is what the Web Audio framework calls to generate sound. It gets called repeatedly, and it's supposed to fill the outputs array with audio data. The method receives both inputs and outputs, and can be used anywhere in an audio graph, for example to add effects to existing sound. Because I'm generating sound rather than manipulating it, I simply ignore the inputs.

The outputs array is nested three levels deep: the first index selects the output (a node can have more than one), the second selects the channel, and the third selects the sample. Each channel is a Float32Array, typically 128 samples long, and each sample value is a floating point number between -1 and 1. The process method must return true to tell the Web Audio framework that it should keep calling the method. Returning false signals that the worklet is done generating sound, and the framework should stop calling process.

class PlayerWorklet extends AudioWorkletProcessor {
    constructor() {
        super();
        this.port.onmessage = this.onmessage.bind(this);
        this.playing = false;
        this.phase = 0;
    }

    onmessage(e) {
        if (e.data.type === 'play') {
            // Toggle between playing and silence
            this.playing = !this.playing;
            this.phase = 0;
        }
    }

    process(inputs, outputs) {
        // First output, first channel: a Float32Array of (typically) 128 samples
        const output = outputs[0];
        const channel = output[0];

        for (let i = 0; i < channel.length; ++i) {
            if (this.playing) {
                channel[i] = Math.sin(this.phase);
                // A constant phase increment produces a constant frequency
                this.phase += 0.1;
            } else {
                channel[i] = 0;
            }
        }

        // Returning true keeps this worklet alive
        return true;
    }
}

registerProcessor('player-worklet', PlayerWorklet);
player-worklet.js
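The hard-coded 0.1 phase increment is what determines the pitch. Inside an AudioWorkletProcessor, the global sampleRate exposes the context's sample rate, so the increment for any desired frequency can be computed. Here is a small sketch of the relationship (the helper names are my own, not part of the API):

```javascript
// Phase increment per sample, in radians, for a given frequency:
// one full cycle is 2π radians, and there are sampleRate samples per second.
function phaseIncrement(frequency, sampleRate) {
    return 2 * Math.PI * frequency / sampleRate;
}

// The inverse: what pitch does a given phase increment produce?
function frequencyOfIncrement(increment, sampleRate) {
    return increment * sampleRate / (2 * Math.PI);
}
```

At a sample rate of 48000 Hz, an increment of 0.1 radians per sample works out to roughly 764 Hz.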

Try it out

You can try this version of the player here

Conclusion

That's it! The code above is a very minimal example, but it should give you an idea of how to create a worklet. The next step is to load a MOD file into the worklet, and use it to generate sound.
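As a teaser of what that loading step might look like: a MOD file starts with a 20-byte song name, and an ArrayBuffer can be handed to the worklet thread through the same message port. This is only a sketch; the 'load' message type and the helper names are my own invention, not the final player's code:

```javascript
// Read the 20-byte song name that starts every MOD file
function readModTitle(buffer) {
    const bytes = new Uint8Array(buffer, 0, 20);
    const end = bytes.indexOf(0);
    return String.fromCharCode(...bytes.subarray(0, end === -1 ? 20 : end));
}

// In the browser, fetch the file and hand the raw bytes to the worklet.
// The player argument is the AudioWorkletNode created earlier.
async function loadMod(player, url) {
    const response = await fetch(url);
    const buffer = await response.arrayBuffer();
    console.log('Loaded module:', readModTitle(buffer));
    // The second argument transfers the buffer to the worklet instead of copying it
    player.port.postMessage({ type: 'load', buffer }, [buffer]);
}
```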

You can try this solution at atornblad.github.io/js-mod-player. The latest version of the code is always available in the GitHub repository.

Articles in this series: