
What This Is

This project is an attempt at doing some sound synthesis and sequencing in the browser. I'm not a musician or all that knowledgeable about synthesizers. I just love programming, math, learning new things, using modern browser APIs, and creatively exploring new topics.

Some sources of inspiration:

  • The Commodore 64 SID chip
  • The original Sound Blaster card
  • Tracker programs on the Amiga and other platforms (ProTracker, Famitracker, OpenMPT, Renoise...)
  • Ableton Analog (the structure but without the physics modelling)
  • Step sequencers and piano roll

This project isn't an attempt to directly emulate any of these, but it will contain aspects inspired by each of them. It aims to be a modern refresh of the tracker concept that specifically targets the capabilities of browsers and the web while maintaining a retro feel that's easily accessible to beginners.

Requirements

The synthesizer uses Audio Worklets and cancelAndHoldAtTime(), both of which are features found only in recent versions of the Chrome browser (as of April 2019), although polyfills are available for other browsers [1, 2].

A development server is also required because Audio Worklets can't be loaded from file:// URLs.
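
Any static web server will do for development; for example, you could run python3 -m http.server or npx http-server from the project directory and then open the page via http://localhost.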

Getting Started

This guide is written from a programming perspective. There's also a test interface available at https://elizabethhudnott.github.io/sound-synth. At the moment the test interface isn't very impressive because its sole purposes are to assist in developing the JavaScript features and to provide a simple demo. Eventually the test interface will be re-engineered into a more professional-looking live performance tool, and I'll also develop a song composition program. However, the JavaScript API also stands alone as a separate product that I hope will be used without any user interface in applications like playing background music in games, or with a minimal user interface inside things like media players.

  1. Load the JavaScript.
<script src="synth.js"></script>
  2. Create an empty sound system.
let audioContext = new AudioContext();
let system = new Synth.System(audioContext, callback);

The code for steps 3–5 should be placed inside your callback function to ensure that everything is loaded before you create a sound channel (a combined sketch follows step 5).

  3. Add one or more sound channels.
let channel1 = new Synth.Channel(system);
  4. Start the system going.
system.start();
  5. Schedule your first parameter change.
let changes = new Map();
changes.set(Synth.Param.GATE, new Synth.Change(Synth.ChangeType.SET, Synth.Gate.TRIGGER));
channel1.setParameters(changes, 0);
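
Putting these steps together, a minimal sketch of a complete page might look like this. The callback name ready is my own choice; steps 3–5 run inside it once everything has loaded.

<script src="synth.js"></script>
<script>
let audioContext = new AudioContext();
let system = new Synth.System(audioContext, ready);

function ready() {
	// Steps 3-5: safe to run now that loading has finished.
	let channel1 = new Synth.Channel(system);
	system.start();

	let changes = new Map();
	changes.set(Synth.Param.GATE, new Synth.Change(Synth.ChangeType.SET, Synth.Gate.TRIGGER));
	channel1.setParameters(changes, 0);
}
</script>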

The second argument to the setParameters() method determines when the parameter change occurs. It's measured in steps since system.start() or system.begin() was called; use system.begin() when you want to reset the step counter to zero. Each step lasts 1/50th of a second. The step number can be omitted, in which case the changes are applied almost immediately. Each successive call to setParameters() should use a higher step number than the previous one (unless system.begin() has been called); I don't officially support scheduling changes out of order.
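
For example, here's a sketch of scheduling two notes one second (50 steps) apart, counting from system.begin(). I'm assuming the same Map can safely be passed to both calls.

system.begin();
let noteOn = new Map();
noteOn.set(Synth.Param.GATE, new Synth.Change(Synth.ChangeType.SET, Synth.Gate.TRIGGER));
channel1.setParameters(noteOn, 0);  // first note at step 0
channel1.setParameters(noteOn, 50); // second note at step 50, one second later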

For information about the specific parameters that are available and how to use them, see the Parameters and Change Types pages.

A small number of parameters require invoking setParameters() with a third argument set to true (passing false is equivalent to omitting the argument).

changes.set(Synth.Param.GLISSANDO, new Synth.Change(Synth.ChangeType.SET, 1));
channel1.setParameters(changes, 1, true);

This additional argument is needed because these oddball parameters don't describe instantaneous changes. Instead they describe how the sound changes over the course of a small chunk of time called a "line". The default line time is six steps, so in this case the glissando effect causes the pitch to rise over 6/50ths of a second.

The last value set for a particular parameter can be read directly from the parameters array.

gate = channel1.parameters[Synth.Param.GATE];

Simplified Syntax

If you only need to change one parameter at a time then an alternative, simpler syntax is available.

system.set(Synth.Param.GATE, Synth.Gate.TRIGGER);

If you need a change that doesn't happen immediately then add a delay value, measured in steps.

system.set(Synth.Param.GATE, Synth.Gate.TRIGGER, 100); // 2 second delay

Note that this parameter specifies the time relative to the current time, whereas setParameters() interprets times relative to when system.start() or system.begin() was called (Time 0).
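
For example, both calls below schedule a gate trigger with a time value of 100, but the times mean different things. This is just an illustrative sketch using the channel1 object from earlier.

// 100 steps (2 seconds) after the current time
system.set(Synth.Param.GATE, Synth.Gate.TRIGGER, 100);

// at step 100, counted from when system.start() or system.begin() was called
let trigger = new Map();
trigger.set(Synth.Param.GATE, new Synth.Change(Synth.ChangeType.SET, Synth.Gate.TRIGGER));
channel1.setParameters(trigger, 100);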

If you want to fade the change in gradually then add a change type.

// 2 second fade out (100 steps)
system.set(Synth.Param.VOLUME, 0, 100, Synth.ChangeType.LINEAR);

By default, the set() method alters the first channel. To affect a different channel, add a channel number. Channels are numbered from zero.

// 2 second fade out on the 2nd channel
system.set(Synth.Param.VOLUME, 0, 100, Synth.ChangeType.LINEAR, 1);

set() also accepts the special value -1 in place of a channel number, which applies the change to all channels.
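
For example, to apply the two second fade out to every channel at once:

// 2 second fade out on all channels
system.set(Synth.Param.VOLUME, 0, 100, Synth.ChangeType.LINEAR, -1);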

Macros and Automations are other ways to make repetitive code more compact.

JavaScript Files

Filename                    Functionality
synth.js                    The synthesizer and its instrument descriptions
audioworkletprocessors.js   Required by synth.js
sequencer.js                Storing and playing back compositions
midi.js                     Performing, recording or editing using MIDI input from an external device or via WebMidiLink
sampler.js                  Capturing new instruments from the computer's microphone
machines/*.js               Plug-ins which extend the synthesizer

For live performance it's sufficient to load synth.js (which in turn loads audioworkletprocessors.js) and midi.js.
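
For example, a live performance page only needs two script tags (assuming the files sit alongside the page):

<script src="synth.js"></script>
<script src="midi.js"></script>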
