Audio, MIDI, Sensors, and AI #

Budo includes several APIs that reach beyond drawing: audio synthesis, decoded audio files, MIDI I/O, RTP-MIDI network sessions, accelerometer and compass data, screen wake locks, and optional ONNX Runtime inference. They share the same design as the rest of the runtime: small handles, project-relative assets, explicit capabilities, and platform-aware fallbacks.

Audio context #

The audio context starts in a stopped state. Call sys.audio.start() explicitly, or let the playback helpers start it when needed.

main.js
// Start the audio context explicitly.
sys.audio.start();
// Set a comfortable master volume.
sys.audio.setMasterGain(0.5);

The master gain is clamped to 0.0 through 1.0.

Oscillators #

Oscillators are represented by integer IDs: createOscillator returns an ID, and every other oscillator call takes that ID as its first argument.

main.js
// Create an oscillator and keep its runtime handle.
const osc = sys.audio.createOscillator();
// Choose a simple sine wave.
sys.audio.setOscillatorType(osc, 'sine');
// Tune the oscillator to A3.
sys.audio.setOscillatorFrequency(osc, 220);
// Set the oscillator gain before playback begins.
sys.audio.setOscillatorGain(osc, 0.25);
// Start continuous playback for this oscillator.
sys.audio.startOscillator(osc);

Wave types may be given as strings ('sine', 'square', 'sawtooth', 'triangle', 'noise') or as the corresponding constants SINE, SQUARE, SAWTOOTH, TRIANGLE, and NOISE.
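
For reference, the classic shapes map a normalized phase in [0, 1) to a sample roughly like this. This is an illustrative, naive sketch; a real backend would typically use band-limited versions to avoid aliasing:

```javascript
// Naive (non-band-limited) waveform shapes over a phase p in [0, 1).
const square = (p) => (p < 0.5 ? 1 : -1);
const sawtooth = (p) => 2 * p - 1;
const triangle = (p) => 1 - 4 * Math.abs(p - 0.5);
```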

ADSR envelopes let the runtime shape notes for you.

main.js
// Configure the envelope: attack 0.01 s, decay 0.12 s, sustain level 0.7, release 0.25 s.
sys.audio.setOscillatorEnvelope(osc, 0.01, 0.12, 0.7, 0.25);
// Trigger the note on the configured oscillator.
sys.audio.noteOn(osc);
// Release the note after 300 milliseconds.
sys.timer.once(300, () => sys.audio.noteOff(osc));
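
The four parameters follow the usual attack, decay, sustain, release convention. As an illustration of the curve they describe (a linear sketch, not necessarily the runtime's exact shape):

```javascript
// Illustrative linear ADSR amplitude at time t (seconds).
// attack/decay/release are durations, sustain is a level in 0..1,
// noteLength is the time at which noteOff fires. Not the runtime's exact curve.
function adsrLevel(t, attack, decay, sustain, release, noteLength) {
  if (t < 0) return 0;
  if (t < attack) return t / attack;                 // rise to full level
  if (t < attack + decay) {
    const d = (t - attack) / decay;                  // fall toward sustain
    return 1 - d * (1 - sustain);
  }
  if (t < noteLength) return sustain;                // hold while the note is on
  const r = (t - noteLength) / release;              // release back to silence
  return r >= 1 ? 0 : sustain * (1 - r);
}
```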

sys.audio.midiToFreq(69) returns 440, which is handy when MIDI, sequencers, and synthesis meet.
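
The conversion is standard equal temperament, 440 · 2^((note − 69) / 12). A plain-JavaScript equivalent is handy when porting or testing sequencer logic outside the runtime:

```javascript
// Equal-temperament MIDI note to frequency, with A4 (note 69) = 440 Hz.
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}
```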

Buffers #

Audio buffers store sample data in runtime-managed slots.

main.js
// Choose the sample rate for the generated buffer.
const sampleRate = 44100;
// Allocate one second of mono samples.
const samples = new Array(sampleRate);

// Fill the buffer with a quiet 110 Hz sine wave.
for (let i = 0; i < samples.length; i++) {
  // Convert the sample index into a phase angle.
  samples[i] = Math.sin((i / sampleRate) * Math.PI * 2 * 110) * 0.25;
}

// Create a runtime audio buffer for the samples.
const buffer = sys.audio.createBuffer(sampleRate, 1, samples.length);
// Upload the generated samples starting at offset 0.
sys.audio.setBufferData(buffer, samples, 0);
// Play the buffer once at full gain.
const playback = sys.audio.playBuffer(buffer, false, 1.0);

Stop a playback handle with sys.audio.stopBuffer(playback), and destroy buffers you no longer need with sys.audio.destroyBuffer(buffer).

Audio files #

For sound effects, music cues, voice clips, and other prepared media, load an audio asset into the same buffer system with sys.audio.loadBuffer(path) or sys.audio.loadBufferFromBuffer(buffer). Budo decodes WAV, MP3, Ogg Vorbis, and FLAC data to floating-point PCM, then returns a buffer ID that can be played, looped, stopped, and destroyed just like a buffer you created manually.

main.js
// Decode a project asset into a runtime audio buffer.
const hit = sys.audio.loadBuffer('sounds/hit.ogg');
// Or decode the same formats from bytes already in memory (url defined elsewhere).
const hitFromNetwork = sys.audio.loadBufferFromBuffer(await (await fetch(url)).arrayBuffer());

// Check for a load failure before using the returned handle.
if (hit < 0) {
  // Print the most recent audio error to help diagnose missing or invalid assets.
  console.log(sys.audio.getError());
}

// Play the decoded sound once at a comfortable level.
const hitPlayback = sys.audio.playBuffer(hit, false, 0.8);

Loaded audio paths are project-relative. Use paths such as sounds/hit.ogg or music/theme.flac; absolute paths and . or .. path segments are rejected. The FromBuffer form accepts bytes from local file reads, network responses, or generated data.
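
The same rule is easy to apply on your own side before handing a path to the runtime. A hypothetical validator (not part of sys.audio) mirroring the documented constraints:

```javascript
// Hypothetical check mirroring the documented asset-path rules:
// project-relative, forward slashes, no empty, '.', or '..' segments.
function isValidAssetPath(path) {
  if (typeof path !== 'string' || path.length === 0) return false;
  // Reject absolute POSIX paths and Windows drive-letter paths.
  if (path.startsWith('/') || /^[A-Za-z]:/.test(path)) return false;
  return path.split('/').every((seg) => seg !== '' && seg !== '.' && seg !== '..');
}
```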

Looping also goes through playBuffer, so even a longer ambient file stays simple.

main.js
// Load a background loop from the bundled project assets.
const ambience = sys.audio.loadBuffer('audio/ambience.mp3');

// Start the buffer in looping mode at a low gain.
const ambiencePlayback = sys.audio.playBuffer(ambience, true, 0.35);

// Stop the looping playback later when the scene changes.
sys.audio.stopBuffer(ambiencePlayback);

loadBuffer is synchronous and best used while entering a scene, opening a tool, or preparing a small sound set. For large libraries, keep the returned buffer IDs in your own state and release buffers with destroyBuffer when they are no longer needed.
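
A small cache makes that ID bookkeeping concrete. This sketch takes the load and destroy functions as parameters so it stays self-contained; in an app you would pass sys.audio.loadBuffer and sys.audio.destroyBuffer:

```javascript
// Minimal sound cache: load each path at most once, release everything on demand.
// load/destroy are injected so the sketch works without the runtime.
function createSoundCache(load, destroy) {
  const ids = new Map();
  return {
    get(path) {
      if (!ids.has(path)) ids.set(path, load(path));
      return ids.get(path);
    },
    releaseAll() {
      for (const id of ids.values()) destroy(id);
      ids.clear();
    },
  };
}
```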

MIDI devices #

sys.midi exposes local MIDI input and output where the platform backend is available. Device lists can change, so call refreshDevices before presenting a picker in long-running tools.

main.js
// Refresh devices before showing a picker.
sys.midi.refreshDevices();

// Iterate through the available MIDI inputs.
for (const device of sys.midi.getInputDevices()) {
  // Log the stable device id and human-readable name.
  console.log(device.id + ': ' + device.name);
}

Open an input with a callback.

main.js
// Open the first MIDI input and receive messages through a callback.
const input = sys.midi.openInput(0, (message) => {
  // Treat NOTE_ON with velocity greater than zero as a played note.
  if (message.type === sys.midi.NOTE_ON && message.data2 > 0) {
    // Log note number and velocity.
    console.log('Note ' + message.data1 + ' velocity ' + message.data2);
  }
});

Open an output and send notes or raw bytes.

main.js
// Open the first MIDI output.
const output = sys.midi.openOutput(0);
// Send middle C on channel 0.
sys.midi.noteOn(output, 0, 60, 100);
// Release middle C on channel 0.
sys.midi.noteOff(output, 0, 60, 0);
// Send channel pressure aftertouch on channel 0.
sys.midi.channelPressure(output, 0, 72);
// Send polyphonic pressure for note 60 on channel 0.
sys.midi.polyPressure(output, 0, 60, 64);
// Send a raw MIDI System Exclusive message.
sys.midi.sendRaw(output, [0xf0, 0x7e, 0x7f, 0x06, 0x01, 0xf7]);

Close input and output handles when done.
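
The convenience calls above correspond to standard MIDI byte framing: a status byte carrying the message type in the high nibble and the channel in the low nibble, followed by one or two 7-bit data bytes. A hypothetical packer for a note-on message, equivalent to what noteOn sends on the wire:

```javascript
// Pack a standard MIDI note-on message: status 0x90 | channel, note, velocity.
// Data bytes are masked to 7 bits per the MIDI 1.0 framing rules.
function packNoteOn(channel, note, velocity) {
  return [0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f];
}
```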

RTP-MIDI #

RTP-MIDI sessions let apps exchange MIDI messages over the network through the same sys.midi namespace.

main.js
// Create an RTP-MIDI session on an automatically selected port.
const session = sys.midi.createSession('Jam', 0);
// Receive MIDI messages from the network session.
sys.midi.onSessionMessage(session, (message) => {
  // Log the raw status byte for quick diagnostics.
  console.log('Network MIDI: ' + message.status);
});

// Connect to another RTP-MIDI peer.
sys.midi.connectSession(session, '192.168.1.20', 5004);
// Send a note through the session.
sys.midi.sessionNoteOn(session, 0, 64, 100);

RTP-MIDI depends on UDP support. It is available on desktop and Android builds that compile the transport, and unavailable on web because browsers do not expose raw UDP sockets.

Sensors #

sys.magneto provides accelerometer and compass access. Desktop builds report the sensors as unavailable, Android uses native sensors, and web builds use the browser's device motion and orientation APIs when available.

main.js
// Start sensors only when the platform reports support.
if (sys.magneto.isAvailable()) {
  // Activate accelerometer and compass polling.
  sys.magneto.start();
}

// Read sensor state once per frame.
function frame() {
  // Get the latest accelerometer sample, if any.
  const accel = sys.magneto.getAccel();
  // Update tilt state only when a sample exists.
  if (accel) {
    // Store horizontal tilt.
    state.tiltX = accel.x;
    // Store vertical tilt.
    state.tiltY = accel.y;
  }

  // Get the latest compass sample, if any.
  const compass = sys.magneto.getCompass();
  // Update heading only when a compass sample exists.
  if (compass) {
    // Store heading in degrees.
    state.heading = compass.heading;
  }

  // Render the current sensor-driven state.
  draw();
  // Continue the frame loop.
  sys.animation.requestFrame(frame);
}

Accelerometer values include gravity. Compass heading is a simplified two-dimensional bearing and assumes the device is held flat.
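
That flat-device assumption means the heading reduces to a two-dimensional atan2 over the horizontal magnetometer components. As an illustration only (axis conventions vary by platform, and this is not necessarily the runtime's exact formula):

```javascript
// 2D bearing in degrees, normalized to 0..360, from horizontal components.
// Illustrative sketch; real compasses also need tilt compensation.
function headingDegrees(mx, my) {
  const deg = (Math.atan2(my, mx) * 180) / Math.PI;
  return (deg + 360) % 360;
}
```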

Keeping the screen awake #

Some apps should keep the screen on while active: timers, instruments, kiosks, games during play, and monitoring tools.

main.js
// Keep the display awake while the active experience needs it.
sys.device.keepScreenOn(true);

Call keepScreenOn(false) when the app returns to a menu or state that no longer needs it. Desktop toggles the SDL screensaver. Android toggles FLAG_KEEP_SCREEN_ON. Web uses the Screen Wake Lock API when the browser allows it.

Neural inference #

ONNX Runtime support is optional at build time and opt-in per app: add "neural": true to app.json.

app.json
{
  "neural": true
}

Then check the capability before loading a model.

main.js
// Check the capability before attempting to load ONNX models.
if (!sys.capabilities.neural.available) {
  // Choose a fallback path for builds without neural inference.
  console.log('No neural runtime in this build');
}

Models must be ONNX data. loadModel(path) reads .onnx files inside the project directory; paths are relative and path traversal is rejected. loadModelFromBuffer(buffer) accepts the same ONNX bytes from an ArrayBuffer / typed-array view.

main.js
// Load an ONNX model from the project directory.
const model = sys.neural.loadModel('models/classifier.onnx');
// Or load the same ONNX bytes from memory (modelBytes obtained elsewhere).
const fetchedModel = sys.neural.loadModelFromBuffer(modelBytes);
// Inspect model inputs and outputs.
const info = sys.neural.getModelInfo(model);

// Use the first input name declared by the model.
const inputName = info.inputs[0].name;
// Use the first output name declared by the model.
const outputName = info.outputs[0].name;

// Allocate input data for a 1x3x224x224 tensor.
const data = new Float32Array(1 * 3 * 224 * 224);
// Run inference with a named input tensor.
const result = sys.neural.run(model, {
  // Bind the typed array to the model's input name.
  [inputName]: data
});

// Read the model output by name.
const output = result[outputName];

JavaScript uses typed arrays. Lua uses tables with data, shape, and dtype. WebAssembly uses staged host imports that copy tensors into and out of guest memory.
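
Whatever the binding, the flat data length must match the product of the shape's dimensions. A small helper (hypothetical, not part of sys.neural) catches the usual mismatch before calling run:

```javascript
// Verify that a flat tensor buffer matches its declared shape.
function matchesShape(data, shape) {
  const expected = shape.reduce((a, b) => a * b, 1);
  return data.length === expected;
}
```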

The staged JavaScript form is useful when you want the same conceptual flow as Lua and WebAssembly.

main.js
// Stage input data on the model handle.
sys.neural.setInput(model, inputName, data);
// Run inference using the staged inputs.
sys.neural.run(model);
// Read the staged output tensor by name.
const output = sys.neural.getOutput(model, outputName);

Unload models when a long-lived app no longer needs them. This matters for tools that let users switch models, for games that load scene-specific inference assets, and for Android devices where model memory can be much tighter than on a desktop workstation.

main.js
// Release model resources when they are no longer needed.
sys.neural.unloadModel(model);

Platform notes #

Audio, MIDI, sensors, wake locks, and neural inference all depend on platform permissions or linked backends. Use capability checks and clear UI states. An app that handles unavailable hardware gracefully feels more professional than one that treats every missing feature as a crash.