

Most programs that avail themselves of the Java Sound API's MIDI package do so to synthesize sound. The entire apparatus of MIDI files, events, sequences, and sequencers, which was previously discussed, nearly always has the goal of eventually sending musical data to a synthesizer to convert into audio. (Possible exceptions include programs that convert MIDI into musical notation that can be read by a musician, and programs that send messages to external MIDI-controlled devices such as mixing consoles.)

The Synthesizer interface is therefore fundamental to the MIDI package. This page shows how to manipulate a synthesizer to play sound. Many programs will simply use a sequencer to send MIDI file data to the synthesizer, and won't need to invoke many Synthesizer methods directly. However, it's possible to control a synthesizer directly, without using sequencers or even MidiMessage objects, as explained near the end of this page.
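
Before going into the details, here is a minimal sketch of that direct control, assuming the platform provides a default synthesizer. It obtains the synthesizer, sounds Middle C on one of its MidiChannel objects for about a second, and closes the device; the class name DirectNoteDemo and the note length are illustrative choices, not part of the API.

```java
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

public class DirectNoteDemo {
    public static void main(String[] args) throws Exception {
        // Obtain and open the platform's default synthesizer.
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        try {
            // Take the first MIDI channel and sound Middle C (note 60)
            // at a moderate velocity, with no Sequencer or MidiMessage involved.
            MidiChannel channel = synth.getChannels()[0];
            channel.noteOn(60, 93);
            Thread.sleep(1000);   // let the note ring for about a second
            channel.noteOff(60);
        } finally {
            synth.close();        // release the device when done
        }
    }
}
```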

The synthesis architecture might seem complex for readers who are unfamiliar with MIDI. Its API includes three interfaces: Synthesizer, MidiChannel, and Soundbank. As orientation for all this API, the next section explains some of the basics of MIDI synthesis and how they're reflected in the Java Sound API. Subsequent sections give a more detailed look at the API.
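
As a quick illustration of how those three interfaces fit together, the sketch below treats the Synthesizer as the device itself, its MidiChannel array as the per-channel controls, and its default Soundbank (if it has one) as a container of loadable instruments. The class name ApiTour is made up for this example.

```java
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Soundbank;
import javax.sound.midi.Synthesizer;

public class ApiTour {
    public static void main(String[] args) throws Exception {
        // Synthesizer: the sound-generating device itself.
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        try {
            // MidiChannel: per-channel control over notes, programs, controllers.
            MidiChannel[] channels = synth.getChannels();
            System.out.println("MIDI channels: " + channels.length);

            // Soundbank: a container of Instruments the synthesizer can load.
            Soundbank bank = synth.getDefaultSoundbank();
            System.out.println("Default soundbank: "
                    + (bank == null ? "none" : bank.getName()));
        } finally {
            synth.close();
        }
    }
}
```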

How does a synthesizer generate sound? Depending on its implementation, it may use one or more sound-synthesis techniques. For example, many synthesizers use wavetable synthesis. A wavetable synthesizer reads stored snippets of audio from memory, playing them at different sample rates and looping them to create notes of different pitches and durations. For example, to synthesize the sound of a saxophone playing the note C#4 (MIDI note number 61), the synthesizer might access a very short snippet from a recording of a saxophone playing the note Middle C (MIDI note number 60), and then cycle repeatedly through this snippet at a slightly faster sample rate than it was recorded at, which creates a long note of a slightly higher pitch. Other synthesizers use techniques such as frequency modulation (FM), additive synthesis, or physical modeling, which don't make use of stored audio but instead generate audio from scratch using different algorithms.
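
The "slightly faster sample rate" in the saxophone example follows from equal-tempered tuning: raising a pitch by one semitone multiplies its frequency by the twelfth root of two, about 1.06. The snippet below just works out that arithmetic; the 44100 Hz recording rate is an assumed figure for illustration, not something MIDI or the API prescribes.

```java
public class PitchShiftFactor {
    public static void main(String[] args) {
        // Equal temperament: one semitone up multiplies frequency by 2^(1/12).
        double semitoneRatio = Math.pow(2.0, 1.0 / 12.0);     // ~1.0595

        // Cycling through a stored Middle C (note 60) snippet about 6% faster
        // reproduces it one semitone higher, i.e. as C#4 (note 61).
        double recordedRate = 44100.0;                         // assumed original rate
        double playbackRate = recordedRate * semitoneRatio;    // ~46722 Hz
        System.out.printf("Play the stored snippet at %.0f Hz instead of %.0f Hz%n",
                playbackRate, recordedRate);
    }
}
```
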
What all synthesis techniques have in common is the ability to create many sorts of sounds. Different algorithms, or different settings of parameters within the same algorithm, create different-sounding results. An instrument is a specification for synthesizing a certain type of sound. That sound may emulate a traditional musical instrument, such as a piano or violin; it may emulate some other kind of sound source, for instance, a telephone or helicopter; or it may emulate no "real-world" sound at all. A specification called General MIDI defines a standard list of 128 instruments, but most synthesizers allow other instruments as well.

Many synthesizers provide a collection of built-in instruments that are always available for use; some synthesizers also support mechanisms for loading additional instruments. An instrument may be vendor-specific, in other words, applicable to only one synthesizer or several models from the same vendor. This incompatibility results when two different synthesizers use different sound-synthesis techniques, or different internal algorithms and parameters even if the fundamental technique is the same.
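
In the Java Sound API these ideas show up as Instrument objects and channel program changes. The following sketch, with the made-up class name InstrumentDemo, asks the default synthesizer which instruments it could load, loads one explicitly, and then switches a channel to General MIDI program 56 (Trumpet in the GM list) before sounding a note.

```java
import javax.sound.midi.Instrument;
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

public class InstrumentDemo {
    public static void main(String[] args) throws Exception {
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        try {
            // The synthesizer reports which instruments it is able to load.
            Instrument[] available = synth.getAvailableInstruments();
            System.out.println(available.length + " instruments available");
            if (available.length > 0) {
                synth.loadInstrument(available[0]);   // explicit loading, if needed
            }

            // A channel selects an instrument by program number; in the General
            // MIDI list, program 56 (0-based; 57 in the 1-based GM table) is Trumpet.
            MidiChannel channel = synth.getChannels()[0];
            channel.programChange(56);
            channel.noteOn(60, 93);                   // Middle C, moderately loud
            Thread.sleep(1000);
            channel.noteOff(60);
        } finally {
            synth.close();
        }
    }
}
```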
