Developers: How to add MPE Capability
If you are a developer of MIDI sound generators and would like to add MPE (MIDI Polyphonic Expression) compatibility, this page will give you some help. MPE is pretty straightforward and is very similar to the old MIDI Mode 4 (Omni off, Mono). It is summarized here.
As I write this on April 7, 2018, the MMA has not yet published the official MPE spec, but they should publish it soon on their site.
Regardless of the details of the spec, the main work is:
1) routing each incoming Per-Note channel to its own voice, listening only for the above 5 per-note messages (Note On, Note Off, Pitch Bend, CC74 and Channel Pressure/Aftertouch), and
2) routing the Common Channel's messages to all voices.
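The two routing tasks above can be sketched as follows. This is a minimal illustration, not a reference implementation: the `Voice` class, message-type names, and zero-based channel numbering are all assumptions for the example, and it models an MPE Lower Zone with channel 1 as the Common Channel and channels 2-16 as Per-Note Channels.

```python
COMMON_CHANNEL = 0  # MIDI channel 1, zero-based; the zone's Common Channel
PER_NOTE_MESSAGES = {"note_on", "note_off", "pitch_bend", "cc74", "channel_pressure"}

class Voice:
    """One synth voice holding the latest per-note expression values."""
    def __init__(self):
        self.note = None
        self.pitch_bend = 0.0   # -1.0 .. +1.0, from Pitch Bend
        self.pressure = 0.0     # 0.0 .. 1.0, from Channel Pressure
        self.timbre = 0.5       # 0.0 .. 1.0, from CC74 (Y-axis)

    def handle(self, msg_type, data):
        if msg_type == "note_on":
            self.note = data
        elif msg_type == "note_off":
            self.note = None
        elif msg_type == "pitch_bend":
            self.pitch_bend = data
        elif msg_type == "cc74":
            self.timbre = data
        elif msg_type == "channel_pressure":
            self.pressure = data

class MpeSynth:
    def __init__(self):
        # One voice per Per-Note Channel (MIDI channels 2-16)
        self.voices = {ch: Voice() for ch in range(1, 16)}
        self.common_state = {}  # zone-wide state, e.g. sustain or bend range

    def handle_message(self, channel, msg_type, data):
        if channel != COMMON_CHANNEL and msg_type in PER_NOTE_MESSAGES:
            # 1) route each Per-Note Channel's messages to its own voice
            self.voices[channel].handle(msg_type, data)
        else:
            # 2) route Common Channel messages so they apply to all voices
            self.common_state[msg_type] = data
```

In a real synth the common state would of course be pushed to the voices rather than merely stored, but the split between the two routing paths is the essential point.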
A simple MPE implementation
Note the following text on the above-linked page:
"Most MPE-compatible synths do not implement a Common Channel, instead applying all received messages other than the five per-note messages, regardless of the channel on which they are received, as Common and therefore applying to all voices."
One popular and simple MPE implementation is the following:
1) Allow each channel to act as a conventional polyphonic sound generator, and
2) If the 5 per-note messages are received, they apply only to the received channel's voices. All other received messages apply to all voices.
With this implementation, if a conventional MIDI keyboard's messages are received on any single channel, that channel acts as a conventional one-channel synth. But if MPE data is received on multiple channels, the per-note messages for each channel's single note will use only one voice on the received channel, while all other messages (regardless of the channel over which they are sent) will apply to all voices. There is no need to define which channels are received; just listen to all channels, and the host DAW will send you one, some or all channels. If a Pitch Bend Range message is received--useful for changing between the common +/- 2 semitone range of traditional MIDI keyboards and the default +/- 48 semitone range of MPE controllers--it simply follows the rule of applying to all voices. That said, it would also be helpful to give users a simple way to change the Pitch Bend Range manually, and preferably globally so they don't have to change it after selecting each new preset.
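Applying the bend range looks like this in practice: the 14-bit Pitch Bend value is scaled by the current range and converted to a frequency ratio. This is a sketch; the function name and default of 48 semitones (the MPE default mentioned above) are illustrative.

```python
def bend_to_ratio(lsb, msb, bend_range_semitones=48):
    """Convert a Pitch Bend LSB/MSB pair to a frequency multiplier.

    The 14-bit value centers at 8192 (no bend); the zone-wide bend
    range (48 semitones for MPE by default, commonly 2 for
    conventional keyboards) scales the result.
    """
    value = (msb << 7) | lsb            # 0..16383
    normalized = (value - 8192) / 8192  # -1.0 .. just under +1.0
    semitones = normalized * bend_range_semitones
    return 2.0 ** (semitones / 12.0)

# Center position (msb=64, lsb=0) leaves pitch unchanged: ratio 1.0
```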
Note that a few DAWs, including Ableton Live, reassign all received messages to the track's assigned MIDI channel, thereby preventing MPE plug-ins from receiving the necessary multiple channels.
Consideration for standalone synths
If you are adding MPE to a standalone software instrument (not a plug-in) or a hardware instrument, you'll need to provide a UI for the user to select single-channel or MPE mode. A simple way to do this is to extend the Receive Channel parameter to include options for single channels 1 through 16, OMNI (all channels mixed together without channel information), or MPE (implemented as described above).
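Such a Receive Channel parameter might be modeled like this. The names here are illustrative, not from any spec.

```python
from enum import Enum

class ReceiveMode(Enum):
    SINGLE = "single"   # listen on one channel, 1-16
    OMNI = "omni"       # merge all channels, discarding channel info
    MPE = "mpe"         # treat channels as MPE per-note channels

def accepts(mode, selected_channel, incoming_channel):
    """Return True if a message on incoming_channel should be processed."""
    if mode is ReceiveMode.SINGLE:
        return incoming_channel == selected_channel
    return True  # OMNI and MPE both listen on all channels
```

The key point is that OMNI and MPE differ only in what happens after a message is accepted: OMNI discards the channel, while MPE uses it to select a voice.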
And in case someone wishes to send two MPE streams as in split keyboard play, it would be helpful to be able to select the range of received channels for each of two different sounds. For example, your synth could listen on Common Channel 1 and per-note channels 2-8 (from an MPE controller's left split) to play a piano-like MPE sound, and on Common Channel 16 and per-note channels 9-15 (from the MPE controller's right split) to play a solo melody MPE sound.
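A two-zone split like the one described could be routed with a simple channel-range lookup. The zone layout below follows the example in the text (Common Channel 1 with per-note channels 2-8, Common Channel 16 with per-note channels 9-15); the sound names and function are illustrative.

```python
# Channel numbers are 1-based, matching the text.
ZONES = [
    {"name": "piano",  "common": 1,  "per_note": range(2, 9)},   # left split
    {"name": "melody", "common": 16, "per_note": range(9, 16)},  # right split
]

def zone_for_channel(channel):
    """Return the zone that owns this 1-based MIDI channel, or None."""
    for zone in ZONES:
        if channel == zone["common"] or channel in zone["per_note"]:
            return zone
    return None
```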
Creating new expressive sounds
With your MPE implementation, it is important to also create expressive sounds. I use the term expressive sounds instead of MPE sounds because the main goal is to create sounds that respond to pressure and Y-axis control, which MPE controllers use differently than a MIDI keyboard does. So MPE sounds are merely expressive sounds that can be played polyphonically.
Expressive sounds are different from MIDI keyboard sounds in the following ways:
In a MIDI keyboard, pressure (aftertouch) is sent only after the key is fully pressed and the note is sounding, and therefore is not useful for controlling the overall volume of each note from silence to full but rather for adding an extra effect to a note that is already sounding. In an MPE controller, pressure (aftertouch) messages are sent continuously from the lightest to heaviest touch, and can therefore be used to continuously control the overall volume of each note from silence to full level, analogous to wind pressure on a wind instrument or bow pressure on a bowed string instrument. For this reason, it is good to create some sounds that use pressure to continuously control note volume from silence to full.
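A pressure-to-volume mapping of this kind can be as simple as the following sketch. The curve exponent is an illustrative choice, not a prescription; tuning it to taste (or per sound) is part of the sound design.

```python
def pressure_to_gain(pressure, curve=2.0):
    """Map 7-bit channel pressure (0-127) to an amplitude gain of 0.0-1.0.

    A curve exponent above 1.0 keeps light touches quiet and reserves
    the loudest levels for firm pressure, somewhat like wind or bow
    pressure on an acoustic instrument.
    """
    normalized = pressure / 127.0
    return normalized ** curve
```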
MPE controllers add Y-axis control. Often people initially make the mistake of thinking that Y-axis should be used in the same way as Mod Wheel, adding LFO modulation. This is not a good idea because one of the main purposes of expressive controllers is to replace the venerable and unnatural LFO with performed gestures like vibrato (left/right finger movement) or tremolo (varying pressure). Instead, a better use of Y-axis is to provide a continuous change in timbre, analogous to picking a guitar or bowing a violin at different string positions between the bridge and neck, or varying embouchure on a wind instrument. A good example of this in subtractive synthesis is to assign Y-axis to vary the pulse width of a pulse oscillator, thereby providing a continuous change in the fundamental harmonic content of the waveform in such a way that all tones produced are useful. In this way, timbral variation becomes a performance gesture.
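The pulse-width example above might look like this in code. The 5%-95% width bounds are an illustrative safety margin (keeping the waveform from collapsing to DC at the extremes), and the naive oscillator is only for demonstration; a real synth would use a band-limited pulse.

```python
def cc74_to_pulse_width(cc_value, min_width=0.05, max_width=0.95):
    """Map 7-bit CC74 (Y-axis) to a pulse-oscillator duty cycle."""
    normalized = cc_value / 127.0
    return min_width + normalized * (max_width - min_width)

def pulse_sample(phase, width):
    """Naive pulse oscillator: +1 while phase is below width, else -1."""
    return 1.0 if (phase % 1.0) < width else -1.0
```

Because every duty cycle in that range produces a musically usable tone, Y-axis position becomes a continuous, always-safe timbre gesture.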
For some examples of good expressive sounds, download my LinnStrument-optimized sounds for Apple's Logic or MainStage from the Getting Started page.
If you have any questions, you're welcome to contact me.