The music playing up top has never yet been played by a musician or a musical instrument, and yet, there we have it! The piece was written entirely in "Staff View" over the course of a couple of hours with a MIDI sequencer program by a 14-year-old boy who's had no formal training in music, and represents a wee little smidgen of a view into how technology is creatively liberating folks today.
The foundations of MIDI trace back to the 1960s, when the first monophonic synthesizers, like those from ARP and Moog, used analogue circuitry to produce one note at a time.
The late 1970s saw Oberheim introduce the first polyphonic synthesizer, with four independent oscillators. Companies like Yamaha, Roland and Moog soon followed suit with their own polyphonic circuits and, eventually, on-board memory. Until then, synthesizers had to be programmed before every performance, and a separate synthesizer had to be used for each sound. On-board memory allowed one to store synthesizer settings so that sounds could be changed with the flick of a switch.
In the early 1980s, Oberheim and Rhodes introduced the OB-X and Chroma synthesizers with an interface that allowed identical keyboards to be connected together. This was a landmark in music technology, because musicians could for the first time 'layer' sounds together on one keyboard. But there was still no way of connecting different instruments together.
In 1982 a group of synthesizer manufacturers met at a NAMM (National Association of Music Merchants) convention to discuss a standard for the digital transmission and reception of performance information between all types of electronic musical instruments. The proposed interface was originally named UMI (Universal Musical Interface), but after several revisions this became MIDI (Musical Instrument Digital Interface). The results were unveiled at the 1983 NAMM convention in Los Angeles via a simple demonstration: two synthesizers from different manufacturers, connected with two cables. Either synthesizer could be played in this configuration to have both produce sound before the amazed audience, demonstrating the two-way nature of the communication. Other variations were also illustrated, creating music history.
Today, MIDI is a standard data-communications protocol that allows electronic musical instruments to interact with each other by transmitting musical performance data between instruments and synthesizers as a series of messages, each of which contains a status byte and, in some cases, data bytes. MIDI can be described equally as a protocol, a standard or a language, and the versatility of the interface also allows it to be used to control stage lights, trigger pyrotechnics and transfer binary information between computers.
How MIDI works ~
In its most basic mode, MIDI information tells a synthesizer when to start and stop playing a specific note. Other information shared can include the volume and modulation of the note, or even more hardware-specific messages. For example, a message can tell a synthesizer or sound module to change sounds, adjust master volume, engage modulation devices, and even configure how it receives information. More recent applications include using the interface between computers and synthesizers to edit and store the synthesizer's sound information on the computer.
MIDI commands and triggers have a specific byte sequence. The first byte is the status byte, which tells the MIDI device what function to perform. Also encoded in the status byte is the MIDI channel (0 to 15, usually displayed as 1 to 16). Under a scheme called 'running status', subsequent data bytes are assumed to belong to the command and channel of the last status byte received, until a new status byte arrives. For a note message, two additional bytes follow the status byte ~ a note-number byte, which tells the MIDI device which pitch to play, and a velocity byte, which tells the device how loud to play the note.
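The byte layout described above can be sketched in code. The helper names below (`note_on`, `note_off`) are illustrative rather than part of any particular MIDI library; the status nibbles themselves (0x90 for Note On, 0x80 for Note Off) come from the MIDI specification.

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message.

    The status byte packs the command (0x90) in the high nibble and
    the channel (0-15) in the low nibble; note and velocity are each
    a single 7-bit data byte (0-127).
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    status = 0x90 | channel
    return bytes([status, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Build a Note Off message (status nibble 0x80, velocity 0)."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note number 60) at moderate velocity on channel 1
# (encoded as channel 0 on the wire):
msg = note_on(0, 60, 100)
print(msg.hex())  # -> '903c64'
```

Sending the first message starts the note on a connected synthesizer; the matching `note_off` stops it.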
A software or hardware sequencer allows one to record, edit, and play back the parameters of a musical performance. Although analog sequencers existed, the sequencer didn't really come into its own until the invention of MIDI. A MIDI sequencer does not record sounds in any way. Instead, it records the MIDI data as a series of commands and triggers, so that when you play back the sequence, your sound module or synthesizer plays the actual notes and sounds with the same timing and dynamics as recorded.
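That idea can be sketched as a toy sequencer in Python; the class and method names here are invented for illustration. Note what is stored: only timestamped command bytes, never audio. Playback simply replays those commands, in order and on time, through whatever `send()` callback the caller supplies (in a real system, that would write to a MIDI output port).

```python
import time

class ToySequencer:
    """A minimal sequencer sketch: records timestamped MIDI messages
    and replays them with the original timing. No audio is ever
    stored -- only the commands that trigger the sounds."""

    def __init__(self):
        self.events = []  # list of (seconds_from_start, message_bytes)

    def record(self, seconds, message):
        self.events.append((seconds, message))

    def play(self, send):
        """Replay recorded events through the send() callback."""
        start = time.monotonic()
        for when, message in sorted(self.events):
            delay = when - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            send(message)

# Record a short note, then "play" it into a list instead of a port:
seq = ToySequencer()
seq.record(0.00, bytes([0x90, 60, 100]))  # Note On, middle C
seq.record(0.05, bytes([0x80, 60, 0]))    # Note Off
played = []
seq.play(played.append)
```

Because the recording is just data, the same sequence can be replayed through any instrument that understands MIDI.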
The big advantage of the system, aside from putting a mind-boggling array of virtual instruments at the user's disposal, has mainly to do with editing. With an audio track recorded on analog tape, there is little one can do to change it other than cutting and pasting sections of tape, but with a MIDI performance you can change it in any way you like, after the fact. You can, for example, change the sound from one instrument to another; transpose the pitch without altering the speed; change the tempo without altering the pitch; correct wrong notes; or add and modify dynamics.
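Two of those edits can be made concrete with a short sketch. The event format `(time, note, velocity)` and the function names are assumptions for illustration, but the underlying point is faithful to how MIDI editing works: transposing shifts only the note numbers and leaves the timestamps alone, while a tempo change scales only the timestamps and leaves the note numbers alone ~ something impossible with analog tape, where speeding up the tape also raises the pitch.

```python
def transpose(events, semitones):
    """Shift every note's pitch by a number of semitones.
    Timing is untouched, so the speed does not change."""
    return [(t, max(0, min(127, note + semitones)), vel)
            for t, note, vel in events]

def change_tempo(events, factor):
    """Scale every timestamp by a speed factor.
    Note numbers are untouched, so the pitch does not change."""
    return [(t / factor, note, vel) for t, note, vel in events]

performance = [(0.0, 60, 100), (0.5, 64, 90), (1.0, 67, 95)]  # C-E-G
up_a_fifth = transpose(performance, 7)        # G-B-D, same timing
twice_as_fast = change_tempo(performance, 2)  # same notes, half the time
```

Correcting a wrong note is just as simple: replace one event's note number and leave everything else alone.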
Hardware sequencers are typically little black boxes dedicated to the task of sequencing. Their advantage is that they are portable, stable, tough, and often less expensive than buying a computer system. Software sequencers, on the other hand, are programs for a computer. The primary advantage here is that a computer monitor can display far more information than the small LEDs or LCDs common on hardware sequencers. This makes editing faster and easier, with more memory, more flexibility, customization to your own style, and printing capability. Hardware and software sequencers of all types are broadly similar in purpose, concepts, and features, but the greatest advantage increasingly being built into the latter of late is the ability to sequence and edit both MIDI and audio WAVE data together in a single file.