Under-the-hood improvements don't always get much love, but there is a segment of Android users that will be thrilled to hear what Google has done for those working with audio. The headlining change is an API for MIDI (Musical Instrument Digital Interface), the standard protocol for passing musical performance data between devices. The net result is that it will be far easier for developers to create apps that talk to hardware for making music or other sorts of sounds. Other changes raise the overall quality of audio that Android can work with and give more options for creating complex tracks.

If you have been using the M Preview, you may have already noticed a MIDI option in the USB connection prompt when plugging into your computer. That's your clue that MIDI support is ready to roll in Preview 1. The primary way for MIDI devices to connect to Android phones and tablets is USB, so MIDI simply becomes another one of the standard connection options.
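Apps can also check whether a given device ships with the new MIDI stack before offering any MIDI features. The snippet below is only a minimal sketch (the class and method names are mine); it leans on the FEATURE_MIDI system feature flag that arrives alongside the new API:

```java
import android.content.Context;
import android.content.pm.PackageManager;

public class MidiSupportCheck {
    /** Returns true if this device exposes the new platform MIDI API. */
    public static boolean hasMidiSupport(Context context) {
        PackageManager pm = context.getPackageManager();
        return pm.hasSystemFeature(PackageManager.FEATURE_MIDI);
    }
}
```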

To be clear, developers could have supported MIDI before the API was added to the M preview. The problem was that they would have to write the code for interpreting every little bit of information from the ground up, like TouchDAW has done. MIDI is used in a lot of different ways, so it isn't as if the API is working magic, but it will save programmers considerable time and promote consistency across apps. Related changes allow developers to have their app notified when certain types of devices (for instance, microphones) are connected to the Android host. Alternatively, they can use the API to see all audio input and output hardware that is currently available.
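To give a rough idea of what that looks like in practice, here is a hedged sketch of device discovery with the new android.media.midi.MidiManager, alongside the new AudioManager callbacks that report when audio hardware comes and goes. The class name and the logging are my own placeholders; a real app would do something more useful than write to the log:

```java
import android.content.Context;
import android.media.AudioDeviceCallback;
import android.media.AudioDeviceInfo;
import android.media.AudioManager;
import android.media.midi.MidiDeviceInfo;
import android.media.midi.MidiManager;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public class AudioHardwareWatcher {
    private static final String TAG = "AudioHardwareWatcher";

    public void watch(Context context) {
        Handler handler = new Handler(Looper.getMainLooper());

        // MIDI: list what's already plugged in and get told about new devices.
        MidiManager midi = (MidiManager) context.getSystemService(Context.MIDI_SERVICE);
        for (MidiDeviceInfo info : midi.getDevices()) {
            Log.i(TAG, "MIDI device present: "
                    + info.getProperties().getString(MidiDeviceInfo.PROPERTY_NAME));
        }
        midi.registerDeviceCallback(new MidiManager.DeviceCallback() {
            @Override
            public void onDeviceAdded(MidiDeviceInfo info) {
                Log.i(TAG, "MIDI device attached: " + info);
            }

            @Override
            public void onDeviceRemoved(MidiDeviceInfo info) {
                Log.i(TAG, "MIDI device removed: " + info);
            }
        }, handler);

        // Audio: enumerate current inputs/outputs and watch for changes.
        AudioManager audio = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        for (AudioDeviceInfo device : audio.getDevices(AudioManager.GET_DEVICES_ALL)) {
            Log.i(TAG, "Audio device: " + device.getProductName()
                    + " type=" + device.getType());
        }
        audio.registerAudioDeviceCallback(new AudioDeviceCallback() {
            @Override
            public void onAudioDevicesAdded(AudioDeviceInfo[] added) {
                Log.i(TAG, added.length + " audio device(s) connected");
            }

            @Override
            public void onAudioDevicesRemoved(AudioDeviceInfo[] removed) {
                Log.i(TAG, removed.length + " audio device(s) disconnected");
            }
        }, handler);
    }
}
```

That is the consistency win in a nutshell: instead of every app rolling its own USB parsing, discovery and hot-plug notifications come from the platform in the same shape for everyone.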

One thing that might not occur to everyone right away is that this works in both directions. My mental image of MIDI support on Android was connecting a keyboard via USB and tapping out tracks to be recorded and mixed on the phone or tablet. However, the API also lets the Android device act as the input, so you could connect it to your computer, send the note data from Android, and tweak it with your desktop software, or simply route the output to something else acting as a speaker. I won't pretend to know all of the use cases for these changes, but they will certainly spur development to serve them.
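As a small illustration of that "other direction," the sketch below sends a single note from Android to whatever is connected on the far end. The class name and the choice of port are placeholders of my own, but the three-byte note-on message is standard MIDI:

```java
import android.media.midi.MidiDevice;
import android.media.midi.MidiDeviceInfo;
import android.media.midi.MidiInputPort;
import android.media.midi.MidiManager;
import android.os.Handler;
import android.os.Looper;
import java.io.IOException;

public class MidiNoteSender {
    /** Opens the first input port on the given device and sends one note-on. */
    public void sendMiddleC(MidiManager midiManager, MidiDeviceInfo info) {
        midiManager.openDevice(info, new MidiManager.OnDeviceOpenedListener() {
            @Override
            public void onDeviceOpened(MidiDevice device) {
                if (device == null) {
                    return; // The device could not be opened.
                }
                MidiInputPort port = device.openInputPort(0);
                if (port == null) {
                    return; // Port already in use or unavailable.
                }
                // Standard MIDI note-on: status byte (0x90 = note-on, channel 1),
                // note number (60 = middle C), velocity (100).
                byte[] noteOn = {(byte) 0x90, (byte) 60, (byte) 100};
                try {
                    port.send(noteOn, 0, noteOn.length);
                } catch (IOException e) {
                    // Sending can fail if the device disappears mid-write.
                }
            }
        }, new Handler(Looper.getMainLooper()));
    }
}
```

Wire the note number and velocity to touch input instead of hard-coding them and you have the bones of the "Android as a controller" scenario described above.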

There are also some enhancements on the digital audio side. Previous versions topped out at 16-bit sample depth, but M adds support for single-precision floating-point samples, the level of quality more conventionally used when mixing and mastering audio professionally. The maximum sampling rate also increases in M from 44.1 kHz/48 kHz in previous versions to 96 kHz. Together, these changes are roughly the jump from CD quality to studio quality. Last but not least, USB digital audio now supports multichannel output, so Android software can push several simultaneous channels of audio over USB.
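To put those numbers in code form, here is a minimal sketch of an AudioTrack configured for float samples at 96 kHz. The buffer of silence is a placeholder, and whether playback actually runs at that rate end to end depends on the hardware:

```java
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class HighResPlayback {
    public void playOneSecond() {
        int sampleRate = 96000; // 96 kHz, up from the old 44.1/48 kHz ceiling

        AudioFormat format = new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_FLOAT) // 32-bit float samples
                .setSampleRate(sampleRate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build();

        AudioAttributes attributes = new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build();

        int minBytes = AudioTrack.getMinBufferSize(
                sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_FLOAT);

        AudioTrack track = new AudioTrack(attributes, format, minBytes,
                AudioTrack.MODE_STREAM, AudioManager.AUDIO_SESSION_ID_GENERATE);

        // One second of interleaved stereo silence as placeholder sample data.
        float[] samples = new float[sampleRate * 2];
        track.play();
        track.write(samples, 0, samples.length, AudioTrack.WRITE_BLOCKING);
        track.stop();
        track.release();
    }
}
```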

More information for developers can be found in the API overview for Android M.