An interdisciplinary fusion of creative practice and audio science.
Bill Evans is a brilliant audio scientist who creates highly innovative and genuinely musical software solutions to some of recording’s most vexing issues.
Virtual Audio Workstations are a natural evolution of DAWs in the context of generative AI, spatial computing, and tangible interfaces. They are based on Dr. Evans’ alternate PhD thesis, Belexes: A Virtual Audio Workstation. The submission included a hardware VAW designed and built by Evans, including circuit-board design.
Evans received a KEIF grant from MMU to develop the system’s tangible interaction system, the Volumetric Haptic Display. The VHD is a 3D force-projection system used to create the VAW’s tangible interaction. It was complemented by a custom-designed 3D variable-capacitance sensing system, providing high-precision, simultaneous spatial tracking of all fingers.
The workstation projected 3D imagery to display the system’s visual components, employing head- and eye-tracking to update the visual perspective in real time. An alternate visual interface, using Microsoft’s first HoloLens, provided the first virtual audio editing environment. The interaction model for the VAW was digital clay, wherein users can shape and mould audio directly.
A suite of tools allowing the primary conditions and aspects of acoustic drum recordings to be altered after the fact. The processing is performed without replacing the original audio. Elements include tuning, acoustic space, microphones and their placement, and head/cymbal tension. The instruments themselves can be changed, and their acoustic properties altered. It was first deployed by Evans in 2018 on a track with drummer Marco Minnemann.
PRISM is the first system for perceptually lossless conversion of acoustic recordings to event-based data (e.g., MIDI). In this form, music can be parsed as a hierarchical grammar and processed with LLMs without incurring audible artefacts or musical errors. As of 2022, a prototype AU/VST plugin is available for demonstration.
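To make the idea of audio-to-event conversion concrete, here is a deliberately minimal sketch, not PRISM’s method: a toy energy-based onset detector that turns an audio buffer into MIDI-style note events. All names and thresholds below are hypothetical; a perceptually lossless system would require far more sophisticated analysis.

```python
# Toy illustration of audio -> event-based data (MIDI-style note events).
# This is NOT PRISM's algorithm; it is a minimal energy-based onset detector.
import math

SAMPLE_RATE = 44100
FRAME = 512  # analysis window in samples (hypothetical choice)

def audio_to_events(samples, threshold=0.2):
    """Return a list of (onset_time_seconds, velocity) note events."""
    events = []
    prev_rms = 0.0
    for start in range(0, len(samples) - FRAME, FRAME):
        frame = samples[start:start + FRAME]
        rms = math.sqrt(sum(s * s for s in frame) / FRAME)
        # Treat a rise in energy across the threshold as a note onset.
        if rms > threshold and prev_rms <= threshold:
            velocity = min(127, int(rms * 127))  # map level to MIDI velocity
            events.append((start / SAMPLE_RATE, velocity))
        prev_rms = rms
    return events

# Synthetic signal: silence followed by one decaying "drum hit".
signal = [0.0] * 4096 + [0.9 * math.exp(-i / 2000.0) for i in range(8192)]
print(audio_to_events(signal))  # one event near t = 0.093 s
```

The key point of the sketch is the change of representation: once a performance exists as discrete events rather than samples, symbolic tools (including language models) can operate on it without touching the audio itself.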
This methodology aims to restore musicians’ original intentions in recorded performances, as opposed to the contemporary practice of “fixing mistakes”. Evans’ 2018 Manchester Metropolitan University PhD thesis included submission of the Flying Colors concert film, Live at the Z7.
A group of technologies and methodologies to synthesise authentic human performances from specific artists. Evans is slated to introduce the technology in a forthcoming movie by Academy Award-winning director Robert Zemeckis.
A set of processes and techniques to increase the clarity of audio tracks by targeting both physical (sonic) and cognitive auditory phenomena. Evans debuted the technology on Flying Colors’ Live at the Z7. As per Wikipedia: “Critics place it among the best-sounding live albums ever made.”
Evans created several perceptually lossless systems for commercial projects. These included artefact-free vocal isolation for Alice Cooper and a time-shifting algorithm for a song featuring Steve Vai.
[2025] A traditional audio loop is translated to MIDI. Using features of PRISM’s Performer and Transformer system, the performance, instrumental physics, ambience, and other elements are edited independently of each other.
[2025] PRISM isolates performances from their recordings. This enables Retrospective Engineering—going “back in time” to edit the initial conditions (e.g. instrument setup, acoustics, microphones, tuning) of a recording—instead of traditional mixing (i.e., attempting to alter the results). Featuring Mike Portnoy.
[2025] PRISM features audio transformations that, while impossible in the real world, evolve contemporary audio engineering practices. This video demonstrates several examples using a tom track. Featuring Mike Portnoy.
A deep dive into PRISM’s technologies for lossless conversion of audio to MIDI, including phase-accurate prediction of theoretical audio. Featuring Marco Minnemann and Frank Us.
PRISM is an audio system that converts audio to event-based data (e.g. MIDI) and back again. This enables LLMs to process the hierarchical musical structures (e.g. Schenkerian), isolated from audio data. This video is a quick, 3-minute introduction to editing MIDI translated from audio. Featuring Steve Morse, Chad Wackerman, Jim Cox and John Ferraro.
Edge Case Example: A demonstration of PRISM’s adaptation to a non-trivial isolation example. Featuring Steve Lukather and Joe Bonamassa.