Staff profile

Dr Bruce Wiggins


Lecturer in Electronics, Maths and Computing

Subject

Electrical and Electronic Engineering, Music and Music Production

College

College of Engineering and Technology

Department

Electronics, Computing and Mathematics

Campus

Markeaton Street, Derby Campus

Email

b.j.wiggins@derby.ac.uk

About

I graduated with a 1st class honours degree in Music Technology and Audio System Design from the University of Derby in 1999. My interest in audio signal processing spurred me to continue at Derby, completing my PhD, “An Investigation into the Real-time Manipulation and Control of 3D Sound Fields”, in 2004, in which I solved the problem of generating Ambisonic decoders for irregular speaker arrays and also carried out work on binaural/transaural reproduction systems.

I am now a lecturer in the Department of Electronics, Computing and Mathematics in the College of Engineering and Technology, where I teach electronics, audio programming and digital signal processing, all of which are fed by my continuing research interests in Ambisonic surround sound systems. This work has earned me excellence awards for promising research in 2005/6, for the application of technology in 2006/7 and for excellence in learning, teaching and assessment in 2007/8. My work on Ambisonics was also featured as an Impact Case Study in REF2014. I also organise and run the yearly Sounds in Space Research Symposium; a programme and video of the 2017 event are available online.

Teaching responsibilities

Lecturer on the courses:

Module leader for the modules:

Also contributes to the modules:

Has previously led the modules:

Research interests

Membership of professional bodies

Qualifications

Undergraduate qualifications

Postgraduate qualifications

Research qualifications

Recent conferences

Experience in industry

Additional interests and activities

Recent Projects

Teaching Informed by Research Projects

2011 to Present - Sounds in Space Research Symposium. I organise, host and present at our annual Sounds in Space Research Symposium, which looks at the aesthetics, production and technical details of spatial audio. Videos, pictures and programmes of past events are available online.

2006/7 & 2007/8 - SPARG True Multi-channel Mixing Environment. The Signal Processing Applications Research Group (SPARG) has carried out much research into hierarchical multi-channel audio platforms and algorithms. However, current audio mixing and editing software is 'hard wired' to use only a fixed number of speakers, with internal workings predicated on stereo mixing paradigms, making true, flexible multi-channel sound mixing problematic at best. This project implemented and documented a true hierarchical, flexible mixing environment using the established, permanent Multi-channel Sound Research Lab at the University of Derby. Our KRK Multi-channel Sound Lab consists of 30 speakers and four subwoofers, allowing full 3D audio creation and audition.

Custom software to drive the various speaker arrays available in the lab (the Wigware Ambisonic plug-ins) has been written and hosted in music production software (such as Reaper - www.reaper.fm - and Audiomulch - www.audiomulch.com) to allow creative use of the system within a convenient and familiar workflow. This work is being fed into a number of undergraduate modules, allowing our students to create cutting-edge, future-proof audio presentations. The outcomes of this work were presented at the Institute of Acoustics' Reproduced Sound 24 International Conference, and posters were presented at two University of Derby Learning, Teaching and Assessment Conferences.

2008/9 - Novel Human Computer Interaction Development. Following the recent success of the Multi-media Applications project 'Wii are the Music makers', which resulted in local news and radio coverage along with a stand prepared for the 'NanoWhat' event, this project's aim was to develop novel human-computer interfaces and to embed this work into both performance (their use) and technical teaching content (their development), using cheap, readily available materials that allow more intuitive control of audio and lighting/show-control software. Wii controllers and webcams costing around £20 each can, for example, be used to create multi-touch and motion-sensing controllers that would normally cost over £2,000 each.
 
The work combined hardware such as webcams, Wii controllers and mobile phones with available software (such as GlovePIE - http://carl.kenner.googlepages.com/glovepie - and EyesWeb) and custom-written Java applications in order to create flexible, wireless, powerful and intuitive human-computer interaction devices that can be used to control various audio/video/lighting parameters in different ways. For example, the motion of someone walking across a stage can be used to control the virtual position of a sound source, or Wii controllers can be handed around the audience to remix audio loops in real time.
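As a rough illustration of the kind of mapping involved, the sketch below converts a tracked, normalised stage position into a panning azimuth. The function, its parameters and the 120-degree stage width are hypothetical and not taken from the project's actual code.

```python
def stage_position_to_azimuth(x_norm: float, stage_width_deg: float = 120.0) -> float:
    """Map a performer's normalised x-position (0.0 = stage left,
    1.0 = stage right) to a panning azimuth in degrees, with 0 degrees
    straight ahead.  Illustrative placeholder only."""
    # Centre the position around 0 and scale to +/- stage_width_deg / 2
    return (x_norm - 0.5) * stage_width_deg


# Example: a position a quarter of the way across the stage
# maps to 30 degrees off-centre.
print(stage_position_to_azimuth(0.25))  # -30.0
```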
 
Both the technical and artistic outcomes of this work were presented at the Forum for Innovation in Music Production and Composition, Leeds College of Music, UK. A poster was also presented at the University of Derby Learning, Teaching and Assessment Conference.


Research Inspired Curriculum Fund Projects

2007/8 - The simulation of distance in multi-channel audio. This project carried out the research needed to ascertain what the SoundField microphones actually record in order to encode distance. This was tested using our SoundField microphones in combination with the software packages Adobe Audition 3 and the Aurora audio testing plug-in suite. The results can then be used to create a plug-in for standard audio packages (a VST plug-in) that correctly encodes distance cues alongside the panning information. The work also feeds into SPARG's previous research, as it can be used in the calibration of the Ambisonic decoders that have already been created: distance compensation can then be set up to take into account a) the distance at which the SoundField microphones were calibrated (i.e. where their 'focal distance' is set), and b) the distance at which the speakers are placed. This work was presented at the Institute of Acoustics' Reproduced Sound 25 International Conference and at the University of Derby's Annual Research Conference.
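For illustration, the sketch below applies the two most basic distance cues (inverse-distance gain and propagation delay) to a mono signal. It is a minimal example of the general idea only, not the plug-in described above; the function and parameter names are hypothetical, and a full implementation would also account for the microphone's 'focal distance' and the speaker radius.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C


def apply_distance_cues(mono, sample_rate, source_dist, ref_dist=1.0):
    """Apply two basic distance cues to a mono signal:
    inverse-distance gain relative to ref_dist, and a propagation
    delay of source_dist / c.  Illustrative only."""
    gain = ref_dist / max(source_dist, ref_dist)  # 1/r law, clamped at ref_dist
    delay_samples = int(round(source_dist / SPEED_OF_SOUND * sample_rate))
    delayed = np.concatenate([np.zeros(delay_samples), np.asarray(mono, dtype=float)])
    return gain * delayed


# Example: a source at 5 m is ~14 dB quieter and ~15 ms later than at 1 m.
out = apply_distance_cues(np.ones(4), 48000, 5.0)
```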

Wigware Plug-in Suite (Windows and Mac)

Wigware Ambisonic Decoder (WAD)

This program now comes in two flavours: a DirectShow filter and a VST plug-in. The DirectShow filter allows any DirectShow-capable audio player (such as Windows Media Player) to read and decode B-format wave files; the format is specified on Richard Dobson's web page, and the Waveformat Extensible file format it uses is documented on Microsoft's web site.

The VST version comes in 1st and 2nd order versions (3rd and 4th order versions to follow) and allows you to alter the polar patterns of the speaker feeds either across the whole frequency range or via 'shelf filters' with a variable cut-off. Both VST plug-ins will derive outputs for a standard ITU 5-speaker array, with higher orders giving better frontal resolution.
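As a rough sketch of what a basic first-order decode looks like, the example below forms each speaker feed as a 'virtual microphone' pointed at that speaker, with an adjustable directivity standing in for the polar-pattern control described above. It assumes FuMa-style B-format (W scaled by 1/√2) and a regular horizontal array; decoders for irregular layouts such as ITU 5.1, and dual-band shelf-filter decoders, need optimised coefficients rather than this textbook projection.

```python
import numpy as np


def decode_first_order(W, X, Y, speaker_azimuths_deg, directivity=0.5):
    """Very basic first-order horizontal decode: each speaker feed is a
    'virtual microphone' pointed at that speaker.

    W, X, Y      -- horizontal B-format signals (FuMa convention assumed,
                    i.e. W carries the pressure signal scaled by 1/sqrt(2))
    directivity  -- 0.0 = omni, 1.0 = figure-of-eight; a stand-in for the
                    polar-pattern control mentioned above
    Returns an array of shape (num_speakers, num_samples).
    Illustrative only, not the Wigware decoder's actual method."""
    feeds = []
    for az in np.radians(speaker_azimuths_deg):
        omni = np.sqrt(2.0) * W                 # undo the FuMa 1/sqrt(2) on W
        fig8 = X * np.cos(az) + Y * np.sin(az)  # figure-of-eight toward speaker
        feeds.append((1.0 - directivity) * omni + directivity * fig8)
    return np.array(feeds) / len(speaker_azimuths_deg)


# Example: decode to a square array at +/-45 and +/-135 degrees
# speakers = decode_first_order(W, X, Y, [45, -45, 135, -135])
```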

Check my personal website (www.brucewiggins.co.uk) for updates to this software.

Wigware Ambisonic Panner (mono to 1st and 2nd order)

A panning plug-in that takes mono in and outputs 1st or 2nd order B-format. These currently include near-field compensation and distance filters, but panning currently works only on the surface of the sphere. They have been released early due to issues found with currently available Ambisonic encoders.
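For reference, first-order panning on the surface of the sphere reduces to the standard B-format encoding equations sketched below. The FuMa W weighting of 1/√2 is assumed here; other normalisation conventions exist, and this is an illustration rather than the plug-in's actual code.

```python
import numpy as np


def encode_first_order(mono, azimuth_deg, elevation_deg=0.0):
    """Pan a mono signal to first-order B-format (W, X, Y, Z) on the
    surface of the sphere.  FuMa channel ordering and the 1/sqrt(2)
    W weighting are assumed."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    W = mono / np.sqrt(2.0)
    X = mono * np.cos(az) * np.cos(el)
    Y = mono * np.sin(az) * np.cos(el)
    Z = mono * np.sin(el)
    return W, X, Y, Z


# Example: encode a source 30 degrees to the left, on the horizontal plane
# W, X, Y, Z = encode_first_order(signal, azimuth_deg=30.0)
```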

Wigware Ambisonic Reverb (1st order; 2nd order coming soon)

A simple, recursive, 4-channel reverb plug-in, perfect for 1st order Ambisonics.
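One common way to build a simple recursive multichannel reverb is a feedback delay network; the sketch below shows a minimal 4-channel version with a Hadamard feedback matrix, purely as an illustration of the approach rather than the actual plug-in's design. The delay times and feedback gain are arbitrary placeholders.

```python
import numpy as np


def fdn_reverb_4ch(x, sample_rate, delays_ms=(29.7, 37.1, 41.1, 43.7),
                   feedback=0.75):
    """Minimal 4-channel feedback delay network (FDN): the four channels
    feed four delay lines whose outputs are mixed back into the inputs
    through an orthogonal (Hadamard) feedback matrix.

    x is an array of shape (4, num_samples); returns the wet signal
    with the same shape.  Illustrative sketch only."""
    delays = [int(sample_rate * d / 1000.0) for d in delays_ms]
    hadamard = 0.5 * np.array([[1,  1,  1,  1],
                               [1, -1,  1, -1],
                               [1,  1, -1, -1],
                               [1, -1, -1,  1]])
    buffers = [np.zeros(d) for d in delays]
    write = [0, 0, 0, 0]
    y = np.zeros_like(x, dtype=float)
    for n in range(x.shape[1]):
        # Read the oldest sample from each delay line
        outs = np.array([buffers[i][write[i]] for i in range(4)])
        y[:, n] = outs
        # Mix the delay-line outputs through the feedback matrix and
        # write input plus feedback back into the delay lines
        fb = feedback * (hadamard @ outs)
        for i in range(4):
            buffers[i][write[i]] = x[i, n] + fb[i]
            write[i] = (write[i] + 1) % delays[i]
    return y


# Example: reverberate one second of 4-channel noise at 48 kHz
# wet = fdn_reverb_4ch(np.random.randn(4, 48000), 48000)
```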

Recent publications

Conference Papers

Professional journal

Other publications

Other public output (reports, exhibitions)