Papers presented at Asilomar

The Audio Analysis Lab had two papers presented at the Asilomar Conference on Signals, Systems, and Computers, November 2-5, 2014. These were

  • Localizing Near and Far Field Acoustic Sources with Distributed Microphone Arrays by Martin Weiss Hansen, Jesper Rindom Jensen, and Mads Græsbøll Christensen
  • Pitch Estimation for Non-Stationary Speech by Mads Græsbøll Christensen and Jesper Rindom Jensen.

The Asilomar Conference on Signals, Systems, and Computers is a yearly conference held at the Asilomar Conference Grounds in Pacific Grove, CA, USA. It provides a forum for presenting recent and novel work in various areas of theoretical and applied signal processing.

Open position as Ph.D. student with GN ReSound

The last position as Ph.D. student on our InnovationsFonden project with GN ReSound is now open for applications! The position is within hearing aid signal processing. The Ph.D. student will be employed by GN ReSound, be enrolled in the doctoral school program in Electrical and Electronic Engineering, and work with us at the Audio Analysis Lab with Prof. Mads Græsbøll Christensen as supervisor. Read more at http://www.gn.com/careers/job?jobid=2283.

Bob Sturm joining QMUL

Audio Analysis Lab member Bob Sturm is moving to Queen Mary University of London (QMUL) at the end of the year. QMUL is home to the Centre for Digital Music (C4DM), which is well known in the music and audio technology research community. We wish Bob all the best, thank him for his many contributions to Aalborg University, and hope to continue our collaborations with him (which actually predate his joining AAU) in the future. Good luck, Bob!

New multichannel audio database introduced at IWAENC

At IWAENC 2014, we will present a new, freely available multichannel audio database recorded by members of the Audio Analysis Lab in collaboration with Signal and Information Processing at the Dept. of Electronic Systems. We believe that such a database has long been overdue!

The database is called the Single- and Multichannel Audio Recordings Database (SMARD). It contains recordings from a box-shaped listening room for various loudspeaker and array types. The recordings were made for 48 different configurations of three different loudspeakers and four different microphone arrays. In each configuration, 20 different audio segments were played and recorded, ranging from simple artificial sounds to polyphonic music. SMARD can be used for testing algorithms developed for numerous applications, and we give examples of source localisation results.

You can read more and download the database at http://www.smard.es.aau.dk/.
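For readers who would like to experiment with SMARD, here is a minimal Python sketch of one way a multichannel recording from the database could be used for a simple localisation test based on the time difference of arrival (TDOA) between two microphones, estimated with GCC-PHAT. The file name, channel selection, and microphone spacing below are hypothetical placeholders rather than actual SMARD metadata; the real file layout and array geometries are documented on the database website.

```python
# Minimal sketch: estimate the TDOA between two channels of a multichannel
# recording using GCC-PHAT, then convert it to a coarse DOA estimate.
# NOTE: the file name, channel indices, and 5 cm microphone spacing are
# hypothetical placeholders, not actual SMARD metadata.
import numpy as np
import soundfile as sf  # any multichannel WAV reader will do

def gcc_phat(x, y, fs, max_tau=None):
    """Return the estimated delay (in seconds) of y relative to x via GCC-PHAT."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                 # PHAT weighting
    cc = np.fft.irfft(R, n=n)              # generalized cross-correlation
    max_shift = n // 2
    if max_tau is not None:
        max_shift = max(1, min(int(fs * max_tau), max_shift))
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

audio, fs = sf.read("smard_recording.wav")  # placeholder file name
mic_a, mic_b = audio[:, 0], audio[:, 1]     # two channels of one array
d = 0.05                                    # assumed microphone spacing (m)
c = 343.0                                   # speed of sound (m/s)
tau = gcc_phat(mic_a, mic_b, fs, max_tau=d / c)
doa = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(f"Estimated TDOA: {tau * 1e6:.1f} us, DOA: {doa:.1f} degrees")
```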

Presentations at EUSIPCO 2014

The Audio Analysis Lab is well represented at this year’s EUSIPCO in Lisbon, Portugal next week. We have the following papers scheduled for presentation:

  • A Broadband Beamformer Using Controllable Constraints and Minimum Variance
  • Near-field Localization of Audio: A Maximum Likelihood Approach
  • DOA and Pitch Estimation of Audio Sources Using IAA-based Filtering
  • Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates
  • Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
  • Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates
  • Cluster-Based Adaptation Using Density Forest for HMM Phone Recognition

Audio Analysis Lab has moved

Over the past few weeks, the Audio Analysis Lab and its members have moved to our new location in Rendsburggade 14 along with our colleagues at AD:MT. We are now located in a brand new building at the harbor front in Aalborg, right between Musikkens Hus and the Utzon Center. In our new building, we have a nice, big corner office on the 2nd floor for the junior members of the lab, offices for senior staff, and nice new lab facilities on the ground floor, including a new listening/measurement room and the relocated Audio Visual Arena (AVA) lab featuring, among other things, an ambisonics system and a 3D tracking system.

Grant for project with GN ReSound

We are extremely happy and proud to announce that the Audio Analysis Lab (AAL) has received a grant from InnovationsFonden together with GN ReSound.

In the project, AAL and ReSound will research new methods for detecting situations where hearing aids typically are not able to help the user, such as so-called cocktail party scenarios. By knowing when these situations occur, the cocktail party problem can be dealt with much more efficiently. Furthermore, wireless communication is now present in many hearing aids, so distributed processing will be considered to make such features feasible.

The 12 million DKK project will run for four years, starting this year, and also covers the training of three Ph.D. students. The project will be headed by Prof. Mads Græsbøll Christensen, Senior Research Manager Fredrik Gran, and Senior Research Scientist Jesper Bünsow Boldt.

Further details about the project can be found here.

ICASSP 2014

The Audio Analysis Lab was well represented at this year’s ICASSP, which was held in lovely Florence, Italy. The following papers by members of the Audio Analysis Lab were presented there:

  • Fundamental Frequency and Model Order Estimation Using Spatial Filtering
  • Joint Sparsity and Frequency Estimation for Spectral Compressive Sensing
  • Model Detection and Comparison for Independent Sinusoids
  • Noise Reduction in the Time Domain using Joint Diagonalization

Villum Foundation Project Workshop 2014

On May 23, the second workshop on the project Spatio-Temporal Filtering Methods for Enhancement and Separation of Speech Signals was held. Assoc. Prof. Richard Heusdens, Delft Technical University, gave an invited talk on distributed signal processing. Other than that, the program consisted of the following presentations:

  • J. R. Jensen, Near-field Localization of Audio: A Maximum Likelihood Approach
  • M. Abou-Zleikha, A Tree-based Ensemble Learning for Speech Processing
  • S. M. Nørholm, Spatio-Temporal Audio Enhancement based on IAA Noise Covariance Matrix Estimates
  • B. Sturm, A Closer Look at Deep Learning Neural Networks with Low-level Spectral Periodicity Features
  • V. Tavakoli, A Theoretical Study of the Speech Enhancement Problem with Distributed Microphone Arrays
  • S. Karimian-Azari, Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
  • J. R. Jensen, SMARD – A Single and Multi-Channel Audio Recordings Database
  • J. K. Nielsen, Joint Sparsity and Frequency Estimation for Spectral Compressive Sensing
  • J. K. Nielsen, Bayesian vs. Classical Statistics Through Three Examples
  • H. Purwins, Audio Time Series Analysis: Experiments, Computational Analysis, & Cognitive Models
  • S. Karimian-Azari, Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates
  • A. Jakobsson, High Resolution Sparse Estimation of Exponentially Decaying Signals
  • T. Kronvall, Joint DOA and Pitch Estimation using Block Sparse Techniques
  • M. W. Hansen, Pitch-Based Acoustic Source Localization with Distributed Microphone Arrays
  • M. G. Christensen, Multi-Channel Maximum Likelihood Pitch Estimation

The participants this year were employees working on the Villum Foundation project as well as invited guests and colleagues at AAU: Richard Heusdens, Ted Kronvall, Andreas Jakobsson, Jesper Rindom Jensen, Jesper Kjær Nielsen, Mads Græsbøll Christensen, Vincent Tavakoli, Sam Karimian-Azari, Sidsel Marie Nørholm, Martin Weiss Hansen, Hendrik Purwins, Bob Sturm, and Mohamed Abou-Zleikha.