ISMIR 2007 Tutorials

The ISMIR 2007 tutorials will take place on Sunday, September 23rd at the Vienna University of Technology.

T1: Synchronization and Matching Techniques for Music Data

by Meinard Müller and Roger B. Dannenberg

Abstract

Modern digital music libraries contain large amounts of textual, visual, and audio data as well as a variety of associated data representations, which describe music at various semantic levels. Typically, for a single musical work, there are many relevant digital documents, given in various digital formats and in multiple realizations. For example, in the case of Beethoven's Fifth Symphony, a digital music library may contain the scanned pages of some particular score edition. Or the score may be given in a digital music notation file format, which encodes the page layout of sheet music in a machine-readable form. Furthermore, the library may contain various CD recordings such as the interpretations by Karajan and Bernstein, some historical recordings by Furtwängler and Toscanini, Liszt's piano transcription of Beethoven's Fifth played by Glenn Gould, as well as a synthesized version of a corresponding MIDI file. On the one hand, this complexity and heterogeneity of music data make content-based browsing and retrieval in digital music libraries a challenging task. On the other hand, the availability of different, semantically interrelated representations can be exploited to ease many music processing tasks, e.g., by using high-level symbolic information as a priori knowledge in audio processing. In this tutorial, we will give a detailed overview of state-of-the-art MIR techniques for automatic music alignment, synchronization, and matching. The common goal of these tasks is to automatically link several types of music representations, thus coordinating the multiple information sources related to a given musical work.
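For readers unfamiliar with the underlying machinery: dynamic time warping (DTW) over frame-wise features such as chroma vectors is a standard technique in this area. The following minimal Python sketch is an illustration of that general idea, not material from the tutorial itself; the function name and the toy "chroma" sequences are invented for the example.

```python
# A minimal sketch of music synchronization via dynamic time warping (DTW).
# The two "chroma" sequences below are toy data invented for illustration;
# in practice they would be extracted from two performances of the same work.
import numpy as np

def dtw_path(X, Y):
    """Align feature sequences X (m x d) and Y (n x d); return warping path."""
    m, n = len(X), len(Y)
    # Pairwise cosine distance as local cost.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    C = 1.0 - Xn @ Yn.T
    # Accumulated cost with the classic step sizes (1,0), (0,1), (1,1).
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = C[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (m, n) to recover the optimal alignment.
    path, i, j = [], m, n
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy example: the second "performance" is a slowed-down copy of the first.
rng = np.random.default_rng(0)
perf_a = rng.random((20, 12))          # 20 frames of 12-dim chroma
perf_b = np.repeat(perf_a, 2, axis=0)  # the same music at half tempo
print(dtw_path(perf_a, perf_b)[:5])
```

Each pair in the returned path links a frame of one performance to the corresponding frame of the other, which is exactly the kind of linking structure that synchronization produces between documents.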

Biographies of the presenters

Meinard Müller studied mathematics and computer science at Bonn University, Germany, where he received a Master's degree in mathematics in 1997 and a Ph.D. in computer science in 2001. In 2002/2003, he conducted postdoctoral research in combinatorics at the Mathematical Department of Keio University, Japan. He is currently a member of the Multimedia Signal Processing Group at Bonn University, working as a researcher and assistant lecturer. His research interests include digital signal processing, multimedia information retrieval, computational group theory, and combinatorics. His special research topics include audio signal processing, music information retrieval, and the analysis and classification of 3D motion capture data.

Roger B. Dannenberg is an Associate Research Professor of Computer Science and Art on the faculty of the School of Computer Science and School of Art at Carnegie Mellon University, where he is also a fellow of the Studio for Creative Inquiry. He received a Ph.D. in Computer Science from Carnegie Mellon in 1982. Dannenberg is well known for his computer music and MIR research, especially in real-time interactive systems. His pioneering work in computer accompaniment can be viewed as the first robust music synchronization and matching system (1983); it led to three patents and the SmartMusic system now used by tens of thousands of music students. Dannenberg is also active as a trumpet player and composer, performing mainly in jazz groups and composing mainly interactive works using computers.

T2: Music Recommendation

by Oscar Celma and Paul Lamere

Abstract

As the world of online music grows, music recommendation systems become an increasingly important way for music listeners to discover new music. Commercial recommenders such as last.fm and Pandora have enjoyed commercial and critical success. But how well do these systems really work? How good are the recommendations? How far into the ‘long tail’ do these recommenders reach? In this tutorial we look at the current state-of-the-art in music recommendation. We examine current commercial and research systems, focusing on the advantages and the disadvantages of the various recommendation strategies. We look at some of the challenges in building music recommenders and we explore some of the ways that MIR techniques can be used to improve future recommenders.
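To make one of these strategies concrete: item-to-item collaborative filtering, a common approach among commercial recommenders, suggests artists whose listener bases overlap. The following minimal Python sketch illustrates the idea; the artist names and play counts are invented toy data, not drawn from any real system.

```python
# A minimal sketch of item-based collaborative filtering, one common
# music recommendation strategy. The play-count matrix is invented toy data.
import numpy as np

# Rows = listeners, columns = artists (toy play counts).
artists = ["Radiohead", "Portishead", "Miles Davis", "John Coltrane"]
plays = np.array([
    [10, 8, 0, 0],
    [ 7, 9, 1, 0],
    [ 0, 1, 9, 8],
    [ 0, 0, 8, 9],
], dtype=float)

# Cosine similarity between artist columns: artists are similar when
# largely the same listeners play them.
norms = np.linalg.norm(plays, axis=0)
sim = (plays.T @ plays) / np.outer(norms, norms)

def recommend(seed, k=2):
    """Return the k artists most similar to the seed artist."""
    i = artists.index(seed)
    ranked = np.argsort(-sim[i])
    return [artists[j] for j in ranked if j != i][:k]

print(recommend("Radiohead"))  # -> ['Portishead', 'Miles Davis']
```

Content-based and hybrid recommenders replace or augment the play-count matrix with features extracted from the audio itself, which is where MIR techniques enter the picture; such approaches can also reach items in the long tail that no one has played yet.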

Biographies of the presenters

Oscar Celma has been a researcher at the Music Technology Group since 2000 and is an Associate Professor at Pompeu Fabra University, Barcelona (Spain). Since 2006 he has been an Invited Expert of the W3C Multimedia Semantics Incubator Group. He is a member of the program committee of the Workshop on Learning the Semantics of Audio Signals (LSAS). The main focus of his research lies in music recommendation, especially hybrid approaches. Recently, Oscar received the 2nd prize in the International Semantic Web Challenge for the system named “Foafing the Music”, presented at ISMIR 2005. During his undergraduate studies, he also obtained diplomas in classical guitar and composition.

Paul Lamere is the Principal Investigator for a project called Search Inside the Music at Sun Labs, where he explores new ways to help people find highly relevant music, even as music collections get very large. Paul is especially interested in hybrid music recommenders and in using visualizations to aid music discovery. He serves on the program committees for both ISMIR 2007 and Recommenders’07. He also authors Duke Listens, a blog focusing on music discovery and recommendation.

T3: Introduction to MIRtoolbox

by Olivier Lartillot

Abstract

MIRtoolbox is an integrated set of functions written in Matlab, dedicated to the extraction of musical features from audio files. The tutorial will provide an overview of the set of features that can be extracted with MIRtoolbox, illustrated with specific examples. The objective is to offer both a synthesis of the approaches in musical feature extraction from audio and a detailed introduction to the toolbox. We will first describe the elementary mathematical operators commonly used for feature extraction (FFT, autocorrelation, filterbanks, etc.) and show how advanced techniques can be applied directly to these operators in order to improve the results and fit them to particular purposes. A detailed overview of the numerous features available in MIRtoolbox will then be given, structured according to the main musical dimensions (pitch, tonality, rhythm, timbre, form, etc.). The tutorial will explain how to perform these operations in the Matlab environment and how to benefit from the diverse options available for each feature extractor. Examples will show how to carry out the successive steps of these analyses using a series of simple commands. Distinctive aspects of the toolbox will be highlighted and illustrated, such as the simplicity and adaptability of its syntax. Various tools for statistical analysis, segmentation, and clustering will be presented. Finally, we will explain how to write new functions that build on the building blocks offered by the toolbox and that can be combined with other Matlab toolboxes.
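As a taste of the elementary operators mentioned above, the sketch below computes an FFT-based feature (the spectral centroid) and an autocorrelation-based period estimate in plain Python with NumPy. This is a language-neutral illustration of the building blocks, not MIRtoolbox code (the toolbox itself wraps such steps in Matlab one-liners); the synthetic test signal, a 440 Hz tone plus its octave, is invented for the example.

```python
# Two elementary feature-extraction operators: FFT and autocorrelation.
import numpy as np

sr = 22050                               # sample rate in Hz
t = np.arange(sr) / sr                   # one second of time stamps
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# FFT-based feature: the spectral centroid, a classic brightness/timbre cue.
mag = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / sr)
centroid = np.sum(freqs * mag) / np.sum(mag)
print(f"spectral centroid: {centroid:.0f} Hz")       # ~587 Hz for this signal

# Autocorrelation-based feature: the dominant period, as used in pitch
# (and, at longer lags, tempo) estimation.
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
lag = np.argmax(ac[20:]) + 20            # skip the peak at lag zero
print(f"estimated fundamental: {sr / lag:.0f} Hz")   # ~441 Hz (true: 440 Hz)
```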

Biography of the presenter

Olivier Lartillot is a postdoctoral researcher at the Music Department of the University of Jyväskylä. He obtained an engineering degree from Supélec Grande École, France, and a PhD in Computer Science from Ircam / University of Paris 6 in February 2004. During his undergraduate studies, he also obtained a BA in Musicology from the University of Paris-Sorbonne. His research interests span both the audio and symbolic domains of computational music analysis. He currently designs MIRtoolbox with Petri Toiviainen within the context of a collaborative project, supported by the European Commission (NEST project “Tuning the Brain for Music”, code 028570), aimed in particular at revealing the relationship between musical structure and emotion. His research in the symbolic domain focuses on topics such as motivic analysis and the estimation of communication processes in music-therapy improvisations. He has published more than 30 scientific papers on these topics and serves as a reviewer for several international journals. More information can be found at: http://www.cc.jyu.fi/~lartillo/

T4: Techniques for Implementing the Generative Theory of Tonal Music

by Keiji Hirata, Satoshi Tojo and Masatoshi Hamanaka

Abstract

This tutorial on Techniques for Implementing GTTM will summarize the entire body of work on computational approaches to the Generative Theory of Tonal Music (GTTM) and present it comprehensively to MIR researchers and computational musicologists. For anyone who wants to realize MIR based on musical semantics, the techniques used to implement GTTM provide a powerful tool. Furthermore, the tutorial will put a special focus on perspectives for future deployments, and discussion with experts in the audience will be encouraged.
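As one concrete ingredient of such implementations, the sketch below encodes a simplified version of GTTM's Grouping Preference Rule 2b (attack-point), which prefers a group boundary where the inter-onset interval between notes is locally largest. This is an illustrative reduction in Python, not code from the presenters' systems; the onset times are invented toy data.

```python
# A simplified Grouping Preference Rule 2b: an inter-onset interval (IOI)
# strictly longer than both of its neighbours marks a preferred boundary.
# Onset times (in beats) are invented toy data.
onsets = [0.0, 0.5, 1.0, 2.0, 2.5, 3.0, 3.5, 4.5, 5.0]

def gpr2b_boundaries(onsets):
    """Indices i such that a boundary is preferred between notes i and i+1."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    boundaries = []
    for i in range(1, len(iois) - 1):
        if iois[i] > iois[i - 1] and iois[i] > iois[i + 1]:
            boundaries.append(i)
    return boundaries

print(gpr2b_boundaries(onsets))  # -> [2, 6]: boundaries after beats 1.0 and 3.5
```

A full implementation must weigh many such preference rules against one another, which is a central difficulty: GTTM's rules can conflict, and the theory itself leaves their relative strengths unspecified.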

Biographies of the presenters

Keiji Hirata received his Doctor of Engineering degree in Information Engineering from the University of Tokyo, Japan, in 1987. He then joined NTT Basic Research Laboratories. From 1990 to 1993 he was at the Institute for New Generation Computer Technology (ICOT), where he was engaged in the research and development of parallel inference machines. In 1999, he joined NTT Communication Science Laboratories, where he has worked as a researcher ever since. His research interests include musical knowledge programming and remote collaboration. He served as an ICMA board member and research coordinator in 1998-2001 and as a board member of the Information Processing Society of Japan (IPSJ) in 2005-2007, and currently serves as a board member of the Japan Society for Software Science and Technology (JSSST) for the 2007-2011 term. He received the Takahashi Award from JSSST in 1987 and the IPSJ Best Paper Award in 2001. He co-translated the book "Computer Music Tutorial" by Curtis Roads in 2001. He is a member of the IPSJ, the Japanese Society for Artificial Intelligence (JSAI), and the JSSST.

Satoshi Tojo received his Bachelor of Engineering, Master of Engineering, and Doctor of Engineering degrees from the University of Tokyo, Tokyo, Japan. He worked at the Mitsubishi Research Institute, Inc., Tokyo, Japan, from 1983 to 1995. He has served at the Japan Advanced Institute of Science and Technology (JAIST), Ishikawa, Japan, as an associate professor from 1995 to 2000 and as a professor since 2000. His research interests lie in logic in artificial intelligence, including knowledge representation of artificial agents and formal semantics of natural language. He is also interested in language evolution and language models of music. He is a member of the Association for Computational Linguistics (ACL), the Japanese Society for Artificial Intelligence (JSAI), the Japan Society for Software Science and Technology (JSSST), the Japan Cognitive Science Society (JCSS), and the Information Processing Society of Japan (IPSJ).

Masatoshi Hamanaka received his Doctor of Engineering from the University of Tsukuba, Ibaraki, Japan in 2003. From 2003 to 2006 he was a JSPS Research Fellow (Japan Society for the Promotion of Science) while at the National Institute of Advanced Industrial Science and Technology (AIST); from 2006 to 2007 he was a JST Research Fellow (Japan Science and Technology Agency) at AIST. In 2007, he became an assistant professor in the Department of Intelligent Interaction Technologies, University of Tsukuba, Ibaraki, Japan. At ICMC 2004, he received the Journal of New Music Research Distinguished Paper Award, and in 2001, he received the Best Paper Award in Art at the 5th World Multiconference on Systemics, Cybernetics and Informatics (SCI). His research interests include music information processing.