First International Research Workshop 2014¶
The Jazzomat Research Project: Issues, Applications and Perspectives for Computational Methods in Music Research¶
The Jazzomat Research Project is situated at the intersection of jazz research, cognitive psychology of creativity, and statistical music analysis. One central aim is to describe and discriminate stylistic features of jazz improvisation by examining jazz solos of various artists and styles with the help of a large computer database and newly developed software tools. Moreover, we want to explore the cognitive foundations of improvisation, test theories about the cognition of creative processes, and evaluate and enhance pedagogical approaches towards jazz improvisation. Furthermore, the project generally aims at advancing statistical and computational methods of music analysis in various areas of music information retrieval.
The Jazzomat Research Project is funded by the German Research Foundation with a three-year grant (October 2012 – September 2015). After the first two years of project runtime, the research workshop aims at presenting, sharing and discussing results of the project and at getting further theoretical and methodological input from international researchers with various areas of expertise. These include style analysis of jazz musicians and genres as well as jazz theory and jazz pedagogy (session 1), psychology of creative processes as well as computer-aided analysis of recurring melodic and rhythmic patterns (session 2), and music information retrieval, esp. the interaction of audio-based and symbolic music data analysis (session 3). Additionally, during an evening roundtable jazz musicians and jazz educators will discuss implications of computational jazz research for jazz education.
The workshop is hosted by
The Liszt School of Music Weimar
Department of Musicology Weimar-Jena
Carl-Alexander-Platz 1
hochschulzentrum am horn
D-99425 Weimar, Germany
Funded by the German Research Foundation (DFG)
Free entrance, but please register via e-mail: jazzomat@hfm-weimar.de
Program¶
Friday 26th September 2014¶
9:30 Welcome & Introduction: The Jazzomat Project
Session I: Jazz Research – Style Analysis – Jazz Theory¶
09:45 Martin Pfleiderer, Weimar
10:30 Martin Schütz, Hamburg
11:15 Coffee break
11:30 Andreas Kissenbeck, München/Münster
12:15 Michael Kahr, Graz
13:00 Lunch Break
Session II: Creative Processes – Pattern Mining¶
14:00 Klaus Frieler, Weimar
14:45 Olivier Lartillot, Aalborg
15:30 Coffee break
15:45 Berit Janssen, Amsterdam
16:30 Daniel Müllensiefen, Klaus Frieler, Kelly Jakubowski, London and Weimar
17:15 Dinner break
Panel Discussion (in German): Entfernte Cousinen oder Blutsbrüder? Schnittstellen zwischen Jazzforschung und Jazzpädagogik
with Jo Thönes, Wolfgang Bleibel, Michael Kahr, Andreas Kissenbeck, Christian Dittmar, Wolf-Georg Zaddach
21:00 Jazz Concert and Jam Session
with X & The Gang feat. Nils Feldmann
Saturday 27th September 2014¶
Session III: Music Information Retrieval – Perspectives for Audio Analysis¶
9:30 François Pachet, Paris
10:15 Meinard Müller, Erlangen
11:00 Coffee Break
11:15 Jakob Abeßer, Ilmenau
12:00 Christian Dittmar, Erlangen
13:00 Final Discussion: Perspectives for Computational Methods in Music Research
14:00 Farewell Note & End of Workshop
Abstracts¶
Friday 26th September 2014¶
Session I: Jazz Research – Style Analysis – Jazz Theory¶
MeloSpyin’ the Trane. Exploring Improvisations of John Coltrane with MeloSpySuite¶
Martin Pfleiderer, The Liszt School of Music Weimar
John Coltrane is one of the most influential improvisers in modern jazz. Currently, twelve transcribed and annotated improvisations by Coltrane are contained in the Weimar Jazz Database. Using these transcriptions, I will explore both the music of John Coltrane and the possibilities the MeloSpySuite offers for the analysis and description of a personal style. In my paper, I will focus on several issues: How can the overall dramaturgy of a Coltrane solo (e.g. “Impressions”) be characterized by global features explored with MelFeature? How does Coltrane's vocabulary of patterns change over time? Which strategies of inside-outside play and of motivic development does Coltrane apply in his modal period? How can Coltrane's style be distinguished from the style of his followers, e.g. Dave Liebman or Michael Brecker?
Martin Pfleiderer
Martin Pfleiderer studied musicology, philosophy, and sociology at the University of Gießen, where he received his doctorate in 1998. From 1999 to 2005 he was assistant professor for Systematic Musicology at the University of Hamburg, where he received his postdoctoral lecture qualification (habilitation) in 2006 with research on rhythm in jazz and popular music. In 2009 he became professor for the history of jazz and popular music at the Liszt School of Music in Weimar. Since 2012 he has been principal investigator of the Jazzomat Research Project funded by the German Research Foundation. He has also performed as a jazz saxophonist with various groups.
Structural Aspects of Jazz Improvisation. A New Approach Based on Mid-Level-Analysis¶
Martin Schütz, University of Hamburg
In jazz research, melodic patterns and the role of underlying motor programs in connection with the process of jazz improvisation have been discussed for some time. But can similarities and patterns also be observed by focusing on structural aspects of jazz improvisation? In search of a suitable method to address this question, the “stream-of-ideas-analysis” was developed to examine and to compare the architecture of different jazz improvisations by taking the mid-level perspective of improvisational ideas.
This method enables a novel way of investigating and comparing jazz improvisations while remaining closely linked to the observed data from specific improvisations. Furthermore, this approach is not limited to the extraction of a musician's predominant individual patterns but also offers the possibility of looking into the underlying processes of improvisational behaviour. Inspired by methodological aspects of qualitative content analysis and grounded theory, a differentiable and expandable category system was created through an open coding process which transforms an improvisation into a sequence of continuously succeeding melodic phrases (the “stream of ideas”). This category system currently consists of nine main categories, which are defined either by melodic contour (e.g., the idea “line”) or by intra-musical aspects (e.g., variation of the theme, development of motifs). The two main operations of this analytical approach are segmentation (identifying the position and duration of each idea) and categorisation (labeling the segments according to their characteristics with a suitable category).
The method is currently designed for jazz piano improvisation; in this talk we present an adaptation of the approach to monophonic instruments. Additionally, we discuss how the manually performed “stream-of-ideas analysis” could be combined with computerized feature extraction tools (e.g., melfeature) to develop secondary typologies of ideas, and whether and how the process could be fully automated into a computer program.
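To give a rough idea of the data such an annotation produces, the following Python sketch shows one way a manually annotated “stream of ideas” might be represented and summarised. The category labels and the annotated chorus are purely hypothetical; this is an illustration, not part of the actual analysis workflow.

from dataclasses import dataclass

@dataclass
class Idea:
    # One segment of the "stream of ideas": position, duration and category label
    # (hypothetical labels such as "line" or "motif_development" are used below).
    start_beat: float
    end_beat: float
    category: str

def category_sequence(ideas):
    # Reduce an improvisation to its succession of idea categories.
    return [i.category for i in sorted(ideas, key=lambda i: i.start_beat)]

def category_share(ideas):
    # Proportion of the total duration covered by each category.
    total = sum(i.end_beat - i.start_beat for i in ideas)
    share = {}
    for i in ideas:
        share[i.category] = share.get(i.category, 0.0) + (i.end_beat - i.start_beat) / total
    return share

# A hypothetical, manually annotated chorus.
ideas = [Idea(0, 8, "line"), Idea(8, 12, "motif_development"), Idea(12, 16, "line")]
print(category_sequence(ideas))   # ['line', 'motif_development', 'line']
print(category_share(ideas))      # {'line': 0.75, 'motif_development': 0.25}

Feature extractors such as melfeature could then be applied per segment rather than per solo, which is one way the secondary typologies mentioned above might be derived.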
Martin Schütz
Born in 1985 in southern Germany, Martin Schütz began studying musicology and history of art at the University of Freiburg in 2006. After moving to Hamburg, he studied systematic musicology, historical musicology and history of art at the University of Hamburg. Furthermore, he took courses at the Institute of Jazz Research (University of Music and Performing Arts, Graz) as well as at the Institute of Electronic Music and Acoustics (University of Technology, Graz). Since receiving his master's degree in 2011, he has been continuing his research on jazz piano improvisation as a PhD student at the Institute of Systematic Musicology (University of Hamburg).
Model Analysis. A Theoretical Approach to Analysing and Creating Melodic Material¶
Andreas Kissenbeck, University of Music and Performing Arts Munich, University of Münster
Model Analysis is a tool for analysing melodic material. It is based on fundamental aspects of music in terms of perceptual psychology and acoustics. Therefore an (appropriately modified) application to any style of music seems possible, although it is mainly designed for the context of tonal music. Model Analysis breaks harmonically consistent segments down with respect to different structural and harmonic aspects. It can likewise be used for the construction of melodies or even entire melodic concepts. This makes the tool very efficient for developing and improving melodic skills in jazz improvisation, and it works equally well for inside and outside playing. After the presentation of the Model Analysis conception, its practical benefits will be demonstrated, since it can be used for
examining the ingredients which create the personal style of a specific player,
learning melodic phrases and applying them to different harmonic situations,
creatively developing personal and unique ways of melodic expression.
There are fundamental differences in conception between the Jazzomat tools and Model Analysis. Model Analysis is a theory that offers the user a concept for arriving at practical results in playing, whereas Jazzomat is a device for empirical research and thus provides information by itself. But the theoretical basis of Jazzomat shows similarities to Model Analysis. These will be discussed, as well as the question of whether Jazzomat algorithms can provide a computerized Model Analysis.
Andreas Kissenbeck
Born in 1969 in Bonn (Germany), Andreas Kissenbeck studied mathematics, sport and education science at the universities of Berlin and Regensburg, followed by a scholarship and studies in jazz piano at the University of Music Würzburg and a PhD in musicology at the University of Würzburg. He is a pianist/Hammond organist, composer and arranger. In 2002 he received the Jazz Prize of the Süddeutsche Zeitung, and in 2006 the Next Generation Award of Germany's biggest jazz magazine, Jazz Thing. He has played in Germany and abroad with internationally renowned artists such as Malcolm Duncan, Greg L. May, Benny Bailey, Bobby Shew, Jiggs Whigham, Tony Lakatos, Peter Weniger and Torsten Goods. He is a lecturer at the universities in Munich and Münster.
Towards the Analysis of Linear Aspects in Tonal Jazz Harmony¶
Michael Kahr, Institute for Jazz, University of Music and Performing Arts Graz, Austria
The interrelation of vertical structures and horizontal events is of particular relevance in jazz theory. Chord-scale theory, as a branch of pedagogical and speculative jazz theory, has attempted to formalize this approach. As in classical music, parsimonious voice-leading – the smooth linear connection between adjacent vertical structures – is commonly regarded as a desirable goal in jazz. Some jazz musicians, particularly composers and arrangers, have developed idiosyncratic approaches to voice-leading which have been identified as significant aspects of their individual artistic identities.
This paper explores the analysis of vertical events as constituents of tonal jazz harmony in the context of the currently developed MeloSpy software. The presentation focuses on a specific method for the analysis of voice-leading events, which has been developed for the analysis of the harmonically complex music of jazz composer and multi-instrumentalist Clare Fischer, and examines the capabilities of the MeloSpy software with regard to assisting in, and potentially extending, the previously developed analytical approach.
In particular, the presentation involves (1) a guide for the production of reductive voice-leading graphs, (2) the identification of voice-leading events which are relevant to the jazz practitioner, (3) the results of a previous study of Fischer's music using a manually executed statistical approach and (4) the exploration of the potential of the MeloSpySuite in a selected case study of voice-leading events in the music of Clare Fischer.
The anticipated results from this study will potentially enhance the methodological possibilities of quantitative studies of linear aspects in jazz harmony.
Michael Kahr
Michael Kahr is a jazz pianist, composer, arranger and researcher, employed as a Senior Lecturer at the Institute for Jazz at the University of Music and Performing Arts in Graz, Austria. He has also lectured at the University of Vienna and the University of Sydney. He designed and conducted the post-doctoral research project “Jazz & the City: Identity of a Capital of Jazz”, funded by the Austrian Science Fund FWF from 2011 to 2013. His dissertation on aspects of harmony and context in the music of Clare Fischer was funded by an International Endeavour Research Scholarship from the Australian Government. In 2010 Kahr conducted a research project in Los Angeles as a Fulbright Scholar and organized the first International Clare Fischer Symposium. In 2011 he received the Morroe Berger/Benny Carter Jazz Research Award from Rutgers University in Newark. Kahr has released several recordings and performed extensively throughout Europe, the U.S., Australia, Africa, China and the Middle East.
Friday 26th September 2014¶
Session II: Creative Processes - Pattern Mining¶
Pattern Usage in Monophonic Jazz Solos¶
Klaus Frieler, The Liszt School of Music Weimar
A common hypothesis is that jazz soloists utilise preconfigured patterns as a “construction kit” to build their solos while coping with heavy cognitive demands, particularly at high tempos. This assumption has never been rigorously tested empirically. One fundamental issue is to find a suitable definition of “pattern” in the context of jazz improvisation. A distinction must be made between internal motor programs and the resulting sound products. The former are not directly observable and need to be inferred from the latter. This can be done by defining patterns as sub-sequences of observed sequences of tone events. This approach raises several important issues. First, it is not clear which musical dimensions in which combination are involved, and hence which representation is the most adequate. Second, it is not obvious how to identify true patterns, since repeated subsequences will occur by chance alone and by other mechanisms. Hence, one must rely on occurrence frequencies of observed patterns, which brings in a sampling problem, because occurrence frequencies depend on the content and size of the corpus used.
In this study we investigate a comprehensive set of more than 200 jazz solos taken from the Weimar Jazz Database. We applied several transformations concurrently and examined the resulting sequences for pattern content using the melpat tool (MeloSpySuite). For comparison, an equal-sized set of solos was simulated using frequency distributions of single elements estimated from the full corpus. Real and simulated solos were searched for patterns over a range of lengths. Only patterns occurring more than once were counted. Coverage of solos by patterns and other likelihood indices were calculated for each set.
The results clearly indicate that the likelihood of occurrence for patterns of more than about 5 or 6 elements is significantly higher in observed solos than in simulated solos, which provides strong empirical evidence that jazz soloists indeed rely on pre-configured patterns, albeit with small overlap between performers and solos. Moreover, longer tonal patterns tend to appear over the same chordal context and with very similar rhythmical and metrical content. Finally, there seem to be significant differences between individual performers with respect to pattern usage.
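As a rough illustration of the counting logic described above (not the actual melpat implementation), the following Python sketch counts repeated interval n-grams in a toy solo and compares them with a zeroth-order surrogate drawn from the same unigram distribution; the interval sequence is invented for the example.

import random
from collections import Counter

def count_patterns(seq, n, min_count=2):
    # Count n-grams that occur at least min_count times in a sequence.
    grams = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

def simulate(seq, rng):
    # Zeroth-order surrogate: draw elements independently from the unigram distribution.
    return rng.choices(seq, k=len(seq))

# Hypothetical interval sequence standing in for a transcribed solo.
solo = [2, 2, 1, -3, 2, 2, 1, -3, 5, -1, 2, 2, 1, -3, 0, 1]
rng = random.Random(42)
for n in (3, 4, 5):
    real = count_patterns(solo, n)
    surrogate = count_patterns(simulate(solo, rng), n)
    print(n, "repeated patterns in solo:", len(real), "in surrogate:", len(surrogate))

In the study, the same comparison is of course made over many solos, several representations and a properly sized set of simulated solos.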
Klaus Frieler
Klaus Frieler studied physics and mathematics in Hamburg and graduated in 1997 with a diploma in theoretical physics. After several years working in the software industry, he finished his Ph.D. in Systematic Musicology in 2008 with a dissertation on mathematical models of melody cognition. From 2008 to 2012 he worked as a lecturer for Systematic Musicology at the University of Hamburg. Currently, he is a post-doc researcher with the Jazzomat Research Project. He also works as a music expert witness, scientific consultant, lecturer and programmer. His main research interests are computational musicology, modelling of music cognition, music creativity, music information retrieval and popular music research.
Computer-Automated Motivic Analysis of the Weimar Jazz Database Through Exhaustive Pattern Mining¶
Olivier Lartillot, Aalborg University
I present a computer-automated motivic analysis of the Weimar Jazz Database and explain the principles of the computational method, using concrete examples from the database as illustrations. Motivic repetitions are searched for exhaustively along pitch and rhythmic dimensions at various levels of specificity. Motivic patterns are not necessarily defined along fixed sets of parameters: their sequential descriptions can switch from one parametric space to another. For instance, a pattern can start with a very specific pitch description and continue with less precise gross contours. Motifs form taxonomies, with more specific motivic patterns highlighting more literal repetitions, regrouped into broader motivic classes with more generic descriptions. The combinatorial explosion inherent to the mathematical characterisation of the problem is controlled with mechanisms that filter out structural redundancy, based on closed pattern mining and pattern cyclicity.
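For readers unfamiliar with closed pattern mining, the following Python sketch illustrates the basic idea on contiguous subsequences: a pattern is kept only if no one-element extension occurs equally often. This is a minimal illustration of the redundancy filter, not Lartillot's incremental algorithm, and the pitch sequence is invented for the example.

from collections import Counter

def ngram_counts(seq, n):
    # Occurrence counts of all contiguous n-grams.
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

def closed_patterns(seq, max_n, min_count=2):
    # A pattern is "closed" if no extension by one element occurs equally often.
    counts = {}
    for n in range(1, max_n + 2):          # one extra length so extensions can be checked
        counts.update(ngram_counts(seq, n))
    closed = {}
    for pat, c in counts.items():
        if len(pat) > max_n or c < min_count:
            continue
        extensions = [p for p in counts
                      if len(p) == len(pat) + 1 and (p[:-1] == pat or p[1:] == pat)]
        if all(counts[p] < c for p in extensions):
            closed[pat] = c
    return closed

# Hypothetical pitch sequence: (60, 62, 64) always occurs as part of (60, 62, 64, 65),
# so only the longer, more specific pattern is reported.
seq = [60, 62, 64, 65, 67, 60, 62, 64, 65, 59, 60, 62, 64, 65]
print(closed_patterns(seq, max_n=4))   # {(60, 62, 64, 65): 3}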
The parametric dimensions (such as diatonic and metrical representations) along which pattern identification is performed are reconstructed from the MIDI format using original methods. Rhythm quantisation is itself founded on pattern mining: repetitions of similar durations induce elementary cyclic patterns, while rhythmical patterns guide the building up of the levels constituting the metrical structure. In order to detect varied repetitions with ornamentation, deep-structure syntagmatic connections between more distant notes in the monody are added to the surface's syntagmatic chain, and the pattern mining is carried out along the paths of the resulting syntagmatic network.
The conception of these rules for efficient pattern management can be related to intuitive principles from Gestalt theory and music cognition, guided by phenomenological reflections about music listening and understanding.
This computational approach is being released as part of MiningSuite, a free open-source Matlab framework for audio and music analysis.
Olivier Lartillot
Olivier Lartillot is a researcher in computational music analysis at the Department of Architecture, Design and Media Technology, Aalborg University, Denmark. Formerly at the Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, he designed MIRtoolbox, a reference tool for music feature extraction from audio. He also works on symbolic music analysis, notably on sequential pattern mining. In the context of his 5-year Academy of Finland research fellowship, he conceived the MiningSuite, an analytical framework that combines audio and symbolic research. He continues this work as part of the collaborative European project Learning to Create (Lrn2Cr8), which acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET grant number 610859.
A Comparison of Similarity Metrics for Musical Pattern Matching¶
Berit Janssen, Meertens Institute Amsterdam
Melodic similarity has been widely investigated for whole melodies; for short melodic patterns, it is still an elusive concept. This study aims to close this knowledge gap by systematically comparing six similarity measures in a musical pattern matching task, assessing which measure matches human annotations of repeating patterns in a collection of folk songs most closely. Melodic patterns are represented as pitch contours and compared with the following six similarity measures: the number of mismatches (kMismatch), the amount of pitch differences (Difference), the number of edit operations (Levenshtein distance), the number of edit operations with a modified substitution function (Substitution Distance), the correlation coefficient (Correlation) and the difference of polynomial interpolation curves (Pitch Derivative) of the pitch contours. Moreover, the influence of pattern length on the quality of the results is assessed. Our analysis indicates that measures such as kMismatch and Difference perform comparably to the Levenshtein and Substitution Distance measures. Shape Similarity and Correlation achieve the best results for long patterns, rendering them interesting candidates for many use cases despite their lower performance for shorter musical patterns. We conclude with important further steps in this research area: the investigation of different music representations and the analysis of other melody collections.
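As an illustration of what such measures compute, the following Python sketch implements plausible stand-ins for four of them (mismatch count, summed pitch difference, Levenshtein distance, and correlation) on two invented pitch contours; the exact definitions used in the study may differ.

import numpy as np

def k_mismatch(a, b):
    # Number of positions at which two equal-length contours differ.
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def pitch_difference(a, b):
    # Sum of absolute pitch differences between aligned positions.
    return int(np.sum(np.abs(np.asarray(a) - np.asarray(b))))

def levenshtein(a, b):
    # Classic edit distance with unit costs for insertion, deletion and substitution.
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return int(d[len(a), len(b)])

def correlation(a, b):
    # Pearson correlation of two aligned pitch contours.
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical pitch contours (MIDI pitches) of two pattern occurrences.
p1 = [60, 62, 64, 65, 64, 62]
p2 = [60, 62, 63, 65, 64, 60]
print(k_mismatch(p1, p2), pitch_difference(p1, p2),
      levenshtein(p1, p2), round(correlation(p1, p2), 2))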
Berit Janssen
Berit studied Systematic Musicology in Hamburg, Germany, and Electroacoustic Composition at Anglia Ruskin University, Cambridge, UK. After receiving her MA in 2009, she moved to the Netherlands, where she was involved in research and development at the Studio for Electro-Instrumental Music (STEIM), Amsterdam. She also worked in research and education as a production coordinator for the Digital Art Lab, Zoetermeer. Since 2012, she has been a Ph.D. candidate at the Meertens Institute and the University of Amsterdam, working on the variation of folk song melodies through oral transmission. She uses computational methods to measure the variation of melodies in order to uncover the musical factors behind melodic stability.
Is it the Song and Not the Singer? Hit Song Science Using Structural Features of Melodies¶
Daniel Müllensiefen, Kelly Jakubowski, Goldsmiths, University of London
Klaus Frieler, The Liszt School of Music Weimar
Hit Song Science, i.e. the identification of musical features of commercially successful songs, has usually been approached on the basis of sound features derived from audio recordings. The statistical prediction of commercial success or popularity has had variable success, with critics remarking that it might be difficult or even impossible to capture the psychologically important factors for commercial success via acoustic feature analyses.
Therefore, we focus in this study on a complementary aspect, namely compositional and structural features of tunes that might contribute to commercial success. The empirical basis for this study consists of 200 tunes that have all entered the UK charts. We use the highest chart position as well as their length of time in the charts (in weeks) as indicators of their commercial success. For the monophonic main melodies taken from the 200 songs, we compute a broad range of musical features (such as pitch, interval, rhythmic and metrical distribution features, sequential characteristics, and complexity measures) using ‘melfeature’ from the MeloSpySuite toolset. The tunes’ features serve as numerical predictors for chart position and duration in the subsequent modeling steps. First, we perform a feature selection stage using the variable importance indices from a random forest regression model to select the features with largest explanatory power. Second, from this selected subset of features we construct a regression tree model using conditional inference based on permutation tests.
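The two modeling steps can be sketched as follows in Python with scikit-learn, using synthetic data in place of the real melodic features and chart statistics; note that a plain CART regression tree stands in for the conditional inference trees used in the study.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical feature matrix: rows = tunes, columns = melodic features
# (e.g. pitch range, interval entropy, rhythmic density), as melfeature might output.
X = rng.normal(size=(200, 30))
# Hypothetical target: weeks in the charts.
y = 10 + 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(size=200)

# Step 1: rank features by random-forest variable importance.
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:5]

# Step 2: fit an interpretable tree on the selected features
# (a plain CART tree as a stand-in for conditional inference trees).
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X[:, top], y)
print("selected feature indices:", top)
print("tree R^2 on training data:", round(tree.score(X[:, top], y), 2))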
The results of this analysis will indicate to which degree certain structural features of melodies are associated with commercial success in popular music, and possible implications will be discussed. We will contextualize the results within a broader set of questions asking whether and to what degree musical structure can influence behavior, or phrased differently, whether certain musical structures are more prone than others to trigger certain cognitive or affective responses.
Daniel Müllensiefen
Daniel studied Systematic Musicology, Historical Musicology and Journalism at the universities of Hamburg (Germany) and Salamanca (Spain). He wrote his doctoral dissertation in Systematic Musicology on memory for melodies at the University of Hamburg and obtained his PhD in 2005. From 2006 until 2009 he worked as a Research Fellow in the Computing department at Goldsmiths College, University of London. Since July 2009 he has been a lecturer, and since 2012 senior lecturer, in the Psychology department at Goldsmiths, part of the Music, Mind and Brain research group, and co-director of the Master's course in Music, Mind and Brain at Goldsmiths. Since September 2010 he has also been working as Scientist in Residence with the advertising agency DDB UK. His current research projects include earworms and their musical structure, the development of the Goldsmiths Musical Sophistication Index (Gold-MSI), and the notion of melodic similarity in the context of court cases of musical plagiarism. He is also a Co-Investigator on the AHRC-supported large grant “Transforming Musicology”.
Kelly Jakubowski
Kelly Jakubowski is in her second year of PhD studies in Psychology at Goldsmiths, University of London under the supervision of Lauren Stewart and Daniel Müllensiefen. Her PhD research aims to combine behavioural, computational and neuroscientific approaches to study the phenomenon of involuntary musical imagery, or “earworms”, and is funded by the Leverhulme Trust. She holds a Bachelor of Music degree in Violin Performance/Music Theory from Baldwin Wallace Conservatory of Music (USA) and completed Master's degrees in Music at the Ohio State University (USA) and Music Psychology at Goldsmiths, University of London. Her other research areas include absolute pitch, melodic memory, and music and emotion.
Panel Discussion (in German): Entfernte Cousinen oder Blutsbrüder? Schnittstellen zwischen Jazzforschung und Jazzpädagogik (Distant Cousins or Blood Brothers? Interfaces between Jazz Research and Jazz Pedagogy)¶
How can jazz education at music academies, music schools, and general schools benefit from jazz research?
While the history of jazz and its cultural, technological, economic, political, and social contexts are a firm part of jazz education, there are further points of contact and further potential for collaboration between jazz research and jazz practice – above all in the following areas:
Music-psychological creativity research: What actually is improvising? How does one learn to improvise? How can creative music-making be taught?
Jazz analysis and jazz theory: How can the study of improvisations help to formulate a theory of jazz, or to correct, refine, and complete existing approaches to jazz theory?
Software tools in jazz education: Which tools are a useful addition to teaching (e.g. Band-in-a-Box or Songs2See)? What concrete need is there for new computer applications?
At this event, these and further questions will be discussed among jazz musicians, jazz educators, jazz theorists, jazz researchers, and music informatics researchers.
Participants
Wolfgang Bleibel studied classical saxophone at the Musikhochschule Detmold (Münster department) and jazz at the University of Music and Performing Arts in Graz as well as with Herb Geller (1976-78). From 1978 he played regularly as lead alto saxophonist in the NDR Hamburg big band, later with NDR Hannover, Radio Bremen and WDR Köln, and took part in numerous concerts, productions and recordings with musicians such as Benny Bailey, Dave Liebman, Bill Dobbins, Anthony Braxton, Philip Catherine, Paul Kuhn, Walter Norris, Joe Pass, Jiggs Whigham, Ray Anderson, Danny Richmond and others. Since 1987 he has taught at German music academies (Münster, Bremen). Since 1995 he has been professor of jazz saxophone at the Hochschule für Musik FRANZ LISZT Weimar, where he currently heads the Institute for Jazz.
Jo Thönes studied classical percussion at the Musikhochschule Köln with Prof. Christoph Caskel. He has played numerous concerts and productions with John Abercrombie, Arild Andersen, Rainer Brüninghaus, Palle Danielsson, Klaus Doldinger, Jasper van't Hof, Joachim Kühn, Dave Liebman, Palle Mikkelborg, Albert Mangelsdorff, Markus Stockhausen, John Taylor, Gary Thomas, James Blood Ulmer, Kenny Wheeler and Tony Oxley, among others; undertaken numerous tours for the Goethe-Institut to Eastern Europe, the Middle East and Southeast Asia; performed at international festivals in Frankfurt am Main, Berlin, Moers, Seville, Warsaw and The Hague; and taken part in contemporary music performances of works by Zimmermann, Kagel, Reich and Ives. He has led workshops since 1980 and has taught at German music academies (Hamburg, Essen) since 1985. Since 1995 he has been professor of jazz drumset at the Hochschule für Musik FRANZ LISZT Weimar.
Wolf-Georg Zaddach was born in 1985 in Lübben/Spreewald and studied musicology, arts management and modern history at the Hochschule für Musik FRANZ LISZT Weimar and the Friedrich-Schiller-Universität Jena (Magister Artium 2011; thesis topic: “Jazz in Czechoslovakia 1956-1968”), as well as music management at the HAMU Praha and jazz guitar (VOS Praha) with David Dorůžka in Prague. From November 2011 to November 2013 he was a research associate in the arts management programme and honorary managing director of weim|art e.V. Since October 2012 he has been a member of the research project “Melodisch-rhythmische Gestaltung von Jazzimprovisationen. Rechnerbasierte Musikanalyse einstimmiger Jazzsoli” (melodic-rhythmic design of jazz improvisations: computer-based music analysis of monophonic jazz solos). In his doctoral research in musicology he studies heavy and extreme metal in the GDR of the 1980s, for which he received a doctoral scholarship from the Studienstiftung des deutschen Volkes in February 2013. He also works as a freelance guitarist (jazz, fusion, metal), with concerts in Germany and abroad.
Saturday 27th September 2014¶
Session III: Music Information Retrieval – Perspectives for Audio Analysis¶
Imitating Jazz Styles for Composition, Harmonization and Accompaniment. The FlowMachines Project¶
François Pachet, Sony CSL Paris
Imitating a given musical style is an old topic in computer music research, and many approaches have been developed to that aim with varying degrees of success. However, no approach has so far enabled users to constrain the generation so that the imitative material not only “sounds” like the originals but also satisfies arbitrary constraints. We describe a large-scale project about style imitation in which we have developed a new generation of algorithms (called Markov constraints) that enable users to generate sequences in arbitrary styles while enforcing several types of structural constraints, including meter, maxOrder (related to plagiarism) or compressibility (related to the idea of “balance”). I describe several systems based on these powerful techniques for composition in the style of (any composer from real or fake books), arranging in the style of (notably the band Take 6), as well as accompaniment (comping) in the style of (several jazz pianists). I illustrate these systems with real-world examples generated from a comprehensive database of jazz lead sheets. I discuss the issues of validation (what is a “good” composition?) in the context of automatic generation, and more generally of style creation (how do composers create “new” musical languages?).
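To make the problem statement concrete, the following Python sketch trains a first-order Markov model on two invented pitch sequences and enforces a simple unary constraint (the generated phrase must end on a given pitch) by naive rejection sampling. Markov constraints solve this kind of problem by constraint propagation rather than rejection, so this sketch only illustrates the task, not the technique.

import random
from collections import defaultdict

def train_markov(sequences):
    # Estimate first-order transition lists from example sequences.
    trans = defaultdict(list)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            trans[a].append(b)
    return trans

def generate(trans, start, length, end_pitch, rng, tries=10000):
    # Naive rejection sampling: resample until the unary end constraint holds.
    for _ in range(tries):
        seq = [start]
        while len(seq) < length and trans[seq[-1]]:
            seq.append(rng.choice(trans[seq[-1]]))
        if len(seq) == length and seq[-1] == end_pitch:
            return seq
    return None

# Hypothetical corpus of pitch sequences standing in for lead-sheet melodies.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 65, 64, 62, 60]]
rng = random.Random(1)
print(generate(train_markov(corpus), start=60, length=8, end_pitch=60, rng=rng))

Rejection sampling quickly becomes infeasible once several constraints (meter, maxOrder, compressibility) are combined, which is precisely the motivation for a constraint-propagation formulation.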
François Pachet
François Pachet received his Ph.D. and Habilitation degrees from Paris 6 University (UPMC). He is a Civil Engineer (École des Ponts et Chaussées) and was Assistant Professor in Artificial Intelligence and Computer Science at Paris 6 University until 1997. He is now director of the SONY Computer Science Laboratory Paris, where he leads the music research team, which conducts research on interactive music listening, composition and performance. Since its creation, the team has developed several award-winning technologies (constraint-based spatialization, intelligent music scheduling using metadata) and systems (MusicSpace, PathBuilder, the Continuator for interactive music improvisation, etc.). His current goal, funded by an ERC Advanced Grant, is to create a new generation of authoring tools able to boost individual creativity. These tools, called Flow Machines, abstract “style” from concrete corpora (text, music, etc.) and turn it into a malleable substance that acts as a texture. Applications range from music composition to text or drawing generation and probably much more.
Cross-domain Music Retrieval¶
Meinard Müller, University of Erlangen-Nuremberg, International Audio Laboratories Erlangen
Music collections comprise documents of various types and formats including text, symbolic data, audio, image, and video.
In this presentation, we present and discuss various cross-modal music retrieval scenarios that are based on the query-by-example paradigm: given a music representation or a fragment of it (used as query or example), the task is to automatically retrieve documents from a given music collection containing parts or aspects that are similar to it.
Such strategies can be loosely classified according to their specificity, which refers to the degree of similarity between the query and the database documents. While high-specificity retrieval tasks such as audio fingerprinting can be regarded as largely solved, the requirements on retrieval systems change significantly when considering retrieval tasks of lower specificity such as audio matching and version identification. In such scenarios, one needs to deal with variations in aspects such as instrumentation, tempo, musical structure, key, or melody.
In connection with the Jazzomat Research Project, various cross-modal retrieval scenarios may be considered. For example, one may use a symbolic encoding of a monophonic musical theme or solo as a query. One objective could be to retrieve all music recordings where a similar theme or solo is played.
To make the various information sources comparable, one needs to convert the query and the database documents into suitable mid-level representations of musical relevance. For example, chroma-based music representations may be used if one is interested in finding similar harmonic progressions. For comparing monophonic with polyphonic sources, one may require source separation techniques for extracting the main melody from a given music recording.
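A minimal sketch of such a cross-modal comparison, assuming a hypothetical audio file 'solo.wav' and an invented symbolic query, might look as follows in Python with librosa; note that this naive sliding comparison ignores tempo differences, which in practice would call for tempo-invariant matching such as subsequence DTW.

import numpy as np
import librosa

def symbolic_to_chroma(pitches, durations, frames_per_beat=4):
    # Turn a monophonic theme (MIDI pitches + durations in beats) into a chroma sequence.
    cols = []
    for p, d in zip(pitches, durations):
        col = np.zeros(12)
        col[p % 12] = 1.0
        cols += [col] * int(round(d * frames_per_beat))
    return np.array(cols).T  # shape (12, frames)

def sliding_similarity(query, db):
    # Cosine similarity of the query against every subsequence of the database chroma.
    q = query / (np.linalg.norm(query) + 1e-9)
    n = query.shape[1]
    sims = []
    for start in range(db.shape[1] - n + 1):
        window = db[:, start:start + n]
        window = window / (np.linalg.norm(window) + 1e-9)
        sims.append(float(np.sum(q * window)))
    return np.array(sims)

# Hypothetical query: the first bars of a theme, given symbolically.
query_chroma = symbolic_to_chroma([62, 65, 69, 67], [1, 1, 1, 2])

# Hypothetical recording; 'solo.wav' is a placeholder path.
y, sr = librosa.load("solo.wav")
db_chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
scores = sliding_similarity(query_chroma, db_chroma)
print("best match at frame", int(np.argmax(scores)))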
The main goal of this presentation is to discuss various audio and music processing techniques while highlighting the sometimes subtle differences between various retrieval scenarios. In particular, we elaborate on the differences between fragment-level and document-level retrieval, as well as on various specificity levels found in the music search and matching process.
Meinard Müller
Meinard Müller studied mathematics (Diplom) and computer science (Ph.D.) at the University of Bonn, Germany. In 2002/2003, he conducted postdoctoral research in combinatorics at the Mathematical Department of Keio University, Japan. In 2007, he finished his Habilitation at Bonn University in the field of multimedia retrieval, writing a book titled “Information Retrieval for Music and Motion,” which appeared as a Springer monograph. From 2007 to 2012, he was a member of Saarland University and the Max-Planck-Institut für Informatik, leading the research group “Multimedia Information Retrieval and Music Processing” within the Cluster of Excellence on Multimodal Computing and Interaction. Since September 2012, Meinard Müller has held a professorship for Semantic Audio Processing at the International Audio Laboratories Erlangen, which is a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Fraunhofer-Institut für Integrierte Schaltungen IIS. His recent research interests include content-based multimedia retrieval, audio signal processing, music processing, music information retrieval, and motion processing.
Score-Informed Estimation of Pitch-Gliding and Vibrato in Trumpet and Saxophone Jazz Solos¶
Jakob Abeßer, The Liszt School of Music Weimar, Fraunhofer IDMT Ilmenau
The personal style of a musician has a strong effect on her or his musical performance. In this study, I will analyze a selection of 23 transcribed trumpet and saxophone solos taken from the Weimar Jazz Database with respect to the common frequency modulation techniques vibrato, bend, slide, and fall-off. As a first step, the fundamental frequency (F0) contour will be tracked in a spectrogram representation of the original audio signal. The parameters note onset, note offset, and note pitch, which are available from the existing solo transcriptions, allow the search area in the spectrogram to be narrowed down significantly.
In order to automatically classify the applied frequency modulation techniques, a machine learning approach will be utilized. Different numerical descriptors (features) will be computed in order to characterize the shape of each F0 contour with respect to its slope, modulation frequency, modulation range, and number of modulation periods. Furthermore, each tone is segmented into a transient part, which often shows an F0 transition from the previous tone, a stable part with respect to the pitch perception of the tone, and a decay part, which often blends into the F0 contour of the subsequent tone.
With each tone being represented by a number of features, a statistical classification model will be trained based on a set of annotated jazz solos.
In this presentation, I will describe the proposed algorithm in detail and discuss the classification results for different instruments, solos, and artists.
Finally, I will sketch further research steps for incorporating other style-specific parameters such as micro-timing and instrument sound towards a general model of personal articulation in jazz improvisations.
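As a rough sketch of the kind of contour descriptors mentioned above (slope, modulation range, and modulation frequency), the following Python fragment computes them for a synthetic F0 contour with vibrato; the concrete feature definitions used in the study may differ.

import numpy as np

def f0_contour_features(f0, frame_rate=100.0):
    # Simple shape descriptors of an F0 contour (Hz values, one per frame).
    f0 = np.asarray(f0, dtype=float)
    t = np.arange(len(f0)) / frame_rate
    slope = np.polyfit(t, f0, 1)[0]                      # overall drift in Hz per second
    detrended = f0 - np.polyval(np.polyfit(t, f0, 1), t)
    mod_range = detrended.max() - detrended.min()        # modulation depth in Hz
    crossings = np.sum(np.diff(np.sign(detrended)) != 0)
    mod_freq = crossings / (2.0 * t[-1]) if t[-1] > 0 else 0.0  # approx. periods per second
    return {"slope": slope, "mod_range": mod_range, "mod_freq": mod_freq}

# Hypothetical contour: a note around 440 Hz with ~6 Hz vibrato lasting 0.5 s.
t = np.arange(0, 0.5, 0.01)
f0 = 440 + 8 * np.sin(2 * np.pi * 6 * t)
print(f0_contour_features(f0))

Feature vectors of this kind, computed per tone, would then feed the statistical classification model mentioned above.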
Jakob Abeßer
Jakob Abeßer studied computer engineering at the Technische Universität Ilmenau and graduated in 2008. From 2008, he worked as a Ph.D. student in the Semantic Music Technologies group at the Fraunhofer Institute for Digital Media Technology (IDMT). In 2010, he completed a four-month research stay as a visiting Ph.D. student at the Finnish Centre of Excellence in Interdisciplinary Music Research in Jyväskylä, Finland. In December 2013, he submitted his Ph.D. thesis entitled “Automatic Transcription of Bass Guitar Tracks applied for Music Genre Classification and Sound Synthesis”. Since then, he has worked as a research assistant at IDMT as well as at the Liszt School of Music in Weimar, where he participates in the Jazzomat Research Project. His main research interests are music information retrieval, automatic music transcription, musical instrument recognition and modeling, as well as music performance analysis.
Estimating Swing Ratios and Soloist Micro-timing from Jazz Recordings with Aligned Beat Grids¶
Christian Dittmar, International Audio Laboratories Erlangen
Automatic music analysis conducted by means of digital audio signal processing and pattern recognition can be helpful for larger-scale systematic musicology studies. Fully automated methods are of course faster than human experts but usually fail when confronted with complex music recordings. However, a detailed focus on certain musical aspects, such as micro-timing, might benefit considerably from automated analysis. So far, several studies have been published on swing ratio and ensemble timing in jazz recordings. These were conducted in a semi-automatic manner by manually marking onsets of different instruments in spectrograms. We expect that automatic tools can be of great help here. With respect to the Jazzomat Research Project, we plan to investigate the findings of Friberg & Sundström (2002) by the following steps:
Automatically detect ride cymbal (optionally hi-hat) onsets.
Determine the swing ratio of the ride cymbal w.r.t. manual beat annotations.
Determine swing ratio and onset delay of solo wind instruments.
Compare the two swing ratios and the beat onsets of the ride cymbal and the soloist.
We constrain the study to a subset of the Jazzomat dataset that fulfills the following requirements:
Beat grid annotations have been manually aligned to the jazz solo recordings.
The solo wind instruments play lines of eighth notes and have been manually transcribed.
The drumset is playing a relatively steady beat which is clearly audible, preferably nicely separated in the stereo panorama (e.g., in 1950’s Hardbop recordings).
In addition to this methodology, we plan to investigate acoustic features that represent the salience of periodicities, such as the beat histogram or the log-lag autocorrelation. The major advantage of having aligned beat grid annotations is that we can extract these features with fixed starting points and excerpt lengths instead of arbitrary segments. This should lead to very consistent characteristics of the feature vectors given the same rhythmic feel. Furthermore, we want to investigate whether a preceding harmonic/percussive separation, panorama-based spectral filtering and/or bandpass filtering is beneficial for the onset detection.
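A minimal sketch of the swing ratio computation in steps 2-4 above, assuming onset times and a beat grid are already available, could look as follows in Python; the onset and beat values are invented for illustration.

import numpy as np

def swing_ratio(onsets, beats):
    # Ratio of the first to the second "eighth" duration within each beat,
    # estimated from the offbeat onset closest to the beat centre.
    ratios = []
    for b0, b1 in zip(beats, beats[1:]):
        inside = [o for o in onsets if b0 < o < b1]
        if not inside:
            continue
        off = min(inside, key=lambda o: abs(o - (b0 + b1) / 2))
        first, second = off - b0, b1 - off
        if second > 0:
            ratios.append(first / second)
    return float(np.median(ratios)) if ratios else None

# Hypothetical ride-cymbal onsets (seconds) over a steady 120 BPM beat grid.
beats = np.arange(0.0, 4.0, 0.5)
onsets = np.sort(np.concatenate([beats, beats + 0.33]))   # swung offbeats at roughly 2:1
print(round(swing_ratio(onsets, beats), 2))

The same function applied to soloist onsets, together with the onset delay relative to the cymbal, would give the quantities to be compared in step 4.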
Christian Dittmar
Christian Dittmar studied electrical engineering with a specialization in digital media technology at the University of Applied Sciences Jena, Germany, from 1998 to 2002. After graduation, he joined the Metadata department at Fraunhofer IDMT, Ilmenau, Germany in 2003, and from late 2006 he headed the Semantic Music Technology group at Fraunhofer IDMT. Together with partners from industry and academia, he has managed many R&D projects in music technology. Since 2012 he has also been CEO of the spin-off company Songquito, responsible for marketing the educational music video game Songs2See, which was awarded the Innovation and Entrepreneur prize of the German Gesellschaft für Informatik in 2012. Since July 2014, he has been working in the research group of Meinard Müller at the International Audio Laboratories Erlangen.