CLASS LOCATION: Tech Institute Lecture Room 5
DAYS AND HOURS: Mon, Wed 3:30pm - 4:50pm

office location: 3-323 Ford Building
office phone number: (847) 491-7184
office hours: Mon 5pm - 6pm

TEACHING ASSISTANT 1: Prem Seetharaman
office location: Ford 3.210
office hours: Thu 12pm - 2pm

TEACHING ASSISTANT 2: Fatemeh Pishdadian
office location: Ford 3.210
office hours: Tue 3pm - 5pm

REQUIRED TEXTBOOK: Fundamentals of Music Processing

PROGRAMMING LANGUAGE AND ENVIRONMENT: We will be using the IPython Notebook (also known as Jupyter) with Python 2.7


Prior programming experience sufficient to complete the laboratory assignments in Python is required. Completion of the Engineering Analysis series (GEN_ENG 205-2) or EECS 211 or EECS 231 would demonstrate sufficient experience. A willingness to deal with math is also a prerequisite.


How do you tell the sound of a clarinet from the sound of a kazoo? Is this song a waltz or a tango? If your friend likes Yo La Tengo, would she prefer a CD by the Flaming Lips or Bon Jovi? Can a computer answer these questions?

Researchers in computational music perception apply techniques from signal processing, psychology, music theory, machine learning, and natural language processing to build auditory interfaces for human-computer interaction. Current application areas include vocal interfaces and search engines for music databases, machine accompaniment of human musicians, automated music recommendation systems, and tools for music production.

Machine Perception of Music will introduce students to the field of computational music perception through a combination of lectures, readings, and lab work in Python. Students will learn the basics of how sound and music are recorded and encoded by computers as .wav and MIDI files. The class will also explore the basics of audio perception, including the relationship between pitch and frequency and the difficulties inherent in auditory scene analysis by humans and machines. Basic classification and sequence alignment techniques will also be introduced.
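As a small taste of the pitch-and-frequency material covered in class, the sketch below converts a MIDI note number to its frequency in hertz. It assumes standard 12-tone equal temperament with concert pitch A4 (MIDI note 69) tuned to 440 Hz; the function name `midi_to_hz` is just an illustrative choice, not part of any course code.

```python
def midi_to_hz(note, a4_hz=440.0):
    """Convert a MIDI note number to frequency in Hz.

    Assumes 12-tone equal temperament: each semitone multiplies the
    frequency by 2**(1/12), and MIDI note 69 (A4) is tuned to a4_hz.
    """
    return a4_hz * 2.0 ** ((note - 69) / 12.0)

# A4 (note 69) is 440 Hz; one octave up (note 81) doubles it to 880 Hz,
# and middle C (note 60) comes out near 261.6 Hz.
```

Because MIDI files store note numbers rather than waveforms, a conversion like this is the bridge between the symbolic (MIDI) and audio (.wav) representations discussed in the course.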