Implementation of the Psychoacoustic Ear Model II in MPEG-1 Layer III (MP3), as described in ISO/IEC 11172-3

https://github.com/nick7ong/mp3_codec

Overview

This project implements the Psychoacoustic Ear Model II used in MPEG-1 Layer III (MP3), as described in ISO/IEC 11172-3. The model estimates perceptual masking thresholds to determine which frequency components of an audio signal can be discarded without audible loss, enabling efficient compression. We implemented and simulated the key steps of this codec pipeline in Python (NumPy, SciPy) and tested them on two audio files: flute.wav (a solo flute recording) and queen.wav (a segment of Queen's Bohemian Rhapsody).

*See the Results section to hear sound examples

FFT Analysis

We began by loading the audio signals and computing short-time Fourier transforms using a Hann window with 50% overlap (frame size = 1024, hop size = 512). We normalized the resulting spectral energy into sound pressure level (SPL) and decibels relative to full scale (dBFS). Frames with very low energy (below -96 dBFS) were skipped to avoid unnecessary downstream processing.
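A minimal sketch of this stage, assuming 16-bit PCM input mixed down to mono and the 90.302 dB SPL normalization offset commonly paired with the ISO psychoacoustic models (the exact offset in our code may differ):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import get_window

    FRAME_SIZE, HOP_SIZE = 1024, 512
    SPL_OFFSET = 90.302  # assumed normalization constant (dB)

    def frame_spl_spectra(path):
        """Yield per-frame SPL spectra, skipping near-silent frames."""
        sr, x = wavfile.read(path)
        if x.dtype.kind == "i":
            x = x / np.iinfo(x.dtype).max      # scale integer PCM to [-1, 1]
        if x.ndim > 1:
            x = x.mean(axis=1)                 # mix down to mono
        win = get_window("hann", FRAME_SIZE, fftbins=True)
        for start in range(0, len(x) - FRAME_SIZE + 1, HOP_SIZE):
            X = np.fft.rfft(x[start:start + FRAME_SIZE] * win)
            dbfs = 20 * np.log10(np.abs(X) / FRAME_SIZE + 1e-12)
            if dbfs.max() < -96:               # skip very low-energy frames
                continue
            yield sr, SPL_OFFSET + dbfs        # shift dBFS up to an SPL scale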

Identification of Tonal and Noise Maskers

We applied the Bark scale, a psychoacoustically motivated frequency mapping, to the FFT bins to analyze how the human auditory system groups frequencies. Tonal and noise maskers were identified by scanning SPL spectra for local maxima that meet masking criteria based on spectral shape and frequency proximity. The search window for local peaks dynamically expands with frequency index to model the widening critical bands of human hearing.
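A sketch of the tonal-masker search, assuming Zwicker's Bark mapping and the 7 dB local-prominence rule from the ISO models; the neighborhood breakpoints below are illustrative rather than the exact table values we used:

    import numpy as np

    def hz_to_bark(f):
        """Zwicker's Hz-to-Bark mapping."""
        return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

    def search_offsets(k):
        """Search radius grows with bin index, mimicking the widening
        critical bands of hearing (illustrative breakpoints)."""
        if k < 64:
            return (2,)
        if k < 128:
            return (2, 3)
        return (2, 3, 4, 5, 6)

    def find_tonal_maskers(spl):
        """Bins that are local maxima and exceed their neighborhood by >= 7 dB."""
        tonal = []
        for k in range(3, len(spl) - 7):
            if spl[k] <= spl[k - 1] or spl[k] < spl[k + 1]:
                continue                       # not a local maximum
            if all(spl[k] - spl[k + d] >= 7 and spl[k] - spl[k - d] >= 7
                   for d in search_offsets(k)):
                tonal.append(k)
        return tonal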


Decimation and Reorganization of Maskers

Not all detected maskers are perceptually relevant. We eliminated those falling below the threshold of human audibility (the threshold in quiet) and merged maskers lying in close Bark-scale proximity. This step reduces redundant or weak masking components, simulating the perceptual redundancy removal common in audio codecs.
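A sketch of the decimation step, assuming Terhardt's approximation of the threshold in quiet and a 0.5-Bark merge window (the window width is an assumption, not necessarily the value in our code):

    import numpy as np

    def threshold_in_quiet(f_hz):
        """Terhardt's approximation of the absolute hearing threshold (dB SPL)."""
        khz = np.maximum(f_hz, 20.0) / 1000.0
        return (3.64 * khz ** -0.8
                - 6.5 * np.exp(-0.6 * (khz - 3.3) ** 2)
                + 1e-3 * khz ** 4)

    def decimate_maskers(maskers, freqs, bark):
        """Drop inaudible maskers, then keep only the strongest masker
        within any 0.5-Bark neighborhood.  maskers: list of (bin, SPL)."""
        audible = [(k, p) for k, p in maskers
                   if p >= threshold_in_quiet(freqs[k])]
        audible.sort(key=lambda m: bark[m[0]])
        kept = []
        for k, p in audible:
            if kept and bark[k] - bark[kept[-1][0]] < 0.5:
                if p > kept[-1][1]:
                    kept[-1] = (k, p)          # replace the weaker neighbor
            else:
                kept.append((k, p))
        return kept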


Individual Masking Thresholds

Using spreading functions, we calculated how much each tonal or noise masker contributes to masking nearby frequency bins. Tonal maskers have narrow, sharply peaked masking curves, while noise maskers produce broader, flatter masking effects. These individual masking thresholds form the basis for computing global audibility.
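A sketch of the per-masker threshold computation, assuming the two-slope spreading function and the tonal/noise masking offsets from the common Model II formulation (our exact constants may differ):

    import numpy as np

    def spreading_function(dz, p_masker):
        """Two-slope spreading (dB) over Bark distance dz for a masker at
        p_masker dB SPL; -inf outside the [-3, 8) Bark window."""
        sf = np.full_like(dz, -np.inf, dtype=float)
        m = (dz >= -3) & (dz < -1)
        sf[m] = 17 * dz[m] - 0.4 * p_masker + 11
        m = (dz >= -1) & (dz < 0)
        sf[m] = (0.4 * p_masker + 6) * dz[m]
        m = (dz >= 0) & (dz < 1)
        sf[m] = -17 * dz[m]
        m = (dz >= 1) & (dz < 8)
        sf[m] = (0.15 * p_masker - 17) * dz[m] - 0.15 * p_masker
        return sf

    def individual_thresholds(maskers, bark, tonal):
        """Threshold (dB SPL) each masker casts on every bin; tonal maskers
        get a larger downward offset than noise maskers."""
        curves = []
        for k, p in maskers:                   # (bin index, SPL)
            dz = bark - bark[k]
            delta = (-6.025 - 0.275 * bark[k]) if tonal else (-2.025 - 0.175 * bark[k])
            curves.append(p + delta + spreading_function(dz, p))
        return np.array(curves)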


Global Masking Threshold

We aggregated all individual masking curves and the threshold in quiet to produce a global masking threshold for each frame. This threshold defines the frequency-dependent SPL below which any signal component is considered inaudible due to simultaneous masking; it is the basis for perceptual quantization and bit allocation.
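The aggregation is an intensity (power) sum of all individual curves plus the threshold in quiet, converted back to dB. A minimal sketch, assuming the arrays produced by the previous steps:

    import numpy as np

    def global_masking_threshold(tonal_curves, noise_curves, quiet_db):
        """Power-sum all masking curves with the threshold in quiet.
        *_curves are (n_maskers, n_bins); quiet_db is (n_bins,)."""
        total = 10.0 ** (quiet_db / 10.0)
        for curves in (tonal_curves, noise_curves):
            if len(curves):
                total += np.sum(10.0 ** (curves / 10.0), axis=0)
        return 10.0 * np.log10(total)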

Quantization and SMR-Driven Low-Pass Filtering

To simulate perceptual compression, we applied two key techniques: quantization of the spectral coefficients guided by the global masking threshold, and low-pass filtering driven by the signal-to-mask ratio (SMR), which discards frequency content the model deems fully masked.

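A sketch of both steps under stated assumptions: bins with non-positive SMR are zeroed, the highest bin with a positive SMR sets a low-pass cutoff, and surviving bins are quantized with a step size tied to their masking headroom. This is illustrative and not the Layer III bit-allocation loop.

    import numpy as np

    def smr_lowpass_and_quantize(spectrum, spl, global_thr, max_smr_db=60.0):
        """Zero masked bins, low-pass at the highest audible bin, and
        quantize the remainder more coarsely where masking headroom is small."""
        smr = spl - global_thr                 # signal-to-mask ratio per bin (dB)
        audible = np.nonzero(smr > 0)[0]
        if len(audible) == 0:
            return np.zeros_like(spectrum)
        cutoff = audible[-1]                   # SMR-driven low-pass cutoff bin
        out = np.where(smr > 0, spectrum, 0.0)
        out[cutoff + 1:] = 0.0
        # Step size shrinks as SMR grows, so strongly audible bins keep precision.
        step = 10.0 ** (-np.clip(smr, 0.0, max_smr_db) / 20.0)
        return np.round(out / step) * step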

Results