
Spectral entropy

The spectral entropy captures the "peakiness" of a spectrum. A spectrum with sharp peaks has low entropy, while a spectrum with a flat distribution has high entropy. The definition is based on the Shannon entropy.

Spectral entropy is computed from the power spectrum $X_p = |X|^2 \in \mathbb{R}^M$ using the following formula:

$$\mathrm{SpectralEntropy} = \frac{-\sum_{m=0}^{M-1} p(m)\,\log_2 p(m)}{\log_2 M},$$

where:

  • $p(m)$ is the probability mass function (PMF) of the power spectrum $X_p$:

    $$p(m) = \frac{X_p[m]}{\sum_{m=0}^{M-1} X_p[m]},$$
  • $X_p[m]$ is the power at frequency bin $m$,

  • $M$ is the total number of frequency bins.
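The definitions above can be sketched in a few lines of NumPy. The signal values here are purely illustrative; only the construction of the power spectrum and its PMF follows the formulas:

```python
import numpy as np

# Illustrative toy signal (values are arbitrary).
x = np.array([1.0, 0.5, -0.3, 0.2, 0.0, -0.1, 0.4, -0.2])

X = np.fft.fft(x)      # complex spectrum X
ps = np.abs(X) ** 2    # power spectrum X_p = |X|^2
p = ps / np.sum(ps)    # PMF p(m): non-negative, sums to one
```

Dividing by the total power turns the power spectrum into a valid probability mass function, which is what allows the Shannon entropy to be applied to it.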

Normalization

The entropy is normalized by dividing by $\log_2 M$ to constrain the output to the range $[0, 1]$. This normalization makes the spectral entropy comparable across different spectra, independent of their resolution. A normalized value of 0 indicates a perfectly deterministic spectrum (e.g., a single peak), while a value of 1 indicates maximum uncertainty (e.g., a flat spectrum).
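The two extremes can be checked numerically. This is a minimal sketch; the helper name `spectral_entropy_from_power` and the spectrum length of 16 are chosen here for illustration:

```python
import numpy as np

def spectral_entropy_from_power(ps):
    # Normalized spectral entropy of a power spectrum (hypothetical helper).
    p = ps / np.sum(ps)
    p = p[p > 0]  # 0 * log2(0) is taken as 0
    return -np.sum(p * np.log2(p)) / np.log2(len(ps))

flat = np.ones(16)       # flat spectrum: maximum uncertainty
peak = np.zeros(16)
peak[3] = 1.0            # single peak: fully deterministic

# flat yields 1.0, peak yields 0.0, matching the bounds stated above.
```

For the flat spectrum each bin has probability $1/16$, so the raw entropy is $\log_2 16 = 4$ bits, which the normalization maps to exactly 1.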

Single-pass computation

The entropy is derived from the PMF of the power spectrum $p(m)$. As a result, computing the entropy typically requires two steps: first, calculating the total energy of the power spectrum, $X_{p,\mathrm{sum}} = \sum_{m=0}^{M-1} X_p[m]$, and second, computing the entropy using the PMF $p(m) = X_p[m] / X_{p,\mathrm{sum}}$.

However, the entropy formula can be reformulated to allow for a single-pass computation:

$$
\begin{aligned}
-\sum_{m=0}^{M-1} p(m)\,\log_2 p(m)
&= -\sum_{m=0}^{M-1} \frac{X_p[m]}{X_{p,\mathrm{sum}}}\,\log_2\frac{X_p[m]}{X_{p,\mathrm{sum}}} \\
&= -\sum_{m=0}^{M-1} \frac{X_p[m]}{X_{p,\mathrm{sum}}}\left(\log_2 X_p[m] - \log_2 X_{p,\mathrm{sum}}\right) \\
&= -\sum_{m=0}^{M-1} \frac{X_p[m]}{X_{p,\mathrm{sum}}}\,\log_2 X_p[m] + \log_2 X_{p,\mathrm{sum}}\underbrace{\sum_{m=0}^{M-1} \frac{X_p[m]}{X_{p,\mathrm{sum}}}}_{=1} \\
&= \log_2 X_{p,\mathrm{sum}} - \frac{1}{X_{p,\mathrm{sum}}}\sum_{m=0}^{M-1} X_p[m]\,\log_2 X_p[m]
\end{aligned}
$$
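The reformulated expression can be sketched as a single loop that accumulates both $X_{p,\mathrm{sum}}$ and $\sum_m X_p[m]\log_2 X_p[m]$ at once. The function name is hypothetical; the final division by $\log_2 M$ applies the normalization described above:

```python
import numpy as np

def spectral_entropy_single_pass(ps):
    # Single pass over the power spectrum: accumulate both sums together.
    ps_sum = 0.0
    weighted = 0.0  # running sum of X_p[m] * log2(X_p[m])
    for v in ps:
        if v > 0:  # zero bins contribute nothing to either sum
            ps_sum += v
            weighted += v * np.log2(v)
    if ps_sum == 0.0:
        return 0.0
    entropy = np.log2(ps_sum) - weighted / ps_sum
    return entropy / np.log2(len(ps))
```

This avoids materializing the PMF and the second traversal, which matters when the spectrum is streamed or very large.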


Code

INFO

The following snippet is written in a generic, unoptimized manner. It aims to be comprehensible to programmers familiar with a variety of programming languages and may not represent the most efficient or idiomatic Python. Please refer to implementations for optimized versions in different programming languages.

```py
import numpy as np


def spectral_entropy(spectrum: np.ndarray) -> float:
    # Power spectrum X_p = |X|^2.
    ps = np.abs(spectrum) ** 2
    ps_sum = np.sum(ps)
    # An all-zero spectrum carries no information; define its entropy as 0.
    if ps_sum == 0.0:
        return 0.0
    # PMF of the power spectrum.
    p = ps / ps_sum
    # Drop zero bins, since p * log2(p) -> 0 as p -> 0.
    p = p[p != 0]
    # Shannon entropy, normalized by log2(M) to the range [0, 1].
    return -np.sum(p * np.log2(p)) / np.log2(len(ps))
```
