Entropy measures for wave functions
We can define an entropy measure, or information content, for a wave function expanded as a linear combination of Slater determinants as (Shannon 1948; Ivanov, Lyakh, and Adamowicz 2005; Van Raemdonck 2017; Lain et al. 2015): \[\begin{align} \require{physics} I_C &= - \sum_{\vb{k}, \thinspace c_{\vb{k}} \neq 0}^{\dim{\mathcal{F}}} \abs{c_{\vb{k}}}^2 \log_2( \abs{c_{\vb{k}}}^2 ) \\ &= - \frac{1}{\ln(2)} \sum_{\vb{k}, \thinspace c_{\vb{k}} \neq 0}^{\dim{\mathcal{F}}} \abs{c_{\vb{k}}}^2 \ln( \abs{c_{\vb{k}}}^2 ) \end{align}\] for a normalized wave function, i.e. \[\begin{equation} \sum_{\vb{k}}^{\dim{\mathcal{F}}} \abs{c_{\vb{k}}}^2 = 1 \thinspace . \end{equation}\] Since every squared coefficient is at most 1, its logarithm is at most 0, which makes the Shannon entropy inherently non-negative.
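As a concrete illustration, the following is a minimal sketch of this formula in Python, assuming the expansion coefficients are available as a normalized NumPy array; the function name `shannon_entropy` is chosen here for illustration and is not part of any particular library.

```python
import numpy as np


def shannon_entropy(coefficients: np.ndarray) -> float:
    """Return I_C (in bits) for a normalized vector of expansion coefficients.

    Zero coefficients are excluded from the sum, matching the
    restriction c_k != 0 in the definition above.
    """
    weights = np.abs(coefficients) ** 2  # |c_k|^2
    weights = weights[weights > 0]  # skip vanishing coefficients
    return float(-np.sum(weights * np.log2(weights)))
```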
The minimal entropy is attained when a single coefficient equals \(1\) and all other coefficients are zero. In this case (e.g. in a Hartree-Fock expansion), we have \[\begin{equation} I_{C, \text{min}} = 0 \thinspace . \end{equation}\] The maximal entropy is reached for wave functions in which all coefficients are equal, \[\begin{equation} c_{\vb{k}} = \frac{1}{ \sqrt{ \dim{\mathcal{F}} } } \thinspace , \end{equation}\] leading to an entropy value of \[\begin{equation} I_{C, \text{max}} = \log_2( \dim{\mathcal{F}} ) \thinspace . \end{equation}\]
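To illustrate the two limiting cases, the sketch below (reusing the hypothetical `shannon_entropy` function from above) checks that a single-determinant expansion yields \(I_C = 0\) and a uniform expansion yields \(I_C = \log_2(\dim{\mathcal{F}})\); the dimension \(8\) is an arbitrary choice.

```python
import numpy as np

dim_F = 8  # arbitrary dimension of the determinant expansion

# Single non-zero coefficient (e.g. a Hartree-Fock-like expansion): I_C = 0.
single = np.zeros(dim_F)
single[0] = 1.0
print(shannon_entropy(single))   # 0.0

# All coefficients equal to 1/sqrt(dim_F): I_C = log2(dim_F).
uniform = np.full(dim_F, 1.0 / np.sqrt(dim_F))
print(shannon_entropy(uniform))  # ~3.0, i.e. log2(8)
```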
A wave function is then said to be less compact (or more smeared out) if many coefficients play an important role, which leads to a high Shannon entropy. Lower Shannon entropies are therefore characteristic of compact wave functions.