The Olsen addressing scheme
The nonrelativistic Hamiltonian is a spin tensor operator of rank 0, so we can restrict the ONVs of the CI expansion to those with the same spin projection. Letting \(N\) be the total number of electrons and \(M\) the spin projection number, we have \[\begin{equation} \require{physics} \begin{cases} N_\alpha = \frac{1}{2} N + M \\ N_\beta = \frac{1}{2} N - M \thinspace , \end{cases} \end{equation}\] and every ONV in the FCI expansion has \(N_\alpha\) \(\alpha\)-electrons and \(N_\beta\) \(\beta\)-electrons. The number of ONVs in the expansion is then \[\begin{equation} \dim{\mathcal{F}_{\text{FCI}}} % = \binom{K}{N_\alpha} \binom{K}{N_\beta} \thinspace . \end{equation}\] So, let's write the occupation number vectors as \[\begin{equation} \ket{I_\alpha I_\beta} = \hat{\alpha}_{I_\alpha} \hat{\beta}_{I_\beta} \ket{\text{vac}} \thinspace , \end{equation}\] in which we let the \(\alpha\)-orbitals precede the \(\beta\)-orbitals and the spatial orbitals are written in canonical (ascending) order. We should already point out that \(I_\alpha\) and \(I_\beta\) are integers, namely the addresses (or ordering numbers) associated with a particular string, and that \(I_\alpha I_\beta\) really does stand for the product of these two integers. Note that \(I_\alpha\) is not equal to the integer representation of the associated computer bit string.
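To make this counting concrete, here is a minimal Python sketch (the function name and example values are illustrative, not from the text) that evaluates \(N_\alpha\), \(N_\beta\) and the resulting FCI dimension:

```python
from math import comb

def fci_dimension(K, N, M):
    """Dimension of the FCI space for N electrons with spin projection M
    in K spatial orbitals (a minimal sketch; names are illustrative)."""
    N_alpha = round(N / 2 + M)   # number of alpha electrons
    N_beta = round(N / 2 - M)    # number of beta electrons
    return comb(K, N_alpha) * comb(K, N_beta)

# For example, 6 electrons with M = 0 in 10 spatial orbitals:
# fci_dimension(10, 6, 0) == comb(10, 3) ** 2 == 14400
```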
Having introduced an alternative notation for the occupation number vectors, \(\ket{I_\alpha I_\beta}\) as opposed to \(\ket{\vb{k}}\), we are better equipped to find the FCI Hamiltonian matrix, i.e. the matrix representation of the (molecular electronic) Hamiltonian in the basis of the ONVs. Just to get an initial feel for how an FCI calculation can be performed, we can write the one-electron part of the Hamiltonian matrix as: \[\begin{equation} h_{I_\alpha I_\beta, J_\alpha J_\beta}^\text{SD} % = \sum_{pq}^K h_{pq} \qty( % \matrixel{ % I_\alpha I_\beta % }{ \hat{E}^\alpha_{pq} }{ % J_\alpha J_\beta % } % + \matrixel{ % I_\alpha I_\beta % }{ \hat{E}^\beta_{pq} }{ % J_\alpha J_\beta % } % ) \thinspace , \end{equation}\] which is non-zero only for the following cases: \[\begin{align} & \ket{I_\alpha I_\beta} = \ket{J_\alpha J_\beta} \\ & \ket{I_\alpha I_\beta} = \hat{E}^\alpha_{pq} \ket{J_\alpha J_\beta} \\ & \ket{I_\alpha I_\beta} = \hat{E}^\beta_{pq} \ket{J_\alpha J_\beta} % \thinspace , \end{align}\] which means that the determinantal representation of \(\hat{h}\) is sparse, but huge. Only for the smallest systems is it feasible to store this matrix representation explicitly; for somewhat larger systems (which are still small) storing it becomes impossible in practice. The two-electron part is a little more involved, but still manageable; the key point is that it, too, is huge and sparse. Effectively, in this approach we are setting up the full FCI matrix with elements \[\begin{equation} \label{eq:FCI_Hamiltonian_matrix_elements} \mathcal{H}_{I_\alpha I_\beta, J_\alpha J_\beta} % = \matrixel{ I_\alpha I_\beta }{ \hat{\mathcal{H}} }{ J_\alpha J_\beta } \end{equation}\] and subsequently diagonalizing it: \[\begin{equation} \vb{H} \vb{C} = E \vb{C} % \thinspace . \end{equation}\] Since we would like to avoid storing this huge, sparse matrix, the goal is to use this representation without ever constructing \(\vb{H}\) explicitly. One thing we will try to develop is efficient algorithms for matrix-vector products of the type \[\begin{align} & \boldsymbol{\sigma} = \vb{H} \vb{C} \\ & \sigma_{I_\alpha I_\beta} = \sum_{J_\alpha J_\beta} % \matrixel{ % I_\alpha I_\beta % }{ \hat{\mathcal{H}} }{ % J_\alpha J_\beta % } C_{J_\alpha J_\beta} % \label{eq:matvec_determinantal_basis} % \thinspace , \end{align}\] which are exactly the matrix-vector products needed in the Davidson step. The Davidson method is an iterative approach for finding the lowest eigenpair: starting from an initial guess \(\vb{C}_1\), every step produces a new guess \(\vb{C}_n\) that eventually converges to the solution \(\vb{C}\) of the Hamiltonian eigenproblem. As a naive solution, for every \(I_\alpha I_\beta\) we could loop over all \(J_\alpha J_\beta\) and calculate the corresponding Hamiltonian matrix element on the fly. Although simple, this is inefficient because it scales quadratically in the dimension of the CI space: there are two loops, one over every product \(I_\alpha I_\beta\) and one over every product \(J_\alpha J_\beta\).
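To see why this naive approach is too expensive, here is a small sketch of that double loop; `hamiltonian_element` is a hypothetical callable standing in for the matrix elements of equation \(\eqref{eq:FCI_Hamiltonian_matrix_elements}\), and nothing here is meant as an efficient implementation.

```python
import numpy as np

def naive_sigma(hamiltonian_element, dim_alpha, dim_beta, C):
    """Naive sigma vector sigma = H C: loop over all pairs of compound
    addresses (I_alpha, I_beta) and (J_alpha, J_beta) and evaluate every
    Hamiltonian matrix element on the fly.  `hamiltonian_element` is a
    hypothetical callable returning <I_alpha I_beta|H|J_alpha J_beta>."""
    sigma = np.zeros((dim_alpha, dim_beta))
    for I_a in range(dim_alpha):
        for I_b in range(dim_beta):
            for J_a in range(dim_alpha):
                for J_b in range(dim_beta):
                    sigma[I_a, I_b] += (
                        hamiltonian_element(I_a, I_b, J_a, J_b) * C[J_a, J_b]
                    )
    return sigma
```

Both the outer and the inner pair of loops run over the full CI dimension, so the cost grows quadratically with \(\dim \mathcal{F}_{\text{FCI}}\); efficient algorithms avoid this by exploiting the sparsity of \(\vb{H}\) and the addressing scheme introduced below.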
When the Hamiltonian acts on an ONV \(\ket{J_\alpha J_\beta}\), it produces a new set of ONVs \(\hat{E}^\sigma_{pq} \ket{J_\alpha J_\beta}\). As we are ultimately concerned with calculating the overlap between \(\ket{I_\alpha I_\beta}\) and these kinds of strings (cf. equation \(\eqref{eq:matvec_determinantal_basis}\)), we should find a connection between the occupation of a string and its address (i.e. its ordering number).
For starters, let us introduce the concept of so-called reverse lexical ordering, which orders the spin strings as follows: a string A comes before a string B if, comparing their occupied orbitals from the highest index downwards, the first orbital in which they differ has a smaller index in A than in B. This means that, for example: \[\begin{equation} \hat{a}^\dagger_{1 \alpha} % \hat{a}^\dagger_{2 \alpha} % \hat{a}^\dagger_{4 \alpha} < % \hat{a}^\dagger_{1 \alpha} % \hat{a}^\dagger_{3 \alpha} % \hat{a}^\dagger_{5 \alpha} % \thinspace , \end{equation}\] since the highest occupied orbitals in which they differ are \(4\) and \(5\), and \(4\) is smaller than \(5\). This ordering is called reverse lexical because the corresponding bit strings are \(\ket{11010}\) and \(\ket{10101}\), and the integer representation of the first bit string is larger than that of the second; hence the naming reverse.
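A quick way to see this ordering in action is to enumerate all strings for a small case and sort them by their reversed occupation tuples, which reproduces the reverse lexical order (a small sketch; the choice \(K = 5\), \(N_\sigma = 3\) is just an example).

```python
from itertools import combinations

# All 3-electron strings in 5 orbitals, each represented by its (ascending)
# tuple of occupied orbital indices.
strings = list(combinations(range(1, 6), 3))

# Sorting by the reversed tuple compares the highest occupied orbitals first,
# which is exactly the reverse lexical ordering described above.
strings.sort(key=lambda occupied: tuple(reversed(occupied)))

# strings == [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 5), ...]
```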
So now, what is the correspondence between the address (or ordering number) of a spin string and its occupation? We will introduce the so-called addressing scheme. Each spin string is defined by a path in a \(K \times N_\sigma\)-diagram, starting at \((0,0)\) and ending at \((K,N_\sigma)\), connecting the vertices of the possible pairs \((p,m)\), in which \(p\) is the orbital index and \(m\) is the number of electrons in the orbitals up to \(p\). There are two possible arcs between two vertices: a diagonal arc (corresponding to the question: is orbital \(p+1\) occupied?) and a vertical arc (corresponding to the question: is orbital \(p+1\) unoccupied?). We will associate a vertex weight \(W_{p,m}\) to every allowed vertex \((p,m)\) (i.e. a vertex that is visited by at least one path). Since all paths arrive from either vertically above (unoccupied) or left-diagonally above (occupied), there is a recurrence relation for the vertex weights: \[\begin{equation} W_{p, m} = W_{p-1, m} + W_{p-1, m-1} % \thinspace . \end{equation}\] The weight at the origin is chosen to be \[\begin{equation} W_{0,0} = 1 % \thinspace , \end{equation}\] and forbidden weights (i.e. weights corresponding to forbidden vertices) are set to zero. It can be shown that the address of a spin string in reverse lexical ordering is then given by \[\begin{equation} I(m_1, m_2, \ldots, m_K) % = 1 + \sum_{p=1}^K Y^{m_p - m_{p-1}}_{p, m_p} % \thinspace , \end{equation}\] in which \(m_p\) is the number of electrons in the orbitals up to and including \(p\) (not to be confused with the occupation number of orbital \(p\) in the given ONV), \(m_0 = 0\), and \[\begin{equation} \begin{cases} Y^0_{p, m} = 0 \\ Y^1_{p, m} = W_{p-1, m} \thinspace . \end{cases} \end{equation}\]
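The following sketch, assuming the conventions above (1-based orbital indices, a 1-based address, and \(Y^1_{p,m} = W_{p-1,m}\)), builds the vertex weights and evaluates the address of a spin string from its set of occupied orbitals; the function names are illustrative.

```python
def vertex_weights(K, N):
    """Vertex weights W[p][m] of the K x N addressing diagram:
    W[0][0] = 1, W[p][m] = W[p-1][m] + W[p-1][m-1], forbidden vertices stay 0."""
    W = [[0] * (N + 1) for _ in range(K + 1)]
    W[0][0] = 1
    for p in range(1, K + 1):
        # only vertices that can still reach (K, N) are allowed
        for m in range(max(0, N - (K - p)), min(p, N) + 1):
            W[p][m] = W[p - 1][m] + (W[p - 1][m - 1] if m > 0 else 0)
    return W

def address(occupied, K, N):
    """Address (1-based) of the spin string with the given set of occupied
    orbitals (1-based indices), using Y^1_{p, m_p} = W_{p-1, m_p}."""
    W = vertex_weights(K, N)
    I, m = 1, 0
    for p in range(1, K + 1):
        if p in occupied:       # diagonal arc: orbital p is occupied
            m += 1              # m_p = m_{p-1} + 1
            I += W[p - 1][m]    # add the arc weight Y^1_{p, m_p}
    return I

# For K = 5 and N_sigma = 3, the addresses 1, 2, 3, ... follow the reverse
# lexical order of the previous sketch, e.g. address({1, 2, 4}, 5, 3) == 2.
```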