3.4. Density matrix representation#

3.4.1. General introduction#

For a general introduction, and discussion of density matrix techniques and applications in AMO physics, see Blum’s textbook Density Matrix Theory and Applications [97], which is referred to extensively herein. The general density operator, for a mixture of independent states \(|\psi_{n}\rangle\), can be defined as per Eqn. 2.8 in Blum [97]:

\[ \hat{\rho}=\sum_{n}W_{n}|\psi_{n}\rangle\langle\psi_{n}| \]

Where \(W_{n}\) defines the (statistical) weighting of each state \(\psi_{n}\) in the mixture.

For a given basis set, \(|\phi_{m}\rangle\), the states can be expanded and the matrix elements of \(\rho\) defined as per Eqns. 2.9 - 2.11 in Blum [97]:

\[ | \psi_{n} \rangle = \sum_{m'} a_{m'}^{(n)}| \phi_{m'}\rangle \]
(3.29)#\[ \hat{\rho}=\sum_{n}\sum_{mm'}W_{n}a_{m'}^{(n)}a_{m}^{(n)*}|\phi_{m'}\rangle\langle\phi_{m}| \]

And the matrix elements - the density matrix - are given explicitly as:

(3.30)#\[ \rho_{i,j}=\langle\phi_{i}|\hat{\rho}|\phi_{j}\rangle=\sum_{n}W_{n}a_{i}^{(n)}a_{j}^{(n)*} \]

For all pairs of basis states \((i,j)\). This defines the density matrix in the \(\{|\phi_n\rangle\}\) representation (basis space). Of particular note here is that the mixed states are assumed to be incoherent (independent), whilst the basis expansion is coherent.
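As a minimal numerical illustration of Eqn. (3.30) (all values and the basis size here are arbitrary), the density matrix for an incoherent mixture of two normalised states in a three-function basis can be constructed directly from the expansion coefficients and weights:

import numpy as np

# Illustrative expansion coefficients a_m^(n) for two states |psi_n> in a 3-function basis
a = np.array([[1.0, 0.0, 1.0j],
              [0.0, 1.0, 0.0]])
a = a/np.linalg.norm(a, axis=1, keepdims=True)   # Normalise each state

W = np.array([0.7, 0.3])                         # Statistical weights W_n (summing to 1)

# Eqn. (3.30): rho_ij = sum_n W_n a_i^(n) a_j^(n)*
rho = np.einsum('n,ni,nj->ij', W, a, a.conj())

print(np.allclose(rho, rho.conj().T))            # Hermitian
print(np.trace(rho).real)                        # Unit trace for normalised states and weights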

3.4.2. Continuum density matrices#

In general, the discussion herein focuses on the photoelectron properties, and assumes a single final ion state and associated free-electron state of interest, hence the final state (Eq. (3.1)) can be simplified to \(|\Psi_f\rangle\equiv|\mathbf{k}\rangle\). This is equivalent to a “pure state” in density matrix terminology, which can then be expanded (coherently) in an appropriate representation (basis). Following this, the density operator associated with the continuum state can be written as \(\hat{\rho}=|\Psi_f\rangle\langle\Psi_f|\equiv|\mathbf{k}\rangle\langle\mathbf{k}|\). Making use of the tensor notation introduced in Sect. 3.3, the final continuum state can then be expanded as a density matrix in the \(\zeta\zeta'\) representation (with the observable dimensions \(\{L,M\}\) explicitly included in the density matrix), which will also depend on the choice of channel functions (hence “experiment” \(u\)); the density matrix can then be given as:

(3.31)#\[ {\rho}_{L,M}^{u,\zeta\zeta'}=\varUpsilon_{L,M}^{u,\zeta\zeta'}\mathbb{I}^{\zeta,\zeta'} \]

Here the density matrix can be interpreted as the final, LF/AF or MF density matrix (depending on the channel functions used), incorporating both the intrinsic and extrinsic effects (i.e. all channel couplings and radial matrix elements for the given measurement), with dimensions dependent on the unique sets of quantum numbers required - in the simplest case, this will just be a set of partial waves \(\zeta = \{l,m\}\).

In the channel function basis, a radial, or reduced, form of the density matrix can also be constructed, and is given by the coherent product of the radial matrix elements (as defined in Eq. (3.14)):

(3.32)#\[ \rho^{\zeta\zeta'} = \mathbb{I}^{\zeta,\zeta'} \]

This form encodes purely intrinsic (molecular scattering) photoionization dynamics (thus characterises the scattering event), whilst the full form \({\rho}_{L,M}^{u,\zeta\zeta'}\) of Eq. (3.31) includes any additional effects incorporated via the channel functions. For reconstruction problems, it is usually the reduced form of Eq. (3.32) that is of interest, since the remainder of the problem is already described analytically by the channel functions \(\varUpsilon_{L,M}^{u,\zeta\zeta'}\). In other words, the retrieval of the radial matrix elements \(\mathbb{I}^{\zeta,\zeta'}\) and the radial density matrix \(\rho^{\zeta\zeta'}\) are equivalent, and both can be viewed as completely describing the photoionization dynamics.

The \(L,M\) notation for the full density matrix \({\rho}_{L,M}^{u,\zeta\zeta'}\) (Eq. (3.31)) indicates here that these dimensions should not be summed over, hence the tensor coupling into the \(\beta_{L,M}^{u}\) parameters can also be written directly in terms of the density matrix (cf. Eq. (3.13)):

(3.33)#\[ \beta_{L,M}^{u}=\sum_{\zeta,\zeta'}{\rho}_{L,M}^{u,\zeta\zeta'} \]

In fact, this form arises naturally since the \(\beta_{L,M}^{u}\) terms are the state multipoles (geometric tensors) defining the system, which can be thought of as a coupled basis equivalent of the density matrix representations (see, e.g., Ref. [97], Chpt. 4.).
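Schematically, the construction of Eqs. (3.31) - (3.33) amounts to a tensor product of the channel functions with the radial matrix element pair products, followed by a sum over \(\zeta,\zeta'\). A minimal NumPy sketch is given below; the array sizes and (random) values are purely illustrative, and the pair products \(\mathbb{I}^{\zeta,\zeta'}\) are taken as the outer product of a set of radial matrix elements, per the description of Eq. (3.32) above.

import numpy as np

rng = np.random.default_rng(1)
nZeta, nL, nM = 4, 3, 5    # Illustrative sizes: partial waves zeta={l,m}, and output (L,M) terms

# Radial matrix elements I^zeta, and their pair products (reduced density matrix, Eq. (3.32))
I = rng.normal(size=nZeta) + 1j*rng.normal(size=nZeta)
rhoRed = np.outer(I, I.conj())     # rho^{zeta,zeta'} = I^zeta x I^{zeta'*}

# Channel functions Upsilon_{L,M}^{zeta,zeta'} (random placeholders here)
Ups = rng.normal(size=(nL, nM, nZeta, nZeta))

# Full density matrix (Eq. (3.31)), and observable beta_{L,M} parameters from
# the sum over zeta, zeta' (Eq. (3.33))
rhoFull = Ups*rhoRed               # Broadcasts over the (L,M) dimensions
beta = rhoFull.sum(axis=(-2, -1))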

In a more traditional notation (following Eq. (3.1), see also Ref. [100]), the density operator can be expressed as:

(3.34)#\[ \rho(t) =\sum_{LM}\sum_{KQS}A^{K}_{QS}(t)\sum_{\zeta\zeta'}\varUpsilon_{L,M}^{u,\zeta\zeta'}|\zeta,\Psi_+\rangle\langle\zeta,\Psi_+|\mu_q\rho_i\mu_{q'}^{*}|\zeta',\Psi_+\rangle\langle\zeta',\Psi_+| \]

This is, effectively, equivalent to an expansion in the various tensor operators defined in the channel function notation above (Eq. (3.31)), but in a standard state-vector notation. Note, also, that this form explicitly defines the initial state of the system as a density matrix \(\rho_i = |\Psi_i\rangle\langle\Psi_i|\), and allows for time-dependence via the \(A_{Q,S}^{K}(t)\) term. Finally, it is of note that these density matrices are implicitly energy dependent through the dependence on the final state energy \(|\mathbf{k}|\) but, in many cases (including the examples herein), are considered only at a single energy. This is usually due to experimental considerations, which typically provide photoelectron observables at a single energy (or over a small range \(\delta k\) within which the change in the continuum is considered negligible), hence there are no cross-terms in energy; however, in some cases (e.g. certain types of multi-pulse experiment), interferences between different continuum energies may be accessed, and density matrices including energy-dependence (e.g. in the \(\zeta = \{l,m,k\}\) representation) may be of interest. (For further discussion of the use of density matrices in other specific cases, see Quantum Metrology Vol. 1 [4], particularly Chpts. 2 & 3, and refs. therein.)

The main benefit of a (continuum) density matrix representation in the current work is as a rapid way to visualize the phase relations between the photoionization matrix elements (the off-diagonal density matrix elements), and to quickly check the overall pattern of the elements, hence to confirm that no phase relations are missing and that orthogonality relations are fulfilled - some numerical examples are given below. Since the method for computing the density matrices is numerically equivalent to a tensor outer-product, density matrices and visualizations can also be rapidly composed for other properties of interest, e.g. the various channel functions defined herein, providing another complementary methodology and tool for investigation. (Further examples can be found in the ePSproc documentation [35], as well as in the literature; see, e.g., Ref. [97] for general discussion, and Ref. [84] for application in pump-probe schemes.)

Furthermore, as noted above, the density matrix elements provide a complete description of the photoionization event, and hence make clear the equivalence of the “complete” photoionization experiments (and associated continuum reconstruction methods) discussed herein with general quantum tomography schemes [101]. The density matrix can also be used as the starting point for further analysis based on standard density matrix techniques - this is discussed, for instance, in Ref. [97], and can also be viewed as a bridge between traditional methods in spectroscopy and AMO physics and more recent concepts in the quantum information sciences (see, e.g., Refs. [102, 103] for recent discussions in this context). A brief numerical diversion in this direction is given in Sect. 3.4.6, which illustrates the use of the QuTiP (Quantum Toolbox in Python) library [104, 105, 106] with the density matrix results derived herein.

3.4.3. Numerical setup#

This follows the setup in Sect. 3.3 Tensor formulation of photoionization, using a symmetry-based set of basis functions for demonstration purposes. (Repeated code is hidden in PDF version.)

# Run default config - may need to set full path here
%run '../scripts/setup_notebook.py'

# Override plotters backend?
# plotBackend = 'pl'
*** Setting up notebook with standard Quantum Metrology Vol. 3 imports...
For more details see https://pemtk.readthedocs.io/en/latest/fitting/PEMtk_fitting_basic_demo_030621-full.html
To use local source code, pass the parent path to this script at run time, e.g. "setup_fit_demo ~/github"

*** Running: 2023-12-07 10:42:56
Working dir: /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part1
Build env: html
None

* Loading packages...
* sparse not found, sparse matrix forms not available. 
* natsort not found, some sorting functions not available. 
* Setting plotter defaults with epsproc.basicPlotters.setPlotters(). Run directly to modify, or change options in local env.
* Set Holoviews with bokeh.
* pyevtk not found, VTK export not available. 
* Set Holoviews with bokeh.
Jupyter Book      : 0.15.1
External ToC      : 0.3.1
MyST-Parser       : 0.18.1
MyST-NB           : 0.17.2
Sphinx Book Theme : 1.0.1
Jupyter-Cache     : 0.6.1
NbClient          : 0.7.4
# Setup symmetry-defined matrix elements using PEMtk

%run '../scripts/setup_symmetry_basis_tensors.py'
*** Setting up basis set for symmetry-defined matrix elements, see Quantum Metrology Vol. 3 Sect. 3.3...

Set symmetry=D2h, lmax=4
*** Mapping coeffs to ePSproc dataType = matE
Remapped dims: {'C': 'Cont', 'mu': 'it'}
Added dim Eke
Added dim Targ
Added dim Total
Added dim mu
Added dim Type
Found dipole symmetries: 
{'B1u': {'m': [0], 'pol': ['z']}, 'B2u': {'m': [-1, 1], 'pol': ['y']}, 'B3u': {'m': [-1, 1], 'pol': ['x']}}
*** Mapping coeffs to ePSproc dataType = BLM
Remapped dims: {'C': 'Cont', 'mu': 'muX'}
Added dim Eke
Added dim P
Added dim T
Added dim C
*** Mapping coeffs to ePSproc dataType = matE
Remapped dims: {'C': 'Cont', 'mu': 'muX'}
Added dim Eke
Added dim Targ
Added dim Total
Added dim mu
Added dim it
Added dim Type
*** Assigning matrix elements and computing betas...
Set channels neutral sym=A1g, ion sym=A1g
*** Updated self.coeffs['matE'] with new coords.
Assigned 'Total' from A1g x A1g = ['A1g']
Assigned 'Total' from A1g x A1u = ['A1u']
Assigned 'Total' from A1g x B1g = ['B1g']
Assigned 'Total' from A1g x B1u = ['B1u']
Assigned 'Total' from A1g x B2g = ['B2g']
Assigned 'Total' from A1g x B2u = ['B2u']
Assigned 'Total' from A1g x B3g = ['B3g']
Assigned 'Total' from A1g x B3u = ['B3u']
*** Updated self.coeffs['matE'] with new coords.
Assigned dipole-allowed terms for dim = 'Cont' to self.coeffs['symAllowed']
Product basis elements: dict_keys(['BLMtableResort', 'polProd', 'phaseConvention', 'BLMRenorm'])
Full basis elements: dict_keys(['QNs', 'EPRX', 'lambdaTerm', 'BLMtable', 'BLMtableResort', 'AFterm', 'AKQS', 'polProd', 'phaseConvention', 'BLMRenorm', 'matEmult'])

*** Setting trial results for linear ramp ADMs.
Subselected from dataset 'ADM', dataType 'ADM': 50 from 50 points (100.00%)
Computing BLMs for linear ramp case...
Dipole  Target  allowed  m        pol    result   terms
B1u     A1g     False    [0]      ['z']  ['B1u']  ['A1g', 'A1g']
B1u     B1g     False    [0]      ['z']  ['A1u']  ['A1g', 'A1g']
B1u     A1u     False    [0]      ['z']  ['B1g']  ['A1g', 'A1g']
B1u     B1u     True     [0]      ['z']  ['A1g']  ['A1g', 'A1g']
B1u     B3g     False    [0]      ['z']  ['B2u']  ['A1g', 'A1g']
B1u     B3u     False    [0]      ['z']  ['B2g']  ['A1g', 'A1g']
B1u     B2g     False    [0]      ['z']  ['B3u']  ['A1g', 'A1g']
B1u     B2u     False    [0]      ['z']  ['B3g']  ['A1g', 'A1g']
B2u     A1g     False    [-1, 1]  ['y']  ['B2u']  ['A1g', 'A1g']
B2u     B1g     False    [-1, 1]  ['y']  ['B3u']  ['A1g', 'A1g']
B2u     A1u     False    [-1, 1]  ['y']  ['B2g']  ['A1g', 'A1g']
B2u     B1u     False    [-1, 1]  ['y']  ['B3g']  ['A1g', 'A1g']
B2u     B3g     False    [-1, 1]  ['y']  ['B1u']  ['A1g', 'A1g']
B2u     B3u     False    [-1, 1]  ['y']  ['B1g']  ['A1g', 'A1g']
B2u     B2g     False    [-1, 1]  ['y']  ['A1u']  ['A1g', 'A1g']
B2u     B2u     True     [-1, 1]  ['y']  ['A1g']  ['A1g', 'A1g']
B3u     A1g     False    [-1, 1]  ['x']  ['B3u']  ['A1g', 'A1g']
B3u     B1g     False    [-1, 1]  ['x']  ['B2u']  ['A1g', 'A1g']
B3u     A1u     False    [-1, 1]  ['x']  ['B3g']  ['A1g', 'A1g']
B3u     B1u     False    [-1, 1]  ['x']  ['B2g']  ['A1g', 'A1g']
B3u     B3g     False    [-1, 1]  ['x']  ['A1u']  ['A1g', 'A1g']
B3u     B3u     True     [-1, 1]  ['x']  ['A1g']  ['A1g', 'A1g']
B3u     B2g     False    [-1, 1]  ['x']  ['B1u']  ['A1g', 'A1g']
B3u     B2u     False    [-1, 1]  ['x']  ['B1g']  ['A1g', 'A1g']
Symmetrized matrix element coefficients (columns Cont = B1u, B2u, B3u in the original output; coefficient values listed in a single column here):

Eke  Targ  Total  Type  h  it   l   m   mu  muX  coeff
0    A1g   B1u    U     0  NaN  1   0    0   0   (1+0j)
0    A1g   B1u    U     1  NaN  3   0    0   0   (1+0j)
0    A1g   B1u    U     2  NaN  3  -2    0   0   (0.7071067811865475+0j)
0    A1g   B1u    U     2  NaN  3   2    0   0   (0.7071067811865475+0j)
0    A1g   B2u    U     0  NaN  1  -1   -1   0   (-0.7071067811865475+0j)
0    A1g   B2u    U     0  NaN  1  -1    1   0   (-0.7071067811865475+0j)
0    A1g   B2u    U     0  NaN  1   1   -1   0   (-0.7071067811865475-0j)
0    A1g   B2u    U     0  NaN  1   1    1   0   (-0.7071067811865475-0j)
0    A1g   B2u    U     1  NaN  3  -3   -1   0   (-0.7071067811865475+0j)
0    A1g   B2u    U     1  NaN  3  -3    1   0   (-0.7071067811865475+0j)
0    A1g   B2u    U     1  NaN  3   3   -1   0   (-0.7071067811865475-0j)
0    A1g   B2u    U     1  NaN  3   3    1   0   (-0.7071067811865475-0j)
0    A1g   B2u    U     2  NaN  3  -1   -1   0   (-0.7071067811865475+0j)
0    A1g   B2u    U     2  NaN  3  -1    1   0   (-0.7071067811865475+0j)
0    A1g   B2u    U     2  NaN  3   1   -1   0   (-0.7071067811865475-0j)
0    A1g   B2u    U     2  NaN  3   1    1   0   (-0.7071067811865475-0j)
0    A1g   B3u    U     0  NaN  1  -1   -1   0   (0.7071067811865475+0j)
0    A1g   B3u    U     0  NaN  1  -1    1   0   (0.7071067811865475+0j)
0    A1g   B3u    U     0  NaN  1   1   -1   0   (-0.7071067811865475-0j)
0    A1g   B3u    U     0  NaN  1   1    1   0   (-0.7071067811865475-0j)
0    A1g   B3u    U     1  NaN  3  -1   -1   0   (0.7071067811865475+0j)
0    A1g   B3u    U     1  NaN  3  -1    1   0   (0.7071067811865475+0j)
0    A1g   B3u    U     1  NaN  3   1   -1   0   (-0.7071067811865475-0j)
0    A1g   B3u    U     1  NaN  3   1    1   0   (-0.7071067811865475-0j)
0    A1g   B3u    U     2  NaN  3  -3   -1   0   (0.7071067811865475+0j)
0    A1g   B3u    U     2  NaN  3  -3    1   0   (0.7071067811865475+0j)
0    A1g   B3u    U     2  NaN  3   3   -1   0   (-0.7071067811865475-0j)
0    A1g   B3u    U     2  NaN  3   3    1   0   (-0.7071067811865475-0j)

3.4.4. Compute a density matrix#

A basic density matrix computation routine is implemented in the ePSproc codebase [33, 34, 35]. This takes input tensor arrays and computes the density matrix as an outer-product over the specified dimension(s), which can be written generally as per Eqs. (3.29), (3.30), where \(a_{i}^{(n)}a_{j}^{(n)*}\) are the values along the specified dimensions/state vector/representation. These dimensions must exist in the input arrays, but will be restacked as necessary to define the effective basis space, and all coherent pairs will be computed.

For instance, considering the ionization matrix elements demonstrated herein, setting the indexes (quantum numbers) as [l,m] will select the \(|\zeta\rangle = |l,m\rangle\) basis, hence define the density operator as \(\hat{\rho} = |\zeta\rangle \langle\zeta'| = |l,m\rangle\langle l',m'|\) and the corresponding density matrix elements \(\rho^{\zeta,\zeta'}=\langle\zeta|\hat{\rho}|\zeta'\rangle=a_{l,m}a_{l',m'}^{*}\). Similarly, setting ['l','m','mu'] will set \(|\zeta\rangle = |l,m,\mu\rangle\) as the basis vector, and so forth, where \(|\zeta\rangle\) is used as a generic state vector denoting all required quantum numbers. Additionally, other quantum numbers/dimensions can be kept, summed or selected from the input tensors prior to computation, thus density matrices can be readily computed as a function of other parameters, or averaged, according to the properties of interest, experimental parameters and observables.
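Conceptually (and independently of the ePSproc implementation), for a pure state this amounts to an outer product of the flattened matrix elements along the chosen dimensions. A minimal NumPy sketch, with purely illustrative values, is:

import numpy as np

# Illustrative flattened matrix elements a_{l,m} in a |zeta> = |l,m> basis,
# ordered e.g. as [(0,0), (1,-1), (1,0), (1,1)] - values are placeholders only
a = np.array([1.0, -0.5 + 0.5j, 0.7j, 0.5 - 0.5j])

# Density matrix rho^{zeta,zeta'} = a_{l,m} a_{l',m'}^* as an outer product
rho = np.outer(a, a.conj())

print(np.allclose(rho, rho.conj().T))   # Hermitian by construction
print(np.trace(rho).real)               # Trace = sum_i |a_i|^2 (not normalised here)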

Note, however, that this selection is purely numerical: the outer product is computed along the defined dimensions \(|\zeta\rangle\langle\zeta'|\) to form the density matrix, hence a well-formed density matrix in the strictest sense is not guaranteed (depending on the basis set), although the result will always present a basis state correlation matrix of sorts. A brief example, for the D2h-defined matrix elements, is given below; for more examples see the ePSproc documentation [35].

# See the docs for more, 
# https://epsproc.readthedocs.io/en/dev/methods/density_mat_notes_demo_300821.html

# Import routines for density calculation and plotting
from epsproc.calc import density

#*** Compose density matrix

# Set dimensions/state vector/representation
# These must be in original data, but will be restacked as 
# necessary to define the effective basis space.

# Set dimensions for density matrix. Note stacked dims are OK, in this case LM = {l,m}
denDims = 'LM'  
selDims = None  # Select on any other dimensions?
sumDims = None  # Sum over any other dimensions? 
                # (Set sumDims=True to sum over all dims except denDims.)
pTypes=['r','i'] # Plotting types 'r'=real, 'i'=imaginary
thres = 1e-4    # Threshold for outputs (otherwise set to zero and/or dropped from result)
normME = False  # Normalise matrix elements before computing?
normDen = 'max' # Method to normalise density matrix

# Calculate - Ref case
k = sym
matE = data.data[k]['matE'].copy()  # Set data from main class instance by key

# Normalise input matrix elements?
if normME:
    matE = matE/matE.max()

#*** Compute density matrix for given parameters
# See demo at:
#   https://epsproc.readthedocs.io/en/latest/methods/density_mat_notes_demo_300821.html
# API docs:
#   https://epsproc.readthedocs.io/en/latest/modules/epsproc.calc.density.html#epsproc.calc.density.densityCalc
daOut, *_ = density.densityCalc(matE, denDims = denDims, 
                                selDims = selDims, thres = thres)

# Renormalise output?
if normDen=='max':
    daOut = daOut/daOut.max()
elif normDen=='trace':
    # Need sym sum here to get 2D trace
    daOut = daOut/(daOut.sum('Sym').pipe(np.trace)**2)  

# Plot density matrix with Holoviews
# Note sum over 'Sym' dimension to flatten plot to (l,m) dims only.
daPlot = density.matPlot(daOut.sum('Sym'), pTypes=pTypes)
Set plot kdims to ['LM', 'LM_p']; pass kdims = [dim1,dim2] for more control.
# Glue figure for later - real part only in this case
# Also clean up axis labels from default state labels ('LM' and 'LM_p' in this case).
glue("denMatD2hRealOnly", daPlot.select(pType='Real').opts(xlabel='L,M', ylabel="L',M'"))
glue("symHarmPGmatE", sym, display=False)
WARNING:param.HeatMapPlot03489: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.

Fig. 3.12 Example density matrix, computed from matrix elements defined purely by D2h symmetry. Note in this case only the real part is non-zero. Axes labels give terms \(\{L,M\}\) and \(\{L',M'\}\).#

3.4.5. Visualising matrix element reconstruction fidelity with density matrices#

To demonstrate the use of the density matrix representation as a means to test similarity or fidelity between two sets of matrix elements, a trial set of matrix elements can be derived from the original set used above, plus random noise, and the differences in the density matrices computed directly. An example is shown in Fig. 3.13; in this example up to 10% random noise has been added to the original (input) matrix elements, and the resultant density matrix computed. The difference matrix (Fig. 3.13(c)) then provides a measure of the fidelity between the original and noisy cases. In testing retrieval methodologies, this type of analysis thus provides a quick means to test reconstruction results against known inputs. Although this case is only illustrated for real density matrices, a similar analysis can be used for the imaginary (or phase) components, thus coherences can also be quickly visualised in this manner.

#*** Set trial matrix element for comparison with the original case computed above
matE = data.data[k]['matE'].copy()

if normME:
    matE = matE/matE.max()
    
# Add random noise, +/- 10%
# Note this is applied to normalised matE
# For the normalised case this results in a standard deviation in the difference 
# density matrix elements of ~sqrt(4*(0.1^2)) = 0.2
# (A rough estimate from basic error propagation, ignoring the actual values - 
#  see https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Example_formulae.)
noise = 0.1
SD = np.sqrt(4*(noise**2))
# Set range to random values +/-1 * noise
matE_noise = matE + matE*((np.random.rand(*list(matE.shape)) - 0.5) * 2*noise)  

# Compute density matrix
daOut_noise, *_ = density.densityCalc(matE_noise, denDims = denDims, selDims = selDims, thres = thres)

# Renormalise output?
if normDen=='max':
    daOut_noise = daOut_noise/daOut_noise.max()
elif normDen=='trace':
    daOut_noise = daOut_noise/(daOut_noise.sum('Sym').pipe(np.trace)**2)
    
daPlot_noise = density.matPlot(daOut_noise.sum('Sym'), pTypes=pTypes)

# Compute differences
daDiff = daOut.sum('Sym') - daOut_noise.sum('Sym')
daDiff.name = 'Difference'
daPlotDiff = density.matPlot(daDiff, pTypes=pTypes)

print(f'Noise = {noise}, SD (approx) = {SD}')
maxDiff = daDiff.max().values
print(f'Max difference = {maxDiff}')

#*** Layout plot from Holoviews objects for real parts, with custom titles.
daLayout = (daPlot.select(pType='Real').opts(title="(a) Original", xlabel='L,M', 
                                             ylabel="L',M'") 
            + daPlot_noise.select(pType='Real').opts(title="(b) With noise", 
                xlabel='L,M', ylabel="L',M'") 
            + daPlotDiff.select(pType='Real').opts(title="(c) Difference (fidelity)", 
                xlabel='L,M', ylabel="L',M'"))
Set plot kdims to ['LM', 'LM_p']; pass kdims = [dim1,dim2] for more control.
Set plot kdims to ['LM', 'LM_p']; pass kdims = [dim1,dim2] for more control.
Noise = 0.1, SD (approx) = 0.2
Max difference = (0.30064583449857096+0j)
# Glue plot

# Additional formatting options for PDF vs. HTML outputs.
nCols = 1
if buildEnv == 'pdf':
    nCols = 2

# Glue layout
glue("denMatD2hCompExample",daLayout.cols(nCols).opts(hv.opts.HeatMap(cmap='coolwarm')))

# Glue values
glue("denDiffMax",round(float(maxDiff.real),3))
glue("denSD",SD)
WARNING:param.HeatMapPlot03708: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.HeatMapPlot03717: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.HeatMapPlot03724: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.

Fig. 3.13 Example density matrices, computed from matrix elements defined purely by D2h symmetry. Here the panels show (a) the original density matrix, (b) the density matrix computed with +/- 10% random noise added to the original matrix elements, (c) the difference matrix, which indicates the fidelity of the noisy case relative to the original case. For normalised density matrices the 10% noise case translates to a standard deviation \(\sigma\approx\)0.2 on the differences; the maximum difference in the illustrated test case is 0.301.#

3.4.6. Working with density matrices with QuTiP library functions#

From the numerical density matrix, a range of other standard properties can be computed - of particular interest are likely to be various standard quantities such as the trace, the von Neumann entropy and so forth. Naturally these can be computed directly from the relevant formal definitions; however, many of the fundamentals are already implemented in other libraries, and numerical representations can be passed directly to such libraries. In particular, the QuTiP (Quantum Toolbox in Python) library [104, 105, 106] implements a range of standard functions, metrics, transforms and utility functions for working with state vectors and density matrices. A brief numerical example is given below; see the QuTiP documentation [106] for more possibilities.

3.4.6.1. Convert numerical arrays to QuTiP objects#

# Import QuTip
from qutip import *

# Wrap density matrices to QuTip objects
# Note sum('Sym') to ensure 2D matrix, and .data to pass Numpy data array only
pa = Qobj(daOut.sum('Sym').data)    # Reference continuum density matrix
pb = Qobj(daOut_noise.sum('Sym').data)  # Noisy case

# QuTip objects have data as Numpy arrays, and render as typeset matrices in a notebook
# DEBUG NOTE 22/04/23 - QuTip matrix latex output currently causing PDF build errors, so set hide output for testing.
# See https://github.com/phockett/Quantum-Metrology-with-Photoelectrons-Vol3/issues/8
if buildEnv != 'pdf':
    display(pa)
    # print(pa)
\[\begin{split}Quantum object: dims = [[25], [25]], shape = (25, 25), type = oper, isherm = True $ \\ \left(\begin{matrix}1.0 & 0.0 & 0.0 & 0.0 & 0.707 & \cdots & 1.0 & 0.0 & 0.707 & 0.0 & 0.707\\0.0 & 1.000 & 0.0 & 0.0 & 0.0 & \cdots & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.0 & 0.0 & 1.0 & 0.0 & 0.0 & \cdots & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.0 & 0.0 & 0.0 & 1.000 & 0.0 & \cdots & 0.0 & 0.0 & 0.0 & 0.0 & 0.0\\0.707 & 0.0 & 0.0 & 0.0 & 1.000 & \cdots & 0.707 & 0.0 & 0.0 & 0.0 & 0.0\\\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots\\1.0 & 0.0 & 0.0 & 0.0 & 0.707 & \cdots & 1.0 & 0.0 & 0.707 & 0.0 & 0.707\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & \cdots & 0.0 & 1.000 & 0.0 & 1.000 & 0.0\\0.707 & 0.0 & 0.0 & 0.0 & 0.0 & \cdots & 0.707 & 0.0 & 1.000 & 0.0 & 1.000\\0.0 & 0.0 & 0.0 & 0.0 & 0.0 & \cdots & 0.0 & 1.000 & 0.0 & 1.000 & 0.0\\0.707 & 0.0 & 0.0 & 0.0 & 0.0 & \cdots & 0.707 & 0.0 & 1.000 & 0.0 & 1.000\\\end{matrix}\right)$\end{split}\]
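With the density matrices wrapped as Qobj instances, other standard scalar properties follow directly from QuTiP methods. A minimal sketch is given below, using the reference matrix pa defined above; the trace normalisation step is illustrative only.

# Trace-normalise the reference density matrix (illustrative step)
pa_n = pa/pa.tr()

# Standard scalar properties via QuTiP Qobj methods
print(f"Trace = {pa_n.tr()}")                       # = 1 by construction after normalisation
print(f"Purity Tr(rho^2) = {(pa_n*pa_n).tr()}")     # = 1 for a pure state, < 1 for a mixed state
print(f"Hermitian: {pa_n.isherm}")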

3.4.6.2. Fidelity metric#

Fidelity between two density matrices \(\rho_{a},\rho_{b}\) can be defined as per Refs. [107, 108]:

\(F(\rho_{a},\rho_{b})=\operatorname{Tr} {\sqrt {{\sqrt {\rho_{a}}}\rho_{b} {\sqrt {\rho_{a}}}}}\)

This is implemented by the fidelity function in the QuTiP (Quantum Toolbox in Python) library [104, 105, 106]. Of note is that the resultant for the test case herein is close to the limiting value of \(F(\rho_{a},\rho_{b})=1\), despite the added noise and some per-element disparities, as shown in Fig. 3.13(c). This reflects the conceptual difference between an element-wise evaluation of the differences and a formal scalar metric.

# Test fidelity, =1 if trace-normalised
print(f"Fidelity (a,a) = {fidelity(pa,pa)}")
print(f"Trace = {pa.tr()}")
print(f"Trace-normed fidelity = {fidelity(pa,pa)/pa.tr()}")
Fidelity (a,a) = 25.00000031332397
Trace = 25.0
Trace-normed fidelity = 1.0000000125329587
# Test fidelity vs noisy case
print(f"Fidelity (a,b) = {fidelity(pa,pb)}")
print(f"Trace a = {pa.tr()}, Trace b = {pb.tr()}")
print(f"Trace-normed fidelity = {fidelity(pa/pa.tr(),pb/pb.tr())}")
Fidelity (a,b) = 22.932109349661026
Trace a = 25.0, Trace b = 21.11865565077017
Trace-normed fidelity = 0.9980325460845719
# This can also be computed rapidly with lower-level QuTiP functionality, shown here for the (a,a) case...

# Compute inner term, note .sqrtm() for square root.
inner = pa.sqrtm() * pa * pa.sqrtm()

# Compute fidelity
inner.sqrtm().tr()
(25.00000059788841+8.879641550582872e-08j)