22/07/22
Added a group/loop over t-points to allow AFBLM calculation without thresholding, plus quick save routines (Pickle only - still need to debug other methods).
20/07/22
Updating from OCS_orbs8-11_AFBLMs_VM-ADMs_140122-JAKE_tidy-replot-280122.ipynb with LF results and updated ADMs.
Note LF all-orb plotting from http://jake/jupyter/user/paul/doc/tree/ePS/OCS/epsman2021/OCS_orb11_proc_131221-JAKE.ipynb - but didn't previously upload!
For methods: https://epsproc.readthedocs.io/en/dev/demos/ePSproc_class_demo_161020.html
Additional notes below
# Quick hack to override default HTML template
# NOT required in some JLab versions.
# https://stackoverflow.com/a/63777508
# https://stackoverflow.com/questions/21971449/how-do-i-increase-the-cell-width-of-the-jupyter-ipython-notebook-in-my-browser
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
!hostname
!conda env list
import sys
import os
from pathlib import Path
import numpy as np
# import epsproc as ep
import xarray as xr
import matplotlib.pyplot as plt
from datetime import datetime as dt
timeString = dt.now()
import epsproc as ep
# Plotters
import hvplot.xarray
from epsproc.plot import hvPlotters
# Multijob class dev code
from epsproc.classes.multiJob import ePSmultiJob
plotWidth = 700
hvPlotters.setPlotters(width = plotWidth, snsStyle='whitegrid')
hvPlotters.opts.defaults(hvPlotters.opts.Curve(height=plotWidth)) # Fix plot height! Currently not set by default.
xr.__version__
ep.__file__
# For the class, the above settings don't take effect - not sure why; something to do with namespaces/calling sequence?
# Overriding snsStyle does work however... although NOT CONSISTENTLY????
# AH - looks like ordering matters: call set_style LAST (sns.set seems to override it)
import seaborn as sns
sns.set(rc={'figure.figsize':(10,6)}) # Set figure size in inches
sns.set_context("paper")
sns.set_style("whitegrid") # Set plot style
sns.set_palette("Paired") # Set colour mapping
# Try direct fig type setting for PDF output figs
from IPython.display import set_matplotlib_formats
# set_matplotlib_formats('png', 'pdf')
set_matplotlib_formats('svg', 'pdf')
# xr.set_options(display_style='html')
import warnings
# warnings.filterwarnings('once') # Skip repeated numpy deprecation warnings in current build (xr15 env)
warnings.filterwarnings('ignore') # Skip repeated numpy deprecation warnings in current build (xr15 env)
# # Scan for subdirs, based on existing routine in getFiles()
fileBase = Path('/home/paul/ePS/OCS/OCS_survey') # Data dir on Stimpy
# TODO: fix orb label here, currently relies on (different) fixed format
data = ePSmultiJob(fileBase, verbose = 0)
data.scanFiles()
data.jobsSummary()
# Fix labels for plots - currently have some mis-named files!
# Set state labels
v = ['C','B','A','X']
sDict = {n:v[k] for k,n in enumerate(range(9,13))}
for key in data.data.keys():
    data.data[key]['jobNotes']['orbKey'] = key  # Existing orb key, from filename
    data.data[key]['jobNotes']['orbGroup'] = int(key.strip('orb')) + 1  # Corrected orb group
    # data.data[key]['jobNotes']['orbLabel'] = f"HOMO - {12 - int(key.strip('orb')) - 1}"
    data.data[key]['jobNotes']['orbLabel'] = sDict[data.data[key]['jobNotes']['orbGroup']]
    # print(data.data[key]['jobNotes']['orbLabel'])
Note the orbital numbering in the table below:

- `Orb` is the Gamess file output orbital numbering, ordered by energy but not grouped by degeneracy.
- `OrbGrp` is the numbering grouped by degeneracy, also used by ePolyScat, and will be used for labels etc. below.
- BUT - the file names are 0-indexed (oops), so give `OrbGrp - 1` here. Sorry.
data.molSummary()
These are from ePolyScat's `getCro` function, and are LF (unaligned ensemble) results. This provides a good, if general, overview.
# NEED TO SET AGAIN AFTER CLASS IMPORT!
# import warnings
# warnings.filterwarnings('once') # Skip repeated numpy deprecation warnings in current build (xr15 env)
# warnings.filterwarnings('ignore') # Skip repeated numpy deprecation warnings in current build (xr15 env)
# Comparative plot over datasets (all symmetries only)
Etype = 'Eke' # Set for Eke or Ehv energy scale
pGauge = 'L'
Erange=[5, 20] # Plot range (full range if not passed to function below)
# data.plotGetCroComp(pType='SIGMA', Etype = Etype, Erange = Erange, backend = 'hv')
data.plotGetCroComp(pType='SIGMA', pGauge = pGauge, Etype = Etype, Erange = Erange)
# Comparative plot over datasets (all symmetries only)
data.plotGetCroComp(pType='BETA', Etype=Etype, Erange = Erange)
# Possibly may want to replot with hv backend - needs some work still. Cf. XeF2 plotting
# data.plotGetCro(selDims={'Sym':'All'}) #, 'Type':'L'}) #, backend='hv')
# data.plotGetCroComp(pType='BETA', Etype=Etype, Erange = Erange, backend='hv')
20/07/22 PH
Revisiting with updated ADMs from Varun's fitting.
14/01/22 PH
With additional realistic ADMs from VM.
13/12/21 PH
Revisiting AF cases with (hopefully) better match to experimental alignment.
Here we'll start with the molecular alignment defined in terms of axis distribution moments (ADMs), given by parameters $A^K_{Q,S}(t)$:
$ P(\Omega,t) = \sum_{K,Q,S} A^K_{Q,S}(t)D^K_{Q,S}(\Omega)$
Where $P(\Omega,t)$ is the full axis distribution probability, expanded for Euler angles $\Omega$.
This reduces to the 2D case for $S=0$:
$ P(\theta,\phi,t) = \sum_{K,Q} A^K_{Q,0}(t)D^K_{Q,0}(\Omega) = \sum_{K,Q} A^K_{Q}(t)Y_{K,Q}(\theta,\phi)$
In the current code (v1.3.0, 16/08/21), the ADMs can be set directly, and expanded trivially for the 2D case (3D case to do). (Note that the 3D case is supported for AF-BLM calculations, but not yet for direct expansion & visualisation.)
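As a minimal numerical sketch of the 2D expansion above (the $A^K_{Q=0}$ values here are purely illustrative, not taken from the dataset):

```python
# Sketch: expand a 1D axis distribution P(theta) from A^K_0 moments,
# using real spherical harmonics Y_K0(theta) = sqrt((2K+1)/(4pi)) P_K(cos(theta)).
import numpy as np
from scipy.special import eval_legendre

def Y_K0(K, theta):
    """Real spherical harmonic Y_K0 via Legendre polynomials."""
    return np.sqrt((2*K + 1)/(4*np.pi)) * eval_legendre(K, np.cos(theta))

theta = np.linspace(0, np.pi, 181)
A = {0: 1/np.sqrt(4*np.pi), 2: 0.5, 4: 0.1}   # Illustrative A^K_0 moments only

P = sum(A_K * Y_K0(K, theta) for K, A_K in A.items())
# Positive A^2_0 gives a distribution peaked along theta = 0, pi (aligned case)
```

For the full 3D case the $D^K_{Q,S}(\Omega)$ terms would be required, as noted above.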
Expectation values $\langle \cos^n(\theta, t)\rangle$ can be calculated directly from the $P(\theta,t)$ distributions:
$\langle \cos^n(\theta, t)\rangle = \int_{\theta} \cos^n(\theta)P(\theta,t) d\theta$
See, e.g.:
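A minimal numerical sketch of this integral, assuming a simple $\theta$ grid and an isotropic test distribution (a $\sin\theta$ Jacobian is included here, matching the `computeCNs` routine used later):

```python
# Sketch: <cos^2(theta)> for an isotropic test distribution, via direct
# numerical integration with a sin(theta) Jacobian.
import numpy as np

theta = np.linspace(0, np.pi, 361)
P = np.ones_like(theta)            # Isotropic test distribution (unnormalised)

Jt = np.sin(theta)                 # Jacobian for the theta integral
norm = np.trapz(P * Jt, theta)     # Normalise so that <cos^0(theta)> = 1
cos2 = np.trapz(np.cos(theta)**2 * P * Jt, theta) / norm   # Expect 1/3 for the isotropic case
```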
# Load test ADMs from .mat files.
# Code adapted from N2 case, https://epsproc.readthedocs.io/en/latest/methods/geometric_method_dev_pt3_AFBLM_090620_010920_dev_bk100920.html#Test-compared-to-experimental-N2-AF-results...
#
# To check individual file contents just use `loadmat(file)`
#
ADMtype = 'dat'
ADMscaleFactor = 1 # Try quick SF to circumvent thresholding issues in current code - should be renormed out in final Blms
# Tested for 100 - still some weird stuff happening, although very different weird stuff!
# 10 maybe a bit better? Or 2...?
# May also be issue with complex form for ADMs...? Testing setting real part only later
from scipy.io import loadmat
if ADMtype == 'mat':
    # Original ADMs 14/01/22 - Matlab files
    ADMdataDir = Path('~/ePS/OCS/OCS_alignment_ADMs_VM_140122')
    renorm = True  # Additional renormalisation by (2*K+1)/(8*pi^2)
else:
    # Updated ADMs from dat files
    # Note these are also Matlab HDF5, but with different variable labels.
    ADMdataDir = Path('~/ePS/OCS/OCS_ADMs_28K_VM_070722')
    renorm = False  # No additional renormalisation required in this case
fList = ep.getFiles(fileBase = ADMdataDir.expanduser(), fType='.'+ADMtype)
ADMs = []
ADMLabels = []
for f in fList:
    if ADMtype == 'mat':
        item = Path(f).name.replace('ocs.mat', '')  # Get & set variable name (strip suffix)
    else:
        item = Path(f).name.split('_')[0]  # Get & set variable name

    if item == 'time':
        if ADMtype == 'mat':
            item = 'timeocs'
        t = loadmat(f)[item][0,:]
    elif item == 'c2t':
        c2t = loadmat(f)[item][:,0]
    else:
        # Should use re here!
        if ADMtype == 'mat':
            K = int(item.strip('D'))
            item = item + 'ave'
        else:
            # try:
            K = int(item.strip('A')[0])
            # except:
            #     pass  # For now just skip cos^2 t term

        # ADMs.append(np.asarray([K,0,0,loadmat(f)[item][:,0]]))
        # ADMs.append([K,0,0,loadmat(f)[item][:,0]])
        # ADMs.append(loadmat(f)[item][:,0])
        ADMs.append(loadmat(f)[item][:,0]*ADMscaleFactor)
        ADMLabels.append([K,0,0])
# Add K = 0 (population) term?
# NOTE - may need additional renorm?
addPop = True
if addPop:
    if renorm:
        ADMs.append(np.ones(t.size) * np.sqrt(4*np.pi))
    else:
        # ADMs.append(np.ones(t.size))  # * np.sqrt(4*np.pi))
        ADMs.append(np.ones(t.size) * 1/(8*np.pi**2))
    ADMLabels.append([0,0,0])
ADMLabels = np.array(ADMLabels)
1/(8*np.pi**2)
# Test files
# fList
# loadmat(fList[3])['c2t'][:,0]
# Set ADMs for increasing alignment...
# tPoints = 10
# inputADMs = np.asarray([[0,0,0, *np.ones(10)], [2,0,0, *np.linspace(0,1,tPoints)], [4,0,0, *np.linspace(0,0.5,tPoints)]])
# inputADMs = [[0,0,0, *np.ones(10) * np.sqrt(4*np.pi)],
# [2,0,0, *np.linspace(1,2,tPoints)],
# [4,0,0, *np.linspace(0.3,2.8,tPoints)],
# [6,0,0, *np.linspace(0,3,tPoints)],
# [8,0,0, *np.linspace(0,3,tPoints)]]
# Ballpark ADMs from Fig. 1 in [2]
# inputADMs = [[0,0,0, *np.ones(10) * np.sqrt(4*np.pi)],
# [2,0,0, *np.linspace(0.3,2.2,tPoints)],
# [4,0,0, *np.linspace(0,1.5,tPoints)],
# [6,0,0, *np.linspace(0,1.5,tPoints)],
# [8,0,0, *np.linspace(0,1.5,tPoints)]]
# warnings.filterwarnings('ignore') # Skip repeated numpy deprecation warnings in current build (xr15 env)
# For manual sets
# ADMs = ep.setADMs(ADMs = inputADMs) # TODO WRAP TO CLASS (IF NOT ALREADY!)
# For ADMs from VM - set to Xarray using setADMs() routine
ADMs = ep.setADMs(ADMs = ADMs, KQSLabels = ADMLabels, t=t) # TODO WRAP TO CLASS (IF NOT ALREADY!)
attrCpy = ADMs.attrs
if renorm:
    ADMs = ADMs * (2*ADMs.K+1)/(8*np.pi**2)  # Additional renormalisation by (2*K+1)/(8*pi^2)
# Set to separate dataset for testing
ADMs.attrs = attrCpy # Propagate attrs
ADMs.attrs['jobLabel'] = 'Alignment data testing'
data.data['ADM'] = {'ADM': ADMs}
ADMs
# ep.lmPlot(ADMs, xDim='t');
ADMs.unstack().squeeze().real.hvplot.line(x='t').overlay('K')
# ADMs.unstack().squeeze().pipe(np.abs).hvplot.line(x='t').overlay('K')
# Selection & downsampling - adapted from https://epsproc.readthedocs.io/en/latest/methods/geometric_method_dev_pt3_AFBLM_090620_010920_dev_bk100920.html#Test-compared-to-experimental-N2-AF-results...
# See PEMtk for updated routines, https://pemtk.readthedocs.io/en/latest/fitting/PEMtk_fitting_basic_demo_030621-full.html#Subselect-data
# See Xarray docs for basics https://xarray.pydata.org/en/stable/user-guide/indexing.html#indexing-with-dimension-names
trange=[38, 44] # Set range in ps for calc
tStep=2 # Set tStep for downsampling
# SLICE version - was working previously, but broken as of July 2022; not sure if it's a data-type or Xarray version issue. Just get KeyErrors on slice.
# data.data['ADM'] = {'ADM': ADMs.sel(t=slice(trange[0],trange[1], tStep))} # Set and update
# Inds/mask version - seems more robust?
tMask = (ADMs.t>trange[0]) & (ADMs.t<trange[1])
# ind = np.nonzero(tMask) #[0::tStep]
# At = ADMs['time'][:,ind].squeeze()
# ADMin = ADMs['ADM'][:,ind]
data.data['ADM'] = {'ADM': ADMs[:,tMask][:,::tStep]} # Set and update
print(f"Selecting {data.data['ADM']['ADM'].t.size} points")
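The mask-plus-stride pattern used above can be illustrated on a toy 1D DataArray (values here are hypothetical):

```python
# Sketch: boolean-mask selection then stride downsampling on a toy DataArray.
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(100.), dims='t', coords={'t': np.linspace(0, 99, 100)})
trange, tStep = [38, 44], 2

tMask = (da.t > trange[0]) & (da.t < trange[1])   # Boolean mask on the t coord
sub = da[tMask][::tStep]                          # Select, then downsample by stride
# t = 39, 41, 43 survive
```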
# plt.plot(At, np.real(ADMin.T))
# plt.legend(ADMs['ADMlist'])
# plt.xlabel('t/ps')
# plt.ylabel('ADM')
# plt.show()
# # Set in Xarray
# ADMX = ep.setADMs(ADMs = ADMs['ADM'][:,ind], t=At, KQSLabels = ADMs['ADMlist'], addS = True)
# # ADMX
data.data['ADM']['ADM'].unstack().squeeze().real.hvplot.line(x='t').overlay('K')
# Check imag values - not sure if these are causing issues later?
data.data['ADM']['ADM'].unstack().squeeze().imag.hvplot.line(x='t').overlay('K')
data.data['ADM']['ADM']
# Axis distributions P(theta) vs. t (summed over phi)
# Pt = data.padPlot(keys = 'ADM', dataType='ADM', Etype = 't', pStyle='grid', reducePhi='sum', returnFlag = True) #, facetDims=['t', 'Eke'])
Pt = data.padPlot(keys = 'ADM', dataType='ADM', Etype = 't', pType='r', pStyle='grid', reducePhi='sum', returnFlag = True) #, facetDims=['t', 'Eke']) # Real plot to check values OK.
# 22/07/22 - NOTICE SOME -ve VALUES HERE WITH Re plot and updated ADMs. Issue with renorm somewhere still?
# The returned data array contains the plotted values
Pt = data.data['ADM']['plots']['ADM']['pData']
Pt
# Check no negative values...
print(Pt.min() > 0)
def computeCNs(Pt, N = None, Nrange = [0,10], Nstep = 2):
    """
    Compute <cos^N(theta)> expectation values from P(theta) distributions.

    NOTE: currently assumes the 1D case. Should generalise!
    """
    if N is None:
        N = np.arange(Nrange[0], Nrange[1], Nstep)

    # Generate cos^N basis
    # cn = xr.ones_like(Pt) * (np.cos(Pt.Theta)**N)  # Multiplication keeps the t coord, but fails for array-like N with broadcast errors
    cn = xr.ones_like(Pt) * np.cos(Pt.Theta).expand_dims({'N':N}).T.pipe(np.power, N)  # OK with transpose

    Jt = np.sin(Pt.Theta)  # Jacobian
    # norm = (Pt).sum('Theta')  # * np.sqrt(2)
    norm = (Pt * Jt).sum('Theta')  # * (4*np.pi) # * np.sqrt(2)
    # print(norm)  # Debug

    cnMetric = (Pt * cn * Jt).sum('Theta')/norm
    # cnMetric = Pt.dot(cn, dims='Theta')  # Multiply and sum over Theta - dot version (no Jacobian or renorm)
    # cnMetric = (Pt.dot(cn * Jt, dims='Theta')/Pt.dot(Jt, dims='Theta'))  # Dot version with renorm

    return cnMetric
# Compute and plot for a range of N
PtNorm = Pt #/Pt.sum('Theta') # Norm integral to 1 - now set in function
cnMetric = computeCNs(PtNorm)
# cnMetric.plot() # Basic plot
# sf = np.sqrt(2) # Seem to need additional sqrt(2) normalisation? Probably SPH definition and/or integration domain and/or distribution generation params.
sf = 1
ep.lmPlot(cnMetric*sf, xDim='t'); # With lmPlot()
# Seem to need additional sqrt(2) normalisation? Probably SPH definition?
# (cnMetric*np.sqrt(2)).sel(N=2).plot()
# (cnMetric*np.sqrt(2)).plot.line(x='t')
# Testing exact values
# sf = np.sqrt(2)
# sf = 2
# (cnMetric*sf).plot.line(x='t')
# (ADMs).plot.line(x='t')
(cnMetric*sf).fillna(0).hvplot.line(x='t').overlay('N')
cnMetric
1/np.sqrt(8*np.pi**2)
np.sqrt(np.pi)
1/np.sqrt(2)
# Plot ref cos^2 to check
# CURRENTLY NOT MATCHING REF VALUES - not sure if it's ADM renorm issue, or cos^2 calculator issue!
c2tXR = xr.DataArray(c2t, coords=[("t", t)])
c2tXR.name = 'cos2'
c2tXR.real.hvplot.line(x='t')
c2tXR
# Check from ADMs
K = 2
c2ADM = (1/3) + (2/3)*(8*np.pi**2 / (2*K + 1) )* data.data['ADM']['ADM'].sel(K=2).unstack().squeeze(drop=True)
c2ADM.real.hvplot()
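As a quick sanity check on the angular identity underlying this reconstruction (the $8\pi^2/(2K+1)$ factor above reflects the ADM normalisation; the angular part is just $\cos^2\theta = 1/3 + (2/3)P_2(\cos\theta)$):

```python
# Verify cos^2(theta) = 1/3 + (2/3) P_2(cos(theta)) numerically,
# with P_2(x) = (3x^2 - 1)/2.
import numpy as np
from scipy.special import eval_legendre

theta = np.linspace(0, np.pi, 50)
lhs = np.cos(theta)**2
rhs = 1/3 + (2/3) * eval_legendre(2, np.cos(theta))
assert np.allclose(lhs, rhs)
```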
New plotting code, updated for current package versions!
TODO:
# Compute AFBLMs for aligned case
keys = data._keysCheck(None)
keys = list(set(keys) - set(['ADM']))
#*************** Throw everything in method - breaks for large datasets (RAM issues)
# data.AFBLM(keys = keys, AKQS = ADMs, selDims = {'Type':'L'}) # NOTE THIS USES independently set ADMs!
# data.AFBLM(keys = keys, AKQS = data.data['ADM']['ADM'].real, selDims = {'Type':'L'}, thres=1e-3) # OK for 61 t-points (x2 downsample)
# data.AFBLM(keys = keys, AKQS = data.data['ADM']['ADM'].real, selDims = {'Type':'L'}, thres=1e-3, thresDims = ['Eke','t']) # NEED TO ADD DIM CHECKING TO thresDims
# data.AFBLM(keys = keys, AKQS = data.data['ADM']['ADM'].real, selDims = {'Type':'L'}, thres=1e-4) # 1e-4 working OK for 30 t-points (x4 downsample)
# data.AFBLM(keys = keys, AKQS = data.data['ADM']['ADM'].real, selDims = {'Type':'L'}, thres=1e-5) # 1e-5 working OK for 16 t-points (x8 downsample)
# data.AFBLM(keys = keys, AKQS = data.data['ADM']['ADM'].real, selDims = {'Type':'L'}, thres=1e-6) # 1e-6 working OK for 16 t-points (x8 downsample)
# data.AFBLM(keys = keys, AKQS = data.data['ADM']['ADM'].real, selDims = {'Type':'L'}, thres=None) # FAILS for 16 t-points (x8 downsample) at 60Gb RAM on Jake (Swap file size issue????)
# OK for 8 t-points (x16 downsample) at ~55Gb RAM peak on Jake.
# thres=1e-4) BREAKS KERNEL #, thresDims=['Eke','t']) # TODO: currently thresDims must be in matE only for the AFBLM routine.
# 21/07/22 version - above only gives L=2 max, missing L=4,6. But bumping threshold to 1e-4 kills calculation...
# data.AFBLM(keys = keys, selDims = {'Type':'L'}) # Default case... sets/uses ADMs per key
# UPDATE: testing with scale factor for ADMs, and forcing to real - might be better?
#*********** Using grouping to avoid memory issues
# QUITE slow with current code, but more robust
# TODO: make this better with basis set recycle (cf. fitting code)
#
import pickle
def calcAFBLMGroup(data, ADMs, keys, AFBLMdict):
    """Wrapper for XR groupby - run AFBLM calculation for a group of t-points."""
    # Class routine - returns results to the main data structure
    data.AFBLM(keys = keys, AKQS = ADMs, selDims = {'Type':'L'}, thres=None)

    # Restack data to lists for later merge
    for key in keys:
        AFBLMdict[key]['AFBLMlist'].append(data.data[key]['AFBLM'].copy())

    return AFBLMdict
AFBLMdict = {k:{'AFBLMlist':[]} for k in keys}
# tBins = np.arange(38,39.1,0.5) # Test range
tBins = np.arange(trange[0], trange[1] + 0.1, 0.5)
# trange
import time
for item in data.data['ADM']['ADM'].groupby_bins("t", tBins):
    start = time.time()
    print(f'Calculating for group: {item[0]}, {item[1].t.size} t-points.')

    AFBLMdict = calcAFBLMGroup(data, item[1], keys, AFBLMdict)

    end = time.time()
    print(f'Delta: {end - start}s')
# Concat XR lists, push back to the main class, and save each result
for key in keys:
    AFBLMdict[key]['full'] = xr.concat(AFBLMdict[key]['AFBLMlist'], "t")
    AFBLMdict[key]['full'].name = f'{key}_AFBLM'

    # Push back to main class
    data.data[key]['AFBLM'] = AFBLMdict[key]['full'].copy()

    # Save object with Pickle
    with open(f'OCS_{key}_AFBLM_220722.pickle', 'wb') as handle:
        print(f'Saving {key} with pickle')
        pickle.dump(AFBLMdict[key]['full'], handle, protocol=pickle.HIGHEST_PROTOCOL)
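The chunk-then-concat pattern above can be sketched on toy data (all values hypothetical): each bin is processed independently to limit peak memory, then the results are restacked along `t`:

```python
# Sketch: process an array in t-bins via groupby_bins, then concat the results.
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.), dims='t', coords={'t': np.linspace(0, 9, 10)})
bins = np.arange(-0.5, 10, 5)   # Two bins covering all t-points

# Stand-in for the per-group AFBLM calculation: just double each chunk
chunks = [grp * 2 for _, grp in da.groupby_bins('t', bins)]
out = xr.concat(chunks, 't')    # Restack along t
```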
# Save full class with Pickle
# NOW GIVING ERRORS ON RERUN (although worked previously), AttributeError: Can't pickle local object 'plotTypeSelector.<locals>.<lambda>'
# NOT SURE WHERE THIS IS COMING FROM AS YET
# UPDATE: IN data.data['ADM']['plots']['ADM']['pData'].attrs['pTypeDetails']
# TODO: GENERAL DEBUG FOR ACCIDENTAL FUNCTION PASSING HERE!!!!
# import pickle
with open('OCS_AFBLM_220722.pickle', 'wb') as handle:
    data.data['ADM']['plots']['ADM']['pData'].attrs['pTypeDetails'] = []  # UGH, this breaks pickle unless removed.
    pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
    print('Saved data (full class) with pickle')
# for key in keys:
# AFBLMdict[key]['full'] = xr.concat(AFBLMdict[key]['AFBLMlist'],"t")
# AFBLMdict[key]['full'].name = f'{key}_AFBLM'
# with open(f'OCS_AFBLM_220722.pickle', 'wb') as handle:
# data.data['ADM']['plots']['ADM']['pData'].attrs['pTypeDetails'] = [] # UGH, this breaks pickle unless removed.
# pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
# print(f'Saved data (full class) with pickle')
# AFBLMdict[key]['full'] = xr.concat(AFBLMdict['orb10'], "t")
# AFBLMdict['orb10']['full'].unstack('BLM').sel({'m':0}).squeeze().real.hvplot.line(x='t')
# Quick stack to dict & XR dataset
dataType = 'AFBLM'
thres = None
selDims = None
Etype = 'Eke'
pDict = []
for key in keys:
    # subset = ep.matEleSelector(data.data[key][dataType], thres=thres, inds = selDims, dims = [Etype, 't'], sq = True)
    subset = ep.matEleSelector(data.data[key][dataType], thres=thres, inds = selDims, dims = 't', sq = True)
    subset.name = 'BLM'
    subset = ep.plotTypeSelector(subset, pType = 'r')

    # pDict.append(subset.expand_dims({'Orb':[key]}))
    # pDict.append(subset.expand_dims({'Orb':[data.data[key][dataType].attrs['jobLabel']]}))
    pDict.append(subset.expand_dims({'Orb':[data.data[key]['jobNotes']['orbLabel']]}))
# hvDS = hv.Dataset(subset.unstack('BLM'))
# display(hvDS.to(hv.Curve, kdims=Etype).overlay(['l','m']))
# pDict[key] = hvDS.to(hv.Curve, kdims=Etype).overlay(['l','m'])
# hv.HoloMap(pDict, kdims = ['Orb']) # Individual plots OK, but won't stack?
xrAF = xr.concat(pDict, 'Orb')
# xrAF.unstack().dropna(dim = 't', how = 'all').drop('m') #.fillna(0)
NOTE: use the histogram at the side to change the colour mapping scale.
For old line plots see https://phockett.github.io/ePSdata/OCS-preliminary/OCS_orbs8-11_AFBLMs_VM-ADMs_140122-JAKE_tidy.html
# Heatmaps for cross-sections
# Set opts to match sizes - should be able to link plots or set gridspace to handle this?
hvPlotters.setPlotDefaults(fSize = [750,300], imgSize = 600)
cmap = 'coolwarm' # See http://holoviews.org/user_guide/Colormaps.html
# Get cross-sections
XS = np.abs(xrAF.XSrescaled) # Should be in MB
XS.name = 'XS/MB'
hvXS = hvPlotters.hv.Dataset(XS.fillna(0))
# Plot heatmap + ADMs
(hvXS.to(hvPlotters.hv.HeatMap, kdims=['t',Etype]).opts(cmap=cmap).hist() + data.data['ADM']['ADM'].unstack().squeeze().real.hvplot.line(x='t').overlay('K')).cols(1)
# Heatmap for l=2,4,6
# BLM = np.abs(xrAF.XSrescaled) # Should be in MB
# XS.name = 'XS/MB'
hvDS = hvPlotters.hv.Dataset(xrAF.unstack('BLM').fillna(0).squeeze(drop=True))
# Plot heatmap + ADMs
(hvDS.select(l=[2,4], m=0).to(hvPlotters.hv.HeatMap, kdims=['t',Etype]).opts(cmap=cmap).hist() + data.data['ADM']['ADM'].unstack().squeeze().real.hvplot.line(x='t').overlay('K')).cols(1) #.overlay(['l','t']).layout('Orb').cols(1)
# SUPER SLOW WITH LARGER DATASETS - must be a way to improve on this? Seems a bit better with subsets and if NaNs are dropped properly, also if high-dim vars are overlaid rather than HoloMapped
# hvPlotters.opts.defaults(hvPlotters.opts.Curve(height=700))
# lSel = [2,4,6]
# hvDS = hvPlotters.hv.Dataset(xrAF.unstack('BLM'))
# hvDS.select(l=lSel).to(hvPlotters.hv.Curve, kdims=Etype).overlay(['l','m'])
# hvDS.to(hvPlotters.hv.Curve, kdims=Etype).overlay(['l','m'])
# # Try with subsets to speed up plotters
# hvDS = hvPlotters.hv.Dataset(xrAF.unstack().dropna(dim = 't', how = 'all').drop('m').fillna(0))
# # hvDS.to(hvPlotters.hv.Curve, kdims=Etype).overlay(['l','m'])
# # Full set of line plots
# hvDS.to(hvPlotters.hv.Curve, kdims=Etype).overlay(['l'])
Plots need some work still!
# Try to match expt....
Estates = {'X': (11.9,12.6),
'A': (7.5,8.7),
'B': (7.5,8.7),
'C': (5.5,6.2)}
# For different energy slices need to subselect & restack? May be a better way... Probably with hv.transforms?
hvObjsXS = {}
hvObjs = {}
for k,v in Estates.items():
    print(k,v)
    # hvObjs[k] = hvDS.select(l=[2,4,6], Orb=k, Eke=v).to(hvPlotters.hv.Curve, kdims='t') #.overlay(['l'])  # Not sure how to drop the E dim here...
    # hvObjs[k] = xrAF.unstack().sel(l=[2,4,6], Orb=k, Eke=slice(v[0],v[1])).sum('Eke').unstack().drop('m').real.fillna(0).hvplot.line(x='t')
    # hvObjs[k] = xrAF.unstack().sel(l=[2,4], Orb=k, Eke=slice(v[0],v[1])).sum('Eke').unstack().drop('m').real.fillna(0).hvplot.line(x='t')
    hvObjs[k] = xrAF.unstack().sel(l=[2,4,6,8], m=0, Orb=k, Eke=slice(v[0],v[1])).sum('Eke').drop('m').real.hvplot.line(x='t').overlay('l')  # For the no-threshold case also need m=0 for contiguous plots!
    # hvObjs[k] = xrAF.unstack().sel(l=[2,4], Orb=k, Eke=slice(v[0],v[1])).sum('Eke').unstack().drop('m').pipe(np.abs).fillna(0).hvplot.line(x='t')
    hvObjsXS[k] = XS.sel(Orb=k, Eke=slice(v[0],v[1])).sum('Eke').fillna(0).hvplot.line(x='t')
# Stack - should be able to use HoloMap, but not working so far
# for k,v in Estates.items():
# Cross-sections for E slices in Orb order
# (hvObjsXS['X'] + hvObjsXS['A'] + hvObjsXS['B'] + hvObjsXS['C']).opts(shared_axes=False).cols(1) #.opts(opts.Cuver(axiswise=True))
(hvObjsXS['X'] + hvObjsXS['A'] + hvObjsXS['B'] + hvObjsXS['C']).cols(1)
# layout = []
# for k in Estates.keys():
# layout.append(hvObjsXS[k].overlay('l'))
# BLM for E slices in Orb order
(hvObjs['X'] + hvObjs['A'] + hvObjs['B'] + hvObjs['C']).cols(1) #.opts(shared_axes=False).cols(1)
# hvPlotters.hv.HoloMap(hvObjs, kdims=['l'], group='Orb') #.relabel(group='Phases', label='Sines', depth=1)
# hvDS.select(l=[2,4,6], Orb=k, Eke=v)
# print(data.jobInfo['ePolyScat'][0])
# print('Run: ' + jobInfo['Starting'][0].split('at')[1])
import scooby
scooby.Report(additional=['epsproc', 'xarray', 'jupyter', 'holoviews', 'pandas'])
# Check current Git commit for local ePSproc version
!git -C {Path(ep.__file__).parent} branch
!git -C {Path(ep.__file__).parent} log --format="%H" -n 1
# Check current remote commits
!git ls-remote --heads https://github.com/phockett/ePSproc
20/07/22
Updating from OCS_orbs8-11_AFBLMs_VM-ADMs_140122-JAKE_tidy-replot-280122.ipynb with LF results and updated ADMs.
Note LF all-orb plotting from http://jake/jupyter/user/paul/doc/tree/ePS/OCS/epsman2021/OCS_orb11_proc_131221-JAKE.ipynb - but didn't previously upload!
28/01/22
Adding additional plot types & energy slices to match expt.
14/01/22
Quick attempt at AF-BLMs for OCS, based on guessed/reasonable ADMs (action starts below, after the setup stage). Status: basically working; there may be some remaining bugs with ADMs, cos^2 and renormalisation, but trends with alignment should be correct.
For methods: https://epsproc.readthedocs.io/en/dev/demos/ePSproc_class_demo_161020.html
14/01/22 PH
Rerunning with some additional test ADMs from Varun Makhija.
07/01/22 PH
Versions
13/12/21 PH
STATUS: basics in place, need to trim code and fix a few things still.
Revisiting AF cases with (hopefully) better match to experimental alignment.
TODO:
FIXED:
09/06/21 PH
Quick look at initial ePS results for orb 11 (HOMO), 2.5eV step size.
TODO:
For methods: https://epsproc.readthedocs.io/en/dev/demos/ePSproc_class_demo_161020.html