arpys package

Submodules

arpys.dataloaders module

Provides several Dataloader objects which open different kinds of data files - typically acquired at different sources (i.e. beamlines at various synchrotrons) - and crunch them into the same shape and form. The output form is an argparse.Namespace object like this:

Namespace(data,
          xscale,
          yscale,
          zscale,
          angles,
          theta,
          phi,
          E_b,
          hv)

Where the entries are as follows:

data np.array of shape (z,y,x); this has to be a 3D array even for 2D data - z=1 in that case. x, y and z are the lengths of the x-, y- and z-scales, respectively. The convention (z,y,x) is used over (x,y,z) as a consequence of matplotlib.pcolormesh transposing the data when plotting.
xscale np.array of shape(x); the x axis corresponding to the data.
yscale np.array of shape(y); the y axis corresponding to the data.
zscale np.array of shape(z); the z axis corresponding to the data.
angles 1D np.array; corresponding angles on the momentum axis of the analyzer. Depending on the beamline (analyzer slit orientation) this is expressed as theta or tilt. Usually coincides with one of the x-, y- or z-scales.
theta float or 1D np.array; the value of theta (or tilt in rotated analyzer slit orientation). Mostly used for the angle-to-k conversion.
phi float; value of the azimuthal angle phi. Mostly used for angle-to-k conversion.
E_b float; typical binding energy for the electrons represented in the data. In principle there is not just a single binding energy, but since this value is only used in the angle-to-k conversion, where the typical variations on the order of <10 eV don't matter, it suffices to give an average or maximum value.
hv float or 1D np.array; the photon energy used in the scan(s). In case of a hv-scan, this obviously coincides with one of the x-, y- or z-scales.

Note that any change in the output structure has consequences for all programs and routines that receive data from a dataloader (which is pretty much everything in this module) and for previously pickled files.
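For orientation, here is a minimal sketch of constructing such a namespace by hand; all values are hypothetical and only illustrate the shapes and attribute names listed above:

import numpy as np
from argparse import Namespace

# Hypothetical 2D cut: 100 energy channels, 50 angular channels, z=1
data = np.zeros((1, 100, 50))
D = Namespace(data=data,
              xscale=np.linspace(-15, 15, 50),     # e.g. analyzer angles (degrees)
              yscale=np.linspace(-0.5, 0.1, 100),  # e.g. energies (eV)
              zscale=np.array([0]),
              angles=np.linspace(-15, 15, 50),
              theta=0,
              phi=0,
              E_b=0,
              hv=21.2)
print(D.data.shape)   # (1, 100, 50) -> (z, y, x)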

arpys.dataloaders.start_step_n(start, step, n)[source]

Return an array that starts at value start and takes n steps of size step.
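The implementation is not shown here; a minimal sketch of an equivalent function, assuming a plain numpy construction:

import numpy as np

def start_step_n(start, step, n):
    # start, start+step, ..., start+(n-1)*step
    return start + step * np.arange(n)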

class arpys.dataloaders.Dataloader(*args, **kwargs)[source]

Bases: object

Base dataloader class (interface) from which others inherit some methods (specifically the __repr__() function). The date attribute should indicate the last date that this specific dataloader worked properly for files of its type (as beamline filetypes may vary with time).

name = 'Base'
date = ''
print_m(*messages)[source]

Print message to console, adding the dataloader name.

class arpys.dataloaders.Dataloader_Pickle(*args, **kwargs)[source]

Bases: arpys.dataloaders.Dataloader

Load data that has been saved using python’s pickle module. ARPES pickle files are assumed to just contain the data namespace the way it would be returned by any Dataloader of this module.

name = 'Pickle'
load_data(filename)[source]
class arpys.dataloaders.Dataloader_i05(*args, **kwargs)[source]

Bases: arpys.dataloaders.Dataloader

Dataloader object for the i05 beamline at the Diamond Light Source.

name = 'i05'
load_data(filename)[source]
class arpys.dataloaders.Dataloader_ALS(*args, **kwargs)[source]

Bases: arpys.dataloaders.Dataloader

Object that allows loading and saving of ARPES data from the MAESTRO beamline at ALS, Berkeley, in the newer .h5 format. Organization of the ALS h5 file (June 2018):

/-0D_Data
| |
| +-Cryostat_A
| +-Cryostat_B
| +-Cryostat_C
| +-Cryostat_D
| +-I0_NEXAFS
| +-IG_NEXAFS
| +-X                       <--- only present for xy scans (probably)
| +-Y                       <--- "
| +-Sorensen Program        <--- only present for dosing scans
| +-time
| 
+-1D_Data
| |
| +-Swept_SpectraN          <--- Not always present
|
+-2D_Data
| |
| +-Swept_SpectraN          <--- Usual location of data. There can be 
|                                several 'Swept_SpectraN', each with an 
|                                increasing value of N. The relevant data 
|                                seems to be in the highest numbered 
|                                Swept_Spectra.
|
+-Comments
| |
| +-PreScan
|
+-Headers
  |
  +-Beamline
  | |
  | +-[...]
  | +-EPU_POL               <--- Polarization (Integer encoded)
  | +-BL_E                  <--- Beamline energy (hv)
  | +-[...]
  | 
  +-Computer
  +-DAQ_Swept
  | |
  | +-[...]
  | +-SSPE_0                <--- Pass energy (eV)
  | +-[...]
  | |
  +-FileFormat
  +-Low_Level_Scan
  +-Main
  +-Motors_Logical
  +-Motors_Logical_Offset
  +-Motors_Physical
  +-Motors_Sample           <--- Contains sample coordinates (xyz & angles)
  | |
  | +-[...]
  | +-SMOTOR3               <--- Theta
  | +-SMOTOR5               <--- Phi
  | +-[...]
  |
  +-Motors_Sample_Offset
  +-Notebook
name = 'ALS'
get(group, field_name)[source]

Return the value of property field_name from h5File group group. field_name must be a bytestring (e.g. b'some_string')! This also returns a bytes object, remember to cast it correctly. Returns None if field_name was not found in group.

load_data(filename)[source]
class arpys.dataloaders.Dataloader_ALS_fits(work_func=4)[source]

Bases: arpys.dataloaders.Dataloader

Object that allows loading and saving of ARPES data from the MAESTRO beamline at ALS, Berkeley, which is in .fits format.

name = 'ALS .fits'
k_stretch = 1.05
CUT = 'null'
MAP = 'Slit Defl'
HV = 'mono_eV'
DOPING = 'Sorensen Program'
load_data(filename)[source]
load_cut()[source]

Read data from a ‘cut’, which is just one slice of energy vs k.

load_map(swept)[source]

Read data from a ‘map’, i.e. several energy vs k slices, and bring them into the right shape for the gui, which is (energy, k_parallel, k_perpendicular).

load_hv_scan()[source]

Read data from a hv scan, i.e. a series of energy vs k cuts (each of shape (n_kx, n_energy)), each belonging to a different photon energy hv. The returned shape in this case must be (photon_energies, energy, k). The same is used for doping scans, where the output shape is (doping, energy, k).

class arpys.dataloaders.Dataloader_SIS(filename=None)[source]

Bases: arpys.dataloaders.Dataloader

Object that allows loading and saving of ARPES data from the SIS beamline at PSI which is in HDF5 format.

name = 'SIS'
min_cuts_for_map = 10
load_data(filename)[source]

Extract and return the actual ‘data’, i.e. the recorded map/cut. Also return labels which provide some indication of what the data means.

load_zip(filename)[source]

Load and store a deflector mode file from SIS-ULTRA.

read_viewer(viewer)[source]

Extract the file ID from a SIS-ULTRA deflector mode output file.

read_metadata(keys, metadata_file)[source]

Read the metadata from a SIS-ULTRA deflector mode output file.

load_h5(filename)[source]

Load and store the full h5 file and extract relevant information.

class arpys.dataloaders.Dataloader_ADRESS(*args, **kwargs)[source]

Bases: arpys.dataloaders.Dataloader

ADRESS beamline at SLS, PSI.

name = 'ADRESS'
load_data(filename)[source]
class arpys.dataloaders.Dataloader_CASSIOPEE(*args, **kwargs)[source]

Bases: arpys.dataloaders.Dataloader

CASSIOPEE beamline at SOLEIL synchrotron, Paris.

name = 'CASSIOPEE'
date = '18.07.2018'
HV = 'hv'
FSM = 'FSM'
load_data(filename)[source]

Single cuts are stored as two files: One file contains the data and the other the metadata. Maps, hv scans and other external loop-scans are stored as a directory containing these two files for each cut/step of the external loop. Thus, this dataloader distinguishes between directories and single files and changes its behaviour accordingly.

load_from_dir(dirname)[source]

Load 3D data from a directory as it is output by the IGOR macro used at CASSIOPEE. The dir is assumed to contain two files for each cut:

BASENAME_INDEX_i.txt     -> beamline related metadata
BASENAME_INDEX_ROI1_.txt -> data and analyzer related metadata

To be more precise, the assumptions made on the filenames in the directory are:

  • the INDEX is surrounded by underscores (_) and appears after the first underscore.
  • the string ROI appears in the data filename.
load_from_file(filename)[source]

Load just a single cut. However, at CASSIOPEE they output .ibw files if the cut does not belong to a scan…

load_from_ibw(filename)[source]

Load scan data from an IGOR binary wave file. Luckily someone has already written an interface for this (the python igor package).

load_from_txt(filename)[source]
get_metadata(filename)[source]

Extract some of the metadata stored in a CASSIOPEE output text file. Also try to detect the line number below which the data starts (for np.loadtxt’s skiprows).

Returns

i int; last line number still containing metadata.
energy 1D np.array; energy (y-axis) values.
angles 1D np.array; angle (x-axis) values.
hv float; photon energy for this cut.
get_outer_loop(dirname, filenames)[source]

Try to determine the scantype and the corresponding z-axis scale from the additional metadata textfiles. These follow the assumptions made in self.load_from_dir. Additionally, the MONOCHROMATOR section must come before the UNDULATOR section as in both sections we have a key hv but only the former makes sense. Return a string for the scantype, the extracted z-scale and the value for hv for non-hv-scans (scantype, zscale, hvs[0]) or (None, None, hvs[0]) in case of failure.

arpys.dataloaders.load_data(filename, exclude=None, suppress_warnings=False)[source]

Try to load some dataset filename by iterating through all_dls and applying the respective dataloader’s load_data method. If it works: great. If not, try with the next dataloader. Collects and prints all raised exceptions in case no dataloader succeeded.
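A short usage sketch (the filename is hypothetical):

from arpys import dataloaders as dl

D = dl.load_data('some_scan.h5')   # tries every known dataloader in turn
print(D.data.shape)                # (z, y, x)
print(D.hv)                        # photon energy (or array for hv scans)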

arpys.dataloaders.dump(D, filename, force=False)[source]

Wrapper for pickle.dump(). Does not overwrite if a file of the given name already exists, unless force is True.

Parameters

D python object to be stored.
filename str; name of the output file to create.
force boolean; if True, overwrite existing file.
arpys.dataloaders.load_pickle(filename)[source]

Shorthand for loading python objects stored in pickle files.

Parameters

filename str; name of file to load.
arpys.dataloaders.update_namespace(D, *attributes)[source]

Add arbitrary attributes to a Namespace.

Parameters

D argparse.Namespace; the namespace holding the data and metadata. The format is the same as what is returned by a dataloader.
attributes tuples or len(2) lists; (name, value) pairs of the attributes to add. Where name is a str and value any python object.
arpys.dataloaders.add_attributes(filename, *attributes)[source]

Add arbitrary attributes to an argparse.Namespace that is stored as a python pickle file. Simply opens the file, updates the namespace with update_namespace and writes back to file.

Parameters

filename str; name of the file to update.
attributes tuples or len(2) lists; (name, value) pairs of the attributes to add. Where name is a str and value any python object.
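A short usage sketch tying the pickle-related helpers together (filenames and attribute values are hypothetical):

from arpys import dataloaders as dl

D = dl.load_data('some_scan.h5')
dl.dump(D, 'some_scan.p')                 # refuses to overwrite unless force=True
D2 = dl.load_pickle('some_scan.p')
dl.update_namespace(D2, ('sample', 'sample A'), ('T_sample', 15))
dl.add_attributes('some_scan.p', ('sample', 'sample A'))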

arpys.fit2d module

Implementation of the 2D ARPES spectra fitting procedure as outlined by Li et al. in Coherent organization of electronic correlations as a mechanism to enhance and stabilize high-Tc cuprate superconductivity (DOI: 10.1038/s41467-017-02422-2).

arpys.fit2d.im_sigma_factory(lamb=1, T=0, i_step=0, e_step=0, w_step=0.01, i_gauss=0, e_gauss=0, w_gauss=0.01, offset=0)[source]

Factory to create functions that represent the imaginary part of the self-energy. Confer the documentation of im_sigma for explanations of the parameters. The factory pattern is used here because a function for im_sigma is needed in the Kramers-Kronig relations that give re_sigma.

Returns a function of the energy.

See also:im_sigma re_sigma
arpys.fit2d.im_sigma(E, lamb=1, T=0, i_step=0, e_step=0, w_step=0.1, i_gauss=0, e_gauss=0, w_gauss=0.1, offset=0)[source]

Imaginary part of the self-energy. It is parametrized as follows:

im_sigma(E) = lamb * sqrt(E^2 + (pi*k*T)^2)

                         i_step
              + ----------------------------
                 exp((E-e_step)/w_step) + 1

              + i_gauss * exp(-(E-e_gauss)^2 /(2*w_gauss))

              + offset

Parameters

E float or 1d-array; binding energy/argument to the self-energy in eV
lamb float; coefficient of “standard” self-energy term
T float; temperature in K
i_step float; coefficient of step function
e_step float; energy at which step occurs
w_step float; width of the step function
i_gauss float; coefficient of Gaussian
e_gauss float; center energy of Gaussian
w_gauss float; width of Gaussian (sigma)
offset float; constant additive offset

All energies are given in eV.

Constants

pi 3.14159…
k Boltzmann constant: 8.6173e-05 eV/K
arpys.fit2d.re_sigma(E, im_sig, e0=-5, e1=5)[source]

Calculate the real part of the self-energy from its imaginary part using the Kramers-Kronig relation:

                     e1
                     /  im_sigma(E')
re_sigma(E) = 1/pi * |  ------------ dE'
                     /     E' - E
                    e0

Parameters

E float; energy at which to evaluate the self-energy
im_sig func; function for the imaginary part of the self-energy
e0 float; lower integration bound
e1 float; upper integration bound
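A minimal numerical sketch of this relation using scipy's principal-value integration; this only illustrates the formula above and is not necessarily how arpys implements it:

import numpy as np
from scipy.integrate import quad

def re_sigma_sketch(E, im_sig, e0=-5, e1=5):
    # weight='cauchy' evaluates the principal value of im_sig(e)/(e - E)
    value, _ = quad(im_sig, e0, e1, weight='cauchy', wvar=E)
    return value / np.pi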
arpys.fit2d.self_energy_factory(im_kwargs={}, re_kwargs={})[source]

Combine real and imaginary parts of the self-energy to yield the full, complex self-energy.

Returns a function of the energy.

Parameters

im_kwargs dict; keyword arguments to im_sigma_factory
re_kwargs dict; keyword arguments to re_sigma

Confer respective documentations for further explanations on the parameters.
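A hedged usage sketch with purely illustrative parameter values:

import numpy as np
from arpys import fit2d

im_kwargs = dict(lamb=0.5, T=20, offset=0.01)   # illustrative values
sigma = fit2d.self_energy_factory(im_kwargs=im_kwargs)

E = np.linspace(-0.3, 0.1, 50)
sig = np.array([sigma(e) for e in E])           # complex self-energy at each energy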

arpys.fit2d.g11(k, E, sig, band, gap)[source]

Return the complex electron removal portion of the Green’s function in the Nambu-Gorkov formalism:

                          E - sig(E) + band(k)
g11(k, E) = -------------------------------------------------
             (E-sig(E))^2 - band(k)^2 - gap*(1-Re(sig(E))/E)

Parameters

k array of length 2; k vector (in-plane) at which to evaluate g11
E float; energy at which to evaluate g11
sig func; function that returns the complex self-energy at E
band func; function that returns the bare band at k
gap func; function that returns the superconducting gap at k
arpys.fit2d.g11_alt(k, E, sig, band, gap)[source]

Variation of g11 which takes precalculated values of sig, band and gap.

Parameters

k array of length 2; k vector (in-plane) at which to evaluate g11
E float; energy at which to evaluate g11
sig complex; value of the complex self-energy at E
band float; value of the bare band at this k
gap float; value of the gap at this k
arpys.fit2d.arpes_intensity(k, E, i0, im_kwargs, re_kwargs, band, gap)[source]

Return the expected ARPES intensity at point (E,k) as modeled by:

                     g11(k, E)
I_ARPES = i0 * (-Im -----------) * f(E, T)
                        pi

Note that no broadening is applied.

Parameters

k array of length 2; k vector (in-plane) at which to evaluate
E float; energy at which to evaluate ARPES intensity
i0 float; global amplitude multiplier
im_kwargs dict; kwargs to im_sigma_factory
re_kwargs dict; kwargs to re_sigma
band func; function that returns the bare band at k
gap func; function that returns the superconducting gap at k
arpys.fit2d.compute_self_energy_parallel(self_energy_func, n_proc, E)[source]

Calculate the self energy more efficiently by splitting the work to several subprocesses. Since multiprocessing.Pool cannot handle local functions and lambdas, we have to do the job splitting by hand.

Parameters

self_energy_func func; function of E that returns the complex self-energy.
n_proc int; number of subprocesses to spawn (should be smaller than or equal to the number of available cpus)
E 1d-array; energies at which to evaluate the self-energy.

Returns

self_energies 1d-array of same length as E;

This function simply splits the evaluation:

self_energies = [self_energy_func(e) for e in E]

into n_proc separate parts:

self_energies = [self_energy_func(e) for e in E[i0:i1]] + 
                [self_energy_func(e) for e in E[i1:i2]] + 
                [self_energy_func(e) for e in E[i2:i3]] + 
                ...

all of which can be evaluated simultaneously by a different subprocess.
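For illustration, the splitting of the energy array into chunks could look like this (a sketch of the idea only, not the actual implementation):

import numpy as np

E = np.linspace(-0.5, 0.1, 1000)
n_proc = 4
chunks = np.array_split(E, n_proc)
# Each chunk would be handed to its own subprocess, which evaluates
#   [self_energy_func(e) for e in chunk]
# and the partial results are concatenated in the original order.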

arpys.fit2d.arpes_intensity_alt(k, E, i0, im_kwargs, re_kwargs, band, gap, n_proc=1)[source]

Alternative implementation of arpes_intensity that is hopefully a bit faster.

Return the expected ARPES intensity at point (E,k) as modeled by:

                     g11(k, E)
I_ARPES = i0 * (-Im -----------) * f(E, T)
                        pi

Note that no broadening is applied.

Parameters

k 2D array of shape (2,nk); k vectors (in-plane)
E array of length ne; energies at which to evaluate ARPES intensity
i0 float; global amplitude multiplier
im_kwargs dict; kwargs to im_sigma_factory
re_kwargs dict; kwargs to re_sigma
band func; function that returns the bare band at k
gap func; function that returns the superconducting gap at k

Returns

intensity 2D array of shape (ne, nk);
arpys.fit2d.band_factory(bottom, m_e=1)[source]

Create a function that represents a parabolic band with band bottom at energy bottom.

Parameters

bottom float; energy of the band bottom in eV, measured from the Fermi level.
m_e float; effective electron mass in units of electron rest mass. Tunes the opening of the parabola.

Returns

band func; a function of a length 2 array k that returns the energy of a band at given k. k should be given in inverse Angstrom.
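A minimal sketch of such a factory, assuming the standard free-electron dispersion E(k) = bottom + hbar^2 |k|^2 / (2 m_e m_0) with k in inverse Angstrom; the actual parametrization in arpys may differ:

import numpy as np

HBAR2_OVER_2M0 = 3.81   # eV * Angstrom^2

def band_factory_sketch(bottom, m_e=1):
    def band(k):
        k = np.asarray(k, dtype=float)
        return bottom + HBAR2_OVER_2M0 / m_e * np.sum(k**2)
    return band

band = band_factory_sketch(bottom=-0.2, m_e=0.5)
print(band([0.1, 0.0]))   # energy in eV at k = (0.1, 0)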

arpys.postprocessing module

Contains different tools to post-process (ARPES) data.

arpys.postprocessing.make_slice_nd(data, dimension, index, integrate=0)[source]

Create a slice at index index, integrating +- integrate pixels along dimension dimension of some N-dimensional array data.

Parameters

data N-dimensional np.array; the data to take a slice from
dimension int; the dimension along which to slice.
index int; index along dimension at which to slice.
integrate int; optionally integrate by +- integrate pixels around index.

Returns

result (N-1)-dimensional np.array; the resulting data slice.
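A minimal sketch of the idea; here the integration window is averaged, while the package may sum instead:

import numpy as np

def make_slice_nd_sketch(data, dimension, index, integrate=0):
    start = max(index - integrate, 0)
    stop = min(index + integrate + 1, data.shape[dimension])
    window = np.take(data, np.arange(start, stop), axis=dimension)
    return window.mean(axis=dimension)

cube = np.random.rand(10, 20, 30)
print(make_slice_nd_sketch(cube, dimension=0, index=5, integrate=2).shape)   # (20, 30)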
arpys.postprocessing.make_slice(data, d, i, integrate=0, silent=False)[source]

Create a slice out of the 3d data (l x m x n) along dimension d (0,1,2) at index i. Optionally integrate around i.

Parameters

data array-like; map data of the shape (l x m x n)
d int, d in (0, 1, 2); dimension along which to slice
i int, 0 <= i < data.size[d]; The index at which to create the slice
integrate int, 0 <= integrate < |i - n|; the number of slices above and below slice i over which to integrate
silent bool; toggle warning messages

Returns

res np.array; Slice at index with dimensions shape[:d] + shape[d+1:] where shape = (l, m, n).
arpys.postprocessing.make_map(data, i, integrate=0)[source]

Create a ‘top view’ slice for FSM data. If the values of i or integrate are bigger than what is possible, they are automatically reduced to the maximum possible.

Parameters

data array-like; map data of the shape (l x m x n) where l corresponds to the number of energy values
i int, 0 <= i < n; The index at which to create the slice
integrate int, 0 <= integrate < |i - n|; the number of slices above and below slice i over which to integrate

Returns

res np.array; Map at given energy with dimensions (m x n)
See also:make_slice.

make_map is basically a special case of make_slice.

arpys.postprocessing.arbitrary_slice_discrete(data, p0, p1, xunits=None, yunits=None)[source]

Create a slice of some 3d data cube along the plane defined by two given points p0 and p1 and parallel to the z-axis, taking only discrete values at the pixels (as opposed to using an interpolation).

Parameters

data 3d array of shape (z, y, x); the data cube to slice from.
p0 2d array-like; starting point in the xy plane.
p1 2d array-like; endpoint in the xy plane
xunits 1d array of length x; units used along the x axis.
yunits 1d array of length y; units used along the y axis.

Returns

cut 2d array of shape (z, h); the extracted cut. h is the distance in pixels between p0 and p1.
arpys.postprocessing.arbitrary_slice_interpolated()[source]
arpys.postprocessing.normalize_globally(data, minimum=True)[source]

The simplest approach: normalize the whole dataset by the global min- or maximum.

Parameters

data array-like; the input data of arbitrary dimensionality
minimum boolean; if True, use the min, otherwise the max function

Returns

res np.array; normalized version of input data
arpys.postprocessing.convert_data(data)[source]

Helper function to convert data to the right shape.

arpys.postprocessing.convert_data_back(data, d, m, n)[source]

Helper function to convert data back to the original shape which is determined by the values of d, m and n (outputs of convert_data).

arpys.postprocessing.normalize_per_segment(data, dim=0, minimum=False)[source]

Normalize each column/row by its respective max value.

Parameters

data array-like; the input data with shape (m x n) or (1 x m x n)
dim int; along which dimension to normalize (0 or 1)
minimum boolean; if True, use the min, otherwise the max function

Returns

res np.array; normalized version of input data in same shape
arpys.postprocessing.normalize_per_integrated_segment(data, dim=0, profile=False, in_place=True)[source]

Normalize each MDC/EDC by its integral.

Parameters

data array-like; the input data with shape (m x n) or (1 x m x n).
dim int; along which dimension to normalize (0 or 1)
profile boolean; if True return a tuple (res, norm) instead of just res.
in_place boolean; whether or not to update the input data in place. This can be used if one is only interested in the normalization profile and does not want to spend computation time with actually changing the data (as might be the case when processing FSMs). If this is False data will not be in the output. TODO This doesn’t make sense.

Returns

res np.array; normalized version of input data in same shape. Only given if in_place is True.
norms np.array; 1D array of length X for dim=0 and Y for dim=1 of normalization factors for each channel. Only given if profile is True.
arpys.postprocessing.norm_int_edc(data, profile=False)[source]

Shorthand for normalize_per_integrated_segment with arguments dim=1, profile=False, in_place=True. Returns the normalized array.

arpys.postprocessing.normalize_above_fermi(data, ef_index, n_pts=10, dist=0, inverted=False, dim=1, profile=False, in_place=True)[source]

Normalize data to the mean of the n_pts smallest values above the Fermi level.

Parameters

data array-like; data of shape (m x n) or (1 x m x n)
ef_index int; index of the Fermi level in the EDCs
n_pts int; number of points above the Fermi level to average over
dist int; distance from the Fermi level before starting to take points for the normalization. The points taken correspond to EDC[ef_index+dist:ef_index+dist+n_pts] (in the non-inverted case)
dim either 1 or 2; 1 if EDCs have length n, 2 if EDCs have length m
inverted boolean; this should be set to True if higher energy values come first in the EDCs
profile boolean; if True, the list of normalization factors is returned additionally

Returns

data array-like; normalized data of same shape as input data
profile 1D-array; only returned as a tuple with data (data, profile) if argument profile was set to True. Contains the normalization profile, i.e. the normalization factor for each channel. Its length is m if dim==2 and l if dim==1.
arpys.postprocessing.norm_to_smooth_mdc(data, mdc_index, integrate, dim=1, n_box=15, recursion_level=1)[source]

Normalize a cut to a smoothened average MDC of the intensity above the Fermi level.

Parameters

data array of shape (1 x m x n) or (m x n);
mdc_index int; index in data at which to take the mdc
integrate int; number of MDCs above and below mdc_index over which to integrate
dim either 1 or 2; 1 if EDCs have length n, 2 if EDCs have length m
n_box int; box size of linear smoother. Confer smooth
recursion_level int; number of times to iteratively apply the smoother. Confer smooth

Returns

result normalized data in same shape
arpys.postprocessing.subtract_bg_fermi(data, n_pts=10, ef=None, ef_index=None)[source]

Use the mean of the counts above the Fermi level as a background and subtract it from every channel/k. If no Fermi level or index of the Fermi level is specified, do the same as subtract_bg_matt() but along EDCs instead of MDCs.

Parameters

data array-like; the input data with shape (m x n) or (1 x m x n) containing momentum in y (n momentum points (?)) and energy along x (m energy points) (plotting amazingly inverts x and y)
n_pts int; number of smallest points to take in order to determine bg

Returns

res np.array; bg-subtracted version of input data in same shape
arpys.postprocessing.subtract_bg_matt(data, n_pts=5, profile=False)[source]

Subtract background following the method in C.E.Matt’s “High-temperature Superconductivity Restrained by Orbital Hybridisation”. Use the mean of the n_pts smallest points in the spectrum for each energy (i.e. each MDC).

Parameters

data array-like; the input data with shape (m x n) or (1 x m x n) containing momentum in y (n momentum points) and energy along x (m energy points) (plotting amazingly inverts x and y)
n_pts int; number of smallest points to take in each MDC in order to determine bg
profile boolean; if True, a list of the background values for each MDC is returned additionally.

Returns

res np.array; bg-subtracted version of input data in same shape
profile 1D-array; only returned as a tuple with data (data, profile) if argument profile was set to True. Contains the background profile, i.e. the background value for each MDC.
arpys.postprocessing.subtract_bg_shirley(data, dim=0, profile=False, normindex=0)[source]

Use an iterative approach for the background of an EDC as described in DOI:10.1103/PhysRevB.5.4709. Mathematically, the value of the EDC after BG subtraction for energy E EDC’(E) can be expressed as follows:

                       E1
                       /
EDC'(E) = EDC(E) - s * | EDC(e) de
                       /
                       E

where EDC(E) is the value of the EDC at E before bg subtraction, E1 is a chosen energy value (in our case the last value in the EDC) up to which the subtraction is applied and s is chosen such that EDC’(E0)=EDC’(E1) with E0 being the starting value of the bg subtraction (in our case the first value in the EDC).

In principle, this is an iterative method, so it should be applied repeatedly, until no appreciable change occurs through an iteration. In practice this convergence is reached in 4-5 iterations at most and even a single iteration may suffice.

Parameters

data np.array; input data with shape (m x n) or (1 x m x n) containing an E(k) cut
dim int; either 0 or 1. Determines whether the input is arranged as E(k) (n EDCs of length m, dim=0) or k(E) (m EDCs of length n, dim=1)
profile boolean; if True, a list of the background values for each MDC is returned additionally.

Returns

data np.array; has the same dimensions as the input array.
profile 1D-array; only returned as a tuple with data (data, profile) if argument profile was set to True. Contains the background profile, i.e. the background value for each MDC.
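A minimal sketch of the iteration for a single EDC, assuming the first element corresponds to E0 and the last to E1 (arpys applies this along a chosen dimension of a whole cut):

import numpy as np

def shirley_bg_sketch(edc, n_iterations=5):
    edc = np.asarray(edc, dtype=float)
    bg = np.zeros_like(edc)
    for _ in range(n_iterations):
        corrected = edc - bg
        # integral of the current EDC' from each energy E up to E1
        integral = np.cumsum(corrected[::-1])[::-1]
        # choose s such that EDC'(E0) = EDC'(E1)
        s = (edc[0] - edc[-1]) / integral[0]
        bg = s * integral
    return bg

# background-subtracted EDC: edc - shirley_bg_sketch(edc)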
arpys.postprocessing.subtract_bg_shirley_old(data, dim=0, normindex=0)[source]

Use an iterative approach for the background of an EDC as described in DOI:10.1103/PhysRevB.5.4709. Mathematically, the value of the EDC after BG subtraction for energy E EDC’(E) can be expressed as follows:

                       E1
                       /
EDC'(E) = EDC(E) - s * | EDC(e) de
                       /
                       E

where EDC(E) is the value of the EDC at E before bg subtraction, E1 is a chosen energy value (in our case the last value in the EDC) up to which the subtraction is applied and s is chosen such that EDC’(E0)=EDC’(E1) with E0 being the starting value of the bg subtraction (in our case the first value in the EDC).

In principle, this is an iterative method, so it should be applied repeatedly, until no appreciable change occurs through an iteration. In practice this convergence is reached in 4-5 iterations at most and even a single iteration may suffice.

Parameters

data np.array; input data with shape (l x m) or (1 x l x m) containing an E(k) cut
dim int; either 0 or 1. Determines whether the input is arranged as E(k) (m EDCs of length l, dim=0) or k(E) (l EDCs of length m, dim=1)

Returns

data np.array; has the same dimensions as the input array.
arpys.postprocessing.subtract_bg_kaminski(data)[source]

Unfinished

Use the method of Kaminski et al. (DOI: 10.1103/PhysRevB.69.212509) to subtract background. The principle is as follows: A lorentzian + a linear background y(x) = ax + b is fitted to momentum distribution curves. One then uses the magnitude of the linear component at every energy as the background at that energy for a given k point.

Parameters

data array-like; the input data with shape (l x m) or (l x m x 1) containing momentum in y (m momentum points) and energy along x (l energy points) (plotting amazingly inverts x and y)
arpys.postprocessing.apply_to_map(data, func, dim=1, output=True, fargs=(), fkwargs={})[source]

Untested. Apply a postprocessing function func, which is designed to be applied to an energy vs k cut, to each cut of a map.

Parameters

data array; 3D array of shape (l x m x n) representing the data.
func function; a function that can be applied to 2D data
dim int; dimension along which to apply the function func: 0 -> l cuts, 1 -> m cuts, 2 -> n cuts
output boolean; if True, collect the output of every application of func on each slice in a list results and return it. Can be set to False if no return is needed in order to save some memory.
fargs tuple; positional arguments to be passed on to func.
fkwargs dict; keyword arguments to be passed on to func.

Returns

returns list; contains the return value of every call to func that was made in the order they were made.
arpys.postprocessing.laplacian(data, dx=1, dy=1, a=None)[source]

Apply the second derivative (Laplacian) to the data.

Parameters

data array-like; the input data with shape (m x n) or (1 x m x n)
dx float; distance at x axis
dy float; distance at y axis
a float; scaling factor between the x and y derivatives. Should be close to dx/dy.

Returns

res np.array; second derivative of input array in same dimensions
arpys.postprocessing.curvature(data, dx=1, dy=1, cx=1, cy=1)[source]

Apply the curvature method (DOI: 10.1063/1.3585113) to the data.

Parameters

data array-like; the input data with shape (m x n) or (1 x m x n)
dx float; distance at x axis
dy float; distance at y axis
cx float; weight of gradient in x direction
cy float; weight of gradient in y direction

Returns

res np.array; curvature of input array in same dimensions
arpys.postprocessing.smooth(x, n_box, recursion_level=1)[source]

Implement a linear midpoint smoother: Move an imaginary ‘box’ of size ‘n_box’ over the data points ‘x’ and replace every point with the mean value of the box centered at that point. Can be called recursively to apply the smoothing n times in a row by setting ‘recursion_level’ to n.

At the endpoints, the arrays are assumed to continue by repeating their value at the start/end so as to minimize endpoint effects. I.e. the array [1,1,2,3,5,8,13] becomes [1,1,1,1,2,3,5,8,13,13,13] for a box with n_box=5.

Parameters

x 1D array-like; the data to smooth
n_box int; size of the smoothing box (i.e. number of points around the central point over which to take the mean). Should be an odd number - otherwise the next lower odd number is taken.
recursion_level int; equals the number of times the smoothing is applied.

Returns

res np.array; smoothed data points of same shape as input.
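A minimal sketch of such a box smoother with the edge padding described above, assuming an odd n_box:

import numpy as np

def smooth_sketch(x, n_box, recursion_level=1):
    half = n_box // 2
    # repeat the endpoint values to minimize edge effects
    padded = np.concatenate([np.full(half, x[0]), np.asarray(x, dtype=float),
                             np.full(half, x[-1])])
    res = np.convolve(padded, np.ones(n_box) / n_box, mode='valid')
    return smooth_sketch(res, n_box, recursion_level - 1) if recursion_level > 1 else res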
arpys.postprocessing.smooth_derivative(x, n_box=15, n_smooth=3)[source]

Apply linear smoothing to some data, take the derivative of the smoothed curve, smooth that derivative and take the derivative again. Finally, apply a last round of smoothing.

Parameters

Same as in arpys.postprocessing.smooth().  
n_smooth corresponds to recursion_level.  
arpys.postprocessing.zero_crossings(x, direction=0)[source]

Return the indices of the points where the data in x crosses 0, going from positive to negative values (direction = -1), vice versa (direction=1) or both (direction=0). This is detected simply by a change of sign between two subsequent points.

Parameters

x 1D array-like; data in which to find zero crossings
direction int, one of (-1, 0, 1); see above for explanation

Returns

crossings list; list of indices of the elements just before the crossings
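A minimal sketch of the sign-change detection (illustrative only):

import numpy as np

def zero_crossings_sketch(x, direction=0):
    change = np.diff(np.sign(np.asarray(x)))
    if direction == -1:
        idx = np.where(change < 0)[0]   # positive -> negative
    elif direction == 1:
        idx = np.where(change > 0)[0]   # negative -> positive
    else:
        idx = np.where(change != 0)[0]
    return list(idx)

print(zero_crossings_sketch([1, 0.5, -0.2, -1, 0.3]))   # [1, 3]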
arpys.postprocessing.old_detect_fermi_level(edc, n_box, n_smooth, orientation=1)[source]

This routine is more useful for detecting local extrema, not really for detecting steps.

arpys.postprocessing.detect_step(signal, n_box=15, n_smooth=3)[source]

Try to detect the biggest, clearest step in a signal by smoothing it and looking at the maximum of the first derivative.

arpys.postprocessing.fermi_fit_func(E, E_F, sigma, a, b, T=10)[source]

Fermi Dirac distribution with an additional linear inelastic background and convoluted with a Gaussian for the instrument resolution.

Parameters

E 1d-array; energy values in eV
E_F float; Fermi energy in eV
sigma float; instrument resolution in units of the energy step size in E.
a float; slope of the linear background.
b float; offset of the linear background at E_F.
T float; temperature.
arpys.postprocessing.fit_fermi_dirac(energies, edc, e_0, T=10, sigma0=10, a0=0, b0=-0.1)[source]

Try fitting a Fermi Dirac distribution convoluted by a Gaussian (simulating the instrument resolution) plus a linear component on the side with E<E_F to a given energy distribution curve.

Parameters

energies 1D array of float; energy values.
edc 1D array of float; corresponding intensity counts.
e_0 float; starting guess for the Fermi energy. The fitting procedure is quite sensitive to this.
T float; (fixed) temperature.
sigma0 float; starting guess for the standard deviation of the Gaussian in units of pixels (i.e. the step size in energies).
a0 float; starting guess for the slope of the linear component.
b0 float; starting guess for the linear offset.

Returns

p list of float; contains the fit results for [E_F, sigma, a, b].
res_func callable; the fit function with the optimized parameters. With this you can just do res_func(E) to get the value of the Fermi-Dirac distribution at energy E.
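A hedged usage sketch on a synthetic gold edge, assuming the two documented return values come back as a tuple (p, res_func):

import numpy as np
from arpys import postprocessing as pp

# synthetic EDC: Fermi edge at 16.9 eV kinetic energy plus a flat background
energies = np.linspace(16.7, 17.1, 400)
edc = 100 * pp.fermi_dirac(energies, mu=16.9, T=10) + 5

p, res_func = pp.fit_fermi_dirac(energies, edc, e_0=16.88, T=10)
E_F, sigma, a, b = p
print(E_F, res_func(E_F))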
arpys.postprocessing.fit_gold(D, e_0=None, T=10)[source]

Apply a Fermi-Dirac fit to all EDCs of an ARPES Gold spectrum.

Parameters

D argparse.Namespace object; ARPES data and metadata as is created by a Dataloader object. Assumes the energies to be stored in D.xscale and the data to be of shape (1, n_angles, n_energies).
e_0 float; starting guess for the Fermi energy in the energy units provided in D. If this is not given, a starting guess will be estimated by detecting the step in the integrated spectrum using detect_step.
T float; Temperature.

Returns

fermi_levels list; Fermi energy for each EDC.
sigmas list; standard deviations of instrument resolution Gaussian for each EDC. This is in units of energy steps. To convert to energy, just multiply by the energy step in D.
slopes array; slopes of the linear background in the Fermi fit function.
offsets array; offsets of the linear background in the Fermi fit function.
functions list of callables; functions of energy that produce the fit for each EDC.
arpys.postprocessing.fit_gold_array(gold, energies, e_0=None, T=10)[source]

Apply a Fermi-Dirac fit to all EDCs of an ARPES Gold spectrum, represented by a numpy array.

Parameters

gold 2d np.array; shape (n_angles, n_energies).
energies 1d np.array of length n_energies.
e_0 float; starting guess for the Fermi energy in the energy units provided in energies. If this is not given, a starting guess will be estimated by detecting the step in the integrated spectrum using detect_step.
T float; Temperature.

Returns

fermi_levels array; Fermi energy for each EDC.
sigmas array; standard deviations of instrument resolution Gaussian for each EDC. This is in units of energy steps. To convert to energy, just multiply by the energy step in energies.
slopes array; slopes of the linear background in the Fermi fit function.
offsets array; offsets of the linear background in the Fermi fit function.
functions list of callables; functions of energy that produce the fit for each EDC.
arpys.postprocessing.get_pixel_shifts(energies, fermi_levels, reference=None)[source]

Use the output from fit_gold to create a list of integers, indicating how many pixels each channel should be shifted by.

Parameters

energies 1d-array, length N; kinetic energies as output by most beamlines
fermi_levels 1d-array, length M; detected Fermi steps in energy units, as output by fit_gold
reference None or int; index of the reference channel to use. If None, use the mean shift (rounded down) as a reference.

Returns

pixel_shifts 1d-array, length M; number of pixels (positive or negative) each channel should be shifted by.
arpys.postprocessing.apply_pixel_shifts(data, shifts, dim=None)[source]

Shift the arrays in data along dimension dim by a number of pixels as specified by shifts. len(shifts) has to be equal to data.shape[dim]. If dim is not specified, the axis of data that has the same length as shifts is automatically chosen.

Parameters

data 2d-array of shape (N x M);
shifts 1d-array of length data.shape[dim];
dim int or None; can be given to specify along which axis the shifts should be applied (e.g. for square arrays).

Returns

shifted_data 2d-array of the same shape as the input data, except that shifted_data.shape[dim] = data.shape[dim] - cutoff, where cutoff is equal to twice the absolute maximum shift.
cutoff int; the number of pixels that had to be cut off at the start and end of each array.
arpys.postprocessing.adjust_fermi_level(energies, fermi_levels)[source]

Use the output from fit_gold to create an adjusted energy mesh, useful for plotting.

Parameters

energies 1d-array, length N; kinetic energies as output by most beamlines
fermi_levels 1d-array, length M; detected Fermi steps in energy units, as output by fit_gold

Returns

adjusted_energies 2d-array, NxM;
arpys.postprocessing.angle_to_k(alpha, beta, hv, dalpha=0, dbeta=0, orientation='horizontal', work_func=4)[source]

Convert angles of the experimental geometry to k-space coordinates. Confer the sheet “ARPES angle to k-space conversion” [doc/a2k.pdf] for detailed explanations.

Parameters

alpha array of length nkx; angles in degrees along the independent rotation (often called “theta” or “polar” at beamlines).
beta array of length nky; angles in degrees along the dependent rotation (not the azimuth, often called “tilt”).
hv float; used photon energy in eV.
dalpha float; offset to alpha in degrees.
dbeta float; offset to beta in degrees.
orientation str; determines the analyzer slit orientation, which can be horizontal (default) or vertical. The first letter of the given string must be either ‘h’ or ‘v’.
work_func float; value of the work function in eV.

Returns

KX array of shape (nkx, nky); mesh of k values in parallel direction in units of inverse Angstrom.
KY array of shape (nkx, nky); mesh of k values in perpendicular direction in units of inverse Angstrom.
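A hedged usage sketch; the angle ranges, photon energy and work function are made up:

import numpy as np
from arpys import postprocessing as pp

alpha = np.linspace(-15, 15, 300)   # analyzer angles (degrees)
beta = np.linspace(-10, 10, 100)    # scan angles (degrees)
KX, KY = pp.angle_to_k(alpha, beta, hv=100, dalpha=0, dbeta=0,
                       orientation='horizontal', work_func=4.5)
print(KX.shape, KY.shape)           # (300, 100) each, in inverse Angstrom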
arpys.postprocessing.best_a2k(alpha, beta, hv, dalpha=0, dbeta=0, orientation='horizontal', work_func=4)[source]

Alias for angle_to_k

arpys.postprocessing.a2k(D, lattice_constant, dalpha=0, dbeta=0, orientation='horizontal')[source]

Shorthand angle to k conversion that takes the output of a Dataloader object as input and passes all necessary information on to the actual converter (angle_to_k).

arpys.postprocessing.alt_a2k(angle, tilt, theta, phi, hv, a, b=None, c=None, work_func=4)[source]

Unfinished. Alternative angle-to-k conversion approach using rotation matrices. Determine the norm of the k vector from the kinetic energy using the relation:

       sqrt( 2*m_e*hv )
k_F = ------------------
             hbar

Then initiate a k vector in the direction measured and rotate it with the given tilt, theta and phi angles.

This follows Denys’ definitions (github ilikecarbs)

arpys.postprocessing.step_function_core(x, step_x=0, flip=False)[source]

Implement a perfect step function f(x) with step at step_x:

        / 0   if x < step_x
        |
f(x) = {  0.5 if x = step_x
        |
        \ 1   if x > step_x

Parameters

x array; x domain of function
step_x float; position of the step
flip boolean; Flip the > and < signs in the definition
arpys.postprocessing.step_function(x, step_x=0, flip=False)[source]

np.ufunc wrapper for step_function_core. Confer corresponding documentation.

arpys.postprocessing.step_core(x, step_x=0, flip=False)[source]

Implement a step function f(x) with step at step_x:

        / 0 if x < step_x
f(x) = {
        \ 1 if x >= step_x
See also:step_function_core.
arpys.postprocessing.step_ufunc(x, step_x=0, flip=False)[source]

np.ufunc wrapper for step_core. Confer corresponding documentation.

arpys.postprocessing.lorentzian(x, a=1, mu=0, gamma=1)[source]

Implement a Lorentzian curve f(x) given by the expression:

f(x) = a / ( pi*gamma*( 1 + ((x-mu)/gamma)^2 ) )

Parameters

x array; variable at which to evaluate f(x)
a float; amplitude (maximum value of curve)
mu float; mean of curve (location of maximum)
gamma float; half-width at half-maximum (HWHM) of the curve

Returns

res array containing the value of the Lorentzian at every point of input x

Warning

Deprecated. Use lorentzian() instead.

arpys.postprocessing.gaussian(x, a=1, mu=0, sigma=1)[source]

Implement a Gaussian bell curve f(x) given by the expression:

f(x) = a * exp( -1/2 * ((x-mu)/sigma)^2 )

Parameters

x array; variable at which to evaluate f(x)
a float; amplitude (maximum value of curve)
mu float; mean of curve (location of maximum)
sigma float; standard deviation (width of the curve)

Returns

res array containing the value of the Gaussian at every point of input x
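Minimal sketches of the two curves as defined above (illustrative re-implementations, not the arpys code itself):

import numpy as np

def lorentzian_sketch(x, a=1, mu=0, gamma=1):
    return a / (np.pi * gamma * (1 + ((x - mu) / gamma)**2))

def gaussian_sketch(x, a=1, mu=0, sigma=1):
    return a * np.exp(-0.5 * ((x - mu) / sigma)**2)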
arpys.postprocessing.gaussian_step(x, step_x=0, a=1, mu=0, sigma=1, flip=False, after_step=None)[source]

Implement (as a broadcastable np.ufunc) a sort-of convolution of a step-function with a Gaussian bell curve, defined as follows

        / g(x, a, mu, sigma) if x < step_x
f(x) = {
        \ after_step         if x >= step_x

where g(x) is the gaussian.

Parameters

x array; x domain of function
step_x float; position of the step
a float; prefactor of the Gaussian
mu float; mean of the Gaussian
sigma float; standard deviation of the Gaussian
flip boolean; Flip the > and < signs in the definition
after_step float; if not None, set a constant value that is assumed after the step. Otherwise the value of the Gaussian at step_x is assumed.
arpys.postprocessing.fermi_dirac(E, mu=0, T=4.2)[source]

Return the Fermi Dirac distribution with chemical potential mu at temperature T for energy E. The Fermi Dirac distribution is given by:

                 1
n(E) = ----------------------
        exp((E-mu)/(kT)) + 1

and assumes values from 0 to 1.

Parameters

E 1d-array of float; the energy values in electronvolt.
mu float; the chemical potential in electronvolt.
T float; temperature in Kelvin
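A minimal sketch of the distribution as defined above:

import numpy as np

K_B = 8.6173e-05   # Boltzmann constant in eV/K

def fermi_dirac_sketch(E, mu=0, T=4.2):
    return 1 / (np.exp((E - mu) / (K_B * T)) + 1)

print(fermi_dirac_sketch(np.array([-0.05, 0.0, 0.05]), mu=0, T=300))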
arpys.postprocessing.rotation_matrix(theta)[source]

Return the 2x2 rotation matrix for an angle theta (in degrees).
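A minimal sketch of the expected matrix; degrees are converted to radians first:

import numpy as np

def rotation_matrix_sketch(theta):
    t = np.radians(theta)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

print(rotation_matrix_sketch(90).round(3))   # [[ 0. -1.] [ 1.  0.]]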

arpys.postprocessing.rotate_XY(X, Y, theta=45)[source]

Rotate a coordinate mesh of (n by m) points by angle theta. X and Y hold the x and y components of the coordinates respectively, as if generated through np.meshgrid.

Parameters

X n by m array; x components of coordinates.
Y n by m array; y components of coordinates.
theta float; rotation angle in degrees

Returns

U,V n by m arrays; U (V) contains the x (y) components of the rotated coordinates. These can be used as arguments to pcolormesh()
See also:arpys.postprocessing.rotate_xy()
arpys.postprocessing.rotate_xy(x, y, theta=45)[source]

Rotate the x and y coordinates of rectangular 2D data by angle theta.

Parameters

x 1D array of length n; x values of the rectangular grid
y 1D array of length m; y values of the rectangular grid
theta float; rotation angle in degrees

Returns

U,V n by m arrays; U (V) contains the x (y) components of the rotated coordinates. These can be used as arguments to pcolormesh()
See also:arpys.postprocessing.rotate_XY()
arpys.postprocessing.symmetrize_around(data, p0, p1)[source]
Unfinished:

Symmetrize around the line connecting the points p0 and p1. p0, p1: indices of points

arpys.postprocessing.flip_linear(data, i, x=None)[source]

Flip an array around pixel i to the right.

Parameters

data 1d-array of length n0;
i int; index around which to flip.
x 1d-array of length n0; optional; equidistantly spaced monotonically in- or decreasing independent variable corresponding to data.

Results

flipped 1d-array of length 2*i + 1.
new_x 1d-array of same length as flipped. The extrapolated x values.
arpys.postprocessing.symmetrize_linear(data, i, x=None, valid=True)[source]

Symmetrize an array around pixel i.

Parameters

data 1d-array of length n0;
i int; index around which to symmetrize.
x 1d-array of length n0; optional; equidistantly spaced, monotonically in- or decreasing independent variable corresponding to data.
valid bool; if True, cut the result to the region where the original and flipped data are overlaid (length=2*(n0-i)-1 if i>=n0/2 or length=2*(i+1)-1 if i<n0/2), otherwise return the full array (length=2*i+1 if i>=n0/2 or length=2*(n0-i)-1 if i<n0/2).

Results

symmetrized 1d-array; the symmetrized data. Its length depends on the value of the parameter valid.
new_x 1d-array of same length as symmetrized. The extrapolated x values.

See also

flip_linear(): the difference to this is that here the array values are being summed up, which leads to a valid and less valid region.

symmetrize_rectangular()

arpys.postprocessing.symmetrize_rectangular(data, i, k=None)[source]

Symmetrize a piece of rectangular data around column i by simply mirroring the data at column i and overlaying it in the correct position.

Parameters

data array of shape (ny, nx0); data to be symmetrized.
i int; index along x (0 <= i < nx0) around which to symmetrize.
k array of length nx0; the original k values (x scale of the data). If this is given, the new, expanded k values will be calculated and returned.

Returns

result array of shape (ny, nx1); the x dimension has expanded.
sym_k array of length nx1; the expanded k values (x scale to the resulting data). If k is None, sym_k will also be None.

Here’s a graphical explanation for the coordinates used in the code for the case i < nx0/2. If i > nx0/2 we flip the data first such that we can apply the same procedure and coordinates.

Original image:

+----------+
|   |      |
|          |
|   |      |
|          |
|   |      |
+----------+

Mirrored image:

x----------x
|      |   |
|          |
|      |   |
|          |
|      |   |
x----------x

Overlay both images:

x--+-------x--+
|  |   |   |  |
|  |       |  |
|  |   |   |  |
|  |       |  |
|  |   |   |  |
x--+-------x--+
^  ^   ^   ^  ^
0  |   i  nx0 |
   |          |
nx0-2*i      nx1 = 2*(nx0-i)
arpys.postprocessing.symmetrize_map(kx, ky, mapdata, clean=False, overlap=False, n_rot=4, debug=False)[source]

Apply all valid symmetry operations (rotation around 90, 180, 270 degrees, mirror along x=0, y=0, y=x and y=-x axis) to a map and sum their results together in order to get a symmetric picture. The clean option allows one to automatically cut off unsymmetrized parts and returns a data array of reduced size, containing only the points that could get fully symmetrized. In this case, kx and ky are also trimmed to the right size. This function implements a couple of optimizations, leading to slightly more complicated but faster running code.

Parameters

kx n length array
ky m length array
mapdata (m x n) array (counterintuitive to kx and ky but consistent with pcolormesh)
clean boolean; toggle whether or not to cut off unsymmetrized parts

Returns

kx, ky if clean is False, these are the same as the input kx and ky. If clean is True the arrays are cut to the right size
symmetrized 2D array; the symmetrized map. Either it has shape (m x n) (if clean is False) or smaller, depending on how much could be symmetrized.
arpys.postprocessing.find_symmetry_index(spectrum, eps=0.13, sub_ac=None)[source]

Find the symmetry center of an ARPES spectrum (angle vs energy) by autocorrelating it with its shifted mirror image.

Parameters

spectrum 2d-array; shape (n_e, n_k) where n_e is the number of energy channels and n_k the number of angular or k channels.
eps float; threshold that determines whether the autocorrelation is subtracted. It should not be subtracted when the image is already very symmetric around its center. Higher values make it less likely that it is subtracted.
sub_ac boolean or None; manually fix whether or not the autocorrelation should be subtracted. Overrides eps.

Returns

imax int; the index along the angular dimension (dimension 1 in spectrum) at which the symmetry center is found to be.
debug list; debug information [corr, autocorr, delta, delta0, SUBTRACT_AUTOCORR]
arpys.postprocessing.get_lines(data, n, dim=0, i0=0, i1=-1, offset=0.2, integrate='max', **kwargs)[source]

Extract n evenly spaced rows/columns from data along dimension dim between indices i0 and i1. The extracted lines are normalized and offset such that they can be nicely plotted close by each other - like for example in a typical EDC or MDC plot.

Parameters

data 2d np.array; the data from which to extract lines.
n int; the number of lines to extract.
dim int; either 0 or 1, specifying the dimension along which to extract lines.
i0 int; starting index in data along dim.
i1 int; ending index in data along dim.
offset float; how much to vertically translate each successive line.
integrate int or other; specifies how many channels around each line index should be integrated over. If anything but a small enough integer is given, defaults to the maximally available integration range.
kwargs any other passed keyword arguments are discarded.

Returns

lines list of 1d np.arrays; the extracted lines.
indices list of int; the indices at which the lines were extracted.
arpys.postprocessing.plot_edcs(ax, data, energy, momenta=None, lw=0.5, color='k', label_fmt='{:.2f}', n=10, offset=0.2, **getlines_kwargs)[source]

Create an EDC plot by plotting every nth EDC in data against energy. The EDCs are normalized to their overall maximum and shifted from each other by offset. See get_lines for more options on the extraction of EDCs.

Parameters

ax matplotlib.axes.Axes; the axes in which to plot.
data 2d np.array; the data from which to extract EDCs.
energy 1d np.array; the associated energy values.
momenta 1d np.array; the associated angle or momentum values. This is optional but if given, will be used to calculate appropriate tick values.
lw float; the linewidth of the plotted lines.
color any color argument understood by matplotlib. Color of the plotted lines.
label_fmt str; a format string for the ticklabels.
n int; number of lines to extract from data.
offset float; how far apart to space the lines from each other.
getlines_kwargs other kwargs are passed to get_lines

Returns

lines2ds list of Line2D objects; the drawn lines.
xticks list of float; locations of the 0 intensity value of each line
xtickvalues list of float; if momenta were supplied, corresponding xtick values in units of momenta. Otherwise this is just a copy of xticks.
xticklabels list of str; xtickvalues formatted according to label_fmt.
arpys.postprocessing.plot_cuts(data, dim=0, zs=None, labels=None, max_ppf=16, max_nfigs=4, **kwargs)[source]

Plot all (or only the ones specified by zs) cuts along dimension dim on separate subplots onto matplotlib figures.

Parameters

data 3D np.array with shape (z,y,x); the data cube.
dim int; one of (0,1,2). Dimension along which to take the cuts.
zs 1D np.array; selection of indices along dimension dim. Only the given indices will be plotted.
labels 1D array/list of length z. Optional labels to assign to the different cuts
max_ppf int; maximum number of plots per figure.
max_nfigs int; maximum number of figures that are created. If more would be necessary to display all plots, a warning is issued and only every N’th plot is created, where N is chosen such that the whole ‘range’ of plots is represented on the figures.
kwargs dict; keyword arguments passed on to pcolormesh. Additionally, the kwarg gamma for power-law color mapping is accepted.
arpys.postprocessing.k_abs(hv, E_B=0, phi=4, lattice_constant=None)[source]

Calculate the absolute value of the electron momentum k (in inverse Angstrom or in units of pi/lattice_constant) for an incident photon energy hv, electron binding energy E_B and work function phi (all given in electronvolt).

Parameters

hv float or 1d-array; incident photon energy (eV)
E_B float; electron binding energy (eV)
phi float; work function (eV)
lattice_constant float; lattice constant along a direction of interest. If this is given, the result will be expressed in units of pi/lattice_constant instead of inverse Angstrom.

Returns

k_abs float or 1d-array; Absolute value of the photoelectron momentum given either in units of inverse Angstrom (if lattice_constant is None) or in pi/lattice_constant.
See also:hv
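A minimal sketch of the free-electron relation behind k_abs, assuming E_B is given as a positive binding energy and k = 0.5123*sqrt(E_kin) with E_kin in eV and k in inverse Angstrom (the exact conventions in arpys may differ):

import numpy as np

def k_abs_sketch(hv, E_B=0, phi=4, lattice_constant=None):
    k = 0.5123 * np.sqrt(hv - E_B - phi)   # inverse Angstrom
    if lattice_constant is not None:
        k *= lattice_constant / np.pi      # convert to units of pi/lattice_constant
    return k

print(k_abs_sketch(100))   # ~5.0 inverse Angstrom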
arpys.postprocessing.kramers_kronig(f, omega, e0=-10, e1=10, points=[], verbosity=0)[source]

Directly calculate the Kramers-Kronig transform of a function f(omega). This uses numerical integration as opposed to hilbert which makes use of the Fourier transform. Performance-wise, this is therefore much slower, as a diverging integral has to be calculated for every point omega. The result, however, should be more precise.

Parameters

f callable; the function on which the Kramers-Kronig transform is applied.
omega array; list of points at which to evaluate the KK transform. One can often save some calculations by making use of evenness/oddness of the input function f.
e0 float; lower integration bound.
e1 float; upper integration bound.
points list; ‘dangerous’ points at which a divergence is expected.
verbosity int; if > 0, print progress report
arpys.postprocessing.hv(k, E_B=0, phi=4, lattice_constant=None)[source]

Inverse of k_abs. Refer to the documentation there.

Module contents