Note: This tutorial will be deprecated and entirely replaced upon the release of version 0.3.
SlothPy makes extensive use of the HDF5 file format (a powerful and portable binary format for fast I/O and big data) to store all the data related to the software. The core .slt file format is in fact an .hdf5/.h5 file in disguise, so it can additionally be opened and modified with programs such as HDFView and HDF Compass, or accessed directly through the h5py Python wrapper. Firstly, let us import the package:
[ ]:
import slothpy as slt
To start using SlothPy, one has to create at least one instance of the core class - Compound. A Compound is intrinsically associated with a .slt file stored on your disk. You can create a Compound from an output file of quantum chemistry software, access an existing .slt file, or add more data to it. These operations are handled by the Compound creation methods. There is one .rassi.h5 file produced from a MOLCAS calculation included in the “examples” folder of our repository on GitHub. We will use it as an example in this tutorial. To create a Compound from it, we have to provide its relative path and use the slt.compound_from_molcas() method. Note: always check the documentation of the methods you use on our website, or do so in your editor (e.g. Shift+Tab in Jupyter Notebooks).
[ ]:
NdCo = slt.compound_from_molcas(".", "Nd_tutorial", "bas3", "./examples", "NdCo_DG_bas3")
After creation, you can check what is inside your file using the print() method, or simply like this:
[ ]:
NdCo
You can see the list of HDF5 groups and datasets contained in the file, together with their Description attributes. This is how SlothPy saves all your results. Once a .slt file exists on your disk, you can add more ab initio results to it (just use the path and name of the existing file) or access it at a later point using the slt.compound_from_slt() method.
[ ]:
NdCo = slt.compound_from_slt(".", "Nd_tutorial")
# If you already created "Nd_tutorial.slt" on your disk, use this line only.
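Because the .slt format is plain HDF5, you can also inspect the file outside SlothPy, for example with the h5py wrapper mentioned above (a minimal sketch; it requires the h5py package):
[ ]:
import h5py

# List every group and dataset stored in the .slt (HDF5) file
with h5py.File("./Nd_tutorial.slt", "r") as f:
    f.visit(print)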
All available methods are accessed through an instance of the Compound class, which constitutes the user interface and API documented in the Reference Manual. Let us start by computing the molar powder-averaged magnetisation of our Nd-based compound. Firstly, we need an import to create ranges of magnetic field and temperature values:
[ ]:
from numpy import linspace
fields_mth = linspace(0.0001, 7, 50)
temperatures_mth = linspace(1, 10, 10)
The method for computing magnetisation as a function of field and temperature is called calculate_magnetisation().
[ ]:
%%time
mth = NdCo.calculate_magnetisation("bas3", fields_mth, 4, temperatures_mth, 10)
Here, we calculate the powder-averaged magnetisation (using a Lebedev-Laikov integration grid) of the Nd-based compound in the magnetic field range from 1 Oe to 7 T (50 points) and for 10 temperature values from 1 K to 10 K. We include in the Zeeman Hamiltonian only the 10 Spin-Orbit states from the ground multiplet $ ^{4}I_{9/2} $ of Nd(III). The calculation should finish almost immediately due to the very small number of SO states included.
The result, returned from the function call to the mth variable, is a NumPy ndarray of shape (10, 50) with the structure [temperatures, fields]. It is ready for you to process with Python however you want. Remember to always check the output format in the Reference Manual (or using your editor/IDE) - see the Returns section of each method.
[ ]:
mth
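The array can be indexed directly, for instance (using the variables defined above):
[ ]:
mth.shape   # (10, 50) - one row per temperature
mth[0, -1]  # magnetisation at T = 1 K and the last field value (7 T)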
We can, for example, plot it as a function of the magnetic field, M(H), iterating over the different temperatures:
[ ]:
from matplotlib.pyplot import plot, show
for mh in mth:
    plot(fields_mth, mh)
show()
or save it to a .csv file on disk so that it is accessible to your other programs:
[ ]:
from numpy import savetxt
savetxt('my_mth_array.csv', mth.T, delimiter=',') # Here we transpose to have data for each temperature in columns
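To read such a file back into NumPy later, you can use loadtxt (a minimal sketch):
[ ]:
from numpy import loadtxt
mth_reloaded = loadtxt('my_mth_array.csv', delimiter=',').T  # transpose back to the [temperatures, fields] layout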
Now, let us run the above calculation once again, but this time we will use a better, denser grid and include more SO states. It will take a little more time. Additionally, we will save the results to our Nd_tutorial.slt file using the slt keyword (consult the documentation for a comprehensive description of all the options).
[ ]:
%%time
mth = NdCo.calculate_magnetisation("bas3", fields_mth, 6, temperatures_mth, 32, slt="bas3")
If you invoke the representation of the Nd_tutorial.slt file once again, you can see that a new group “bas3_magnetisation” was created, which contains datasets for the magnetisation (bas3_mth), magnetic fields (bas3_fields), and temperatures (bas3_temperatures), all with Description attributes.
[ ]:
NdCo
SlothPy provides an array-like interface for reading and writing data from and to .slt files. As an example, let us read the magnetisation into another variable, mth_custom_read, together with the field values:
[ ]:
mth_custom_read = NdCo["bas3_magnetisation", "bas3_mth"]
field_values = NdCo["bas3_magnetisation", "bas3_fields"]
Here, we provide the full group and dataset names to access the data. Now we can do whatever we want with the arrays. Let us confirm that they indeed represent the magnetisation by plotting them once again:
[ ]:
for mh in mth_custom_read:
    plot(field_values, mh)
show()
Since we have our data saved in the .slt file, we can actually plot it using the built-in plotting functions (available for various methods - see the documentation; they all start with “plot” and have plenty of customization parameters).
[ ]:
NdCo.plot_magnetisation("bas3")
When you invoke plotting functions for specific methods, you do not need to provide a suffix such as “_magnetisation”; the program will handle it for you.
If you need to, you can even create your own custom groups with datasets (in the form of NumPy ndarrays) in the file and use them in your scripts.
[ ]:
one_to_ten = linspace(1, 10, 10)
NdCo["my_custom_group", "one_to_ten_dataset", "My description of the dataset",
"My description of the whole group"] = one_to_ten
NdCo
The last two strings, giving descriptions of the dataset and the whole group, are optional. Later you can add more datasets to the existing group or create datasets without a group (they have to be at least 1-dimensional ArrayLike):
[ ]:
NdCo["my_custom_group", "123_dataset"] = [1,2,3]
NdCo["my_dataset_without_a_group"] = [1]
NdCo
If you now try to re-run the previous calculation, you should see a SlothPyError due to the already existing group name (SlothPy prevents you from accidentally overwriting data).
[ ]:
%%time
mth = NdCo.calculate_magnetisation("bas3", fields_mth, 6, temperatures_mth, 32, slt="bas3")
We can use delete_group_dataset to manually remove datasets/groups from the .slt file.
[ ]:
NdCo.delete_group_dataset("my_dataset_without_a_group")
NdCo.delete_group_dataset("my_custom_group", "123_dataset")
NdCo.delete_group_dataset("bas3_magnetisation")
NdCo
As you should already have noticed (when reading the docstring of the calculate_magnetisation method), SlothPy gives you full control over the number of CPUs you want to assign to your calculation and the number of threads to be used. Computationally demanding methods are parallelized using a number of separate processes - the number of processes that will be used is (number of CPUs) // (number of threads). Additionally, in the Note section of each method, the user is informed over what quantity the job will be parallelized. In the case of calculate_magnetisation, the work is distributed over field values (here 50), so, using for example 10 parallel processes, each will compute the magnetisation for 5 values of the field. By default, SlothPy uses all available logical cores with 1 thread for the linear algebra libraries.
For jobs with a very large number of points to parallelize over, you should benefit from a greater number of processes (not counting the time of Python's multiprocessing setup for huge amounts of data). On the other hand, with increasing matrix size it becomes beneficial to use more threads for operations such as diagonalization. Choosing good settings for very demanding calculations is not a trivial task, which is why SlothPy provides the autotune module to do it for you. It tests all meaningful setups and gives you a time estimate for each of them. This takes some time (because it actually performs a part of the calculations to benchmark them), so it is advised to use it for jobs that will take hours. To demonstrate it with the very small matrices provided in our file (they are 364 x 364 - that is how many SO states there are), we will run two examples using all available CPUs on your machine (if you want to leave some free, change number_cpu = 0 to your desired number).
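A purely illustrative sketch of the process/thread split described above (the numbers below are hypothetical, not a recommendation):
[ ]:
number_cpu = 12               # CPUs assigned to the job (0 means: use all available cores)
number_threads = 3            # threads per process for the linear algebra libraries
number_cpu // number_threads  # -> 4 parallel processes; with 50 field values each computes ~12-13 of them
Now let us define the field ranges for the two demonstration runs: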
[ ]:
fields_process = linspace(0.0001, 7, 60)
fields_threads = linspace(0.0001, 7, 3)
[ ]:
%%time
mth = NdCo.calculate_magnetisation("bas3", fields_process, 6, temperatures_mth, 364, number_cpu = 0, autotune=True)
[ ]:
%%time
mth = NdCo.calculate_magnetisation("bas3", fields_threads, 6, temperatures_mth, 364, number_cpu = 0, autotune=True)
In the first case the autotune module should choose (depending on your hardware) more processes and fewer threads than in the second one, where we parallelize only over 3 field points. The time estimates include only the pure calculation steps and, in our tests for a variety of different methods, they should agree with the overall execution time to within 15-20% (maximal error). After autotuning, you can run the calculation with the chosen settings manually to see how much time it takes compared to our estimate. Can you choose better settings by yourself?
[ ]:
num_of_cpu = #fill here the number you were autotuning for
num_of_threads = #fill here the number of threads chosen by the autotune module (for fields_process and _threads)
[ ]:
%%time
mth = NdCo.calculate_magnetisation("bas3", fields_process, 6, temperatures_mth, 364, num_of_cpu, num_of_threads)
[ ]:
%%time
mth = NdCo.calculate_magnetisation("bas3", fields_threads, 6, temperatures_mth, 364, num_of_cpu, num_of_threads)
The necessity of the autotune module becomes apparent for matrices with more than 500-1000 (or even 2000+) states, when calculations with certain settings (numbers of field values and grids) can take many hours or even days. It also depends on your hardware, e.g. how many possibilities there are to check. While writing this tutorial, I am testing it on a CPU with 128 logical (64 physical) cores, so there are many possible combinations of threads and processes - this makes the choice harder and more time-consuming for the module, but it is still better than trying all the possibilities manually.
In the following part of the tutorial, we will give a glimpse into more methods, presenting their use for our Nd-based compound. Make sure to check the documentation of each presented method - it always contains a comprehensive description of all the parameters used. We begin by calculating the magnetic susceptibility (the derivative of magnetisation with respect to the magnetic field) - in the form of its product with temperature - at 1000, 2000, 3000, and 5000 Oe.
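Conceptually, this quantity comes from numerically differentiating M with respect to H. As a rough, purely illustrative sketch (this is not what calculate_susceptibility does internally - it uses a finite-difference stencil around each requested field), one could approximate dM/dH from a dense M(H) data set such as the mth array computed on the fields_mth grid at the beginning of the tutorial:
[ ]:
from numpy import gradient
# assumes mth has shape (temperatures, fields) and was computed on the dense fields_mth grid
chi_approx = gradient(mth, fields_mth, axis=1)        # numerical dM/dH for every temperature
chit_approx = chi_approx * temperatures_mth[:, None]  # the chi*T product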
[ ]:
temperatures = linspace(1, 300, 300)
fields = [0.1, 0.2, 0.3, 0.5]
[ ]:
%%time
chitht = NdCo.calculate_susceptibility("bas3", temperatures, fields, number_of_points=2, delta_h=0.0001,
states_cutoff=256, number_cpu=0, number_threads=1, T=True, slt="your_name")
[ ]:
NdCo.plot_susceptibility("your_name")
You can calculate the Helmholtz free energy or the internal energy in an applied magnetic field (here, instead of powder averaging, we calculate with the field applied along the “z” direction):
[ ]:
from numpy import linspace
temperatures = linspace(1, 300, 10)
fields = linspace(0.1, 1, 5)
[ ]:
%%time
eth = NdCo.calculate_energy("bas3", fields, [[0., 0., 1., 1.]], temperatures, "helmholtz", 364, number_cpu=0,
number_threads=1, slt="your_name")
[ ]:
NdCo.plot_energy("your_name")
You can calculate the Zeeman splitting of the ground $ ^{4}I_{9/2} $ multiplet for various directions (here x, y, z) or take a powder average:
[ ]:
%%time
zeeman = NdCo.calculate_zeeman_splitting("bas3", 10, fields, [[1,0,0],[0,1,0],[0,0,1]], states_cutoff=364,
number_cpu=0, number_threads=1, average=False, slt="your_name")
[ ]:
NdCo.plot_zeeman("your_name")
SlothPy allows you to calculate directional data of the above quantities in the form of 3D plots over spherical angles. This type of calculation is quite demanding and heavy on memory usage. See the Note sections of the following methods for examples of how to estimate the memory needed to handle the big resulting and intermediate arrays! (Example: for a calculation with 100 field values 1-10 T, 300 temperatures 1-300 K, and spherical_grid = 60, the resulting array alone takes 3 * 100 * 300 * 2 * 60 * 60 * 8 bytes = 5.184 GB.)
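A tiny helper function (hypothetical, just reproducing the arithmetic quoted above) can be handy to estimate the size of the resulting array before you start:
[ ]:
def estimate_3d_result_gb(n_fields, n_temperatures, spherical_grid):
    # 3 components * fields * temperatures * 2 * grid**2 values, 8 bytes each (float64)
    return 3 * n_fields * n_temperatures * 2 * spherical_grid**2 * 8 / 1e9

estimate_3d_result_gb(100, 300, 60)  # -> 5.184 GB, matching the example above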
[ ]:
%%time
r = NdCo.calculate_energy_3d("bas3", fields, "mesh", 35, temperatures, "helmholtz", 128, slt="your_name")
[ ]:
%%time
r = NdCo.calculate_magnetisation_3d("bas3", fields, "mesh", 35, temperatures, 128, slt="your_name")
[ ]:
%%time
r = NdCo.calculate_susceptibility_3d("bas3", temperatures, fields, "mesh", 35, 2, states_cutoff=128, slt="your_name")
# Careful! This is the most demanding calculation: it has to perform the magnetisation calculation
# (2*number_of_points+1) times for the numerical differentiation using the finite difference method with a custom stencil.
At this point, you may need to adjust (lower or raise) the states_cutoff-s, the density of the spherical grid, or the number of field values, depending on the capability of your hardware (CPU and memory). Also, here the autotune module should always choose 1 thread due to the large number of points we parallelize over, (number of fields) * 2 * spherical_grid**2; since your matrices are small (at most 364 x 364), it cannot prioritize multithreading for them.
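For reference, the number of points parallelized over in the 3D runs above can be checked with the variables defined earlier (an illustrative one-liner):
[ ]:
len(fields) * 2 * 35**2  # 5 field values * 2 * 35**2 spherical grid points = 12250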
After you succeed, we can plot the results for particular temperature and field values with the plot_3d method, which supports energy, magnetisation, and susceptibility plotting (see the datatype keyword):
[ ]:
NdCo.plot_3d("your_name", "helmholtz_energy", 10, 30)
# Options: "helmholtz_energy", "internal_energy", "magnetisation", "chi", "chit"
Probably the best way to examine this type of data, though, is to scan it using an interactive 3D plot, where you can change fields and temperatures with sliders and see the changes in real time!
[ ]:
NdCo.interactive_plot_3d("your_name", "helmholtz_energy")
# Options: "helmholtz_energy", "internal_energy", "magnetisation", "chi", "chit"
After you manually find all the interesting “phase” transitions, you can even prepare .gif animations varying temperature or field strength while keeping the other constant.
[ ]:
NdCo.animate_3d("your_name", "helmholtz_energy", "temperature", "animation", i_start=0, i_end=30, i_constant=2,
fps=6, dpi=200)
The animation should be saved to your current directory.
We will end this basic tutorial by just mentioning the other methods. More advanced combinations, resulting e.g. in studies of relaxation dynamics of molecular magnets, will be addressed in more specialized tutorials.
In order to calculate pseudo-g-tensors for the doublet states within the ground $ ^{4}I_{9/2} $ multiplet (10 states - 5 doublets), you can run:
[ ]:
g_tensors, magnetic_axes = NdCo.calculate_g_tensor_and_axes_doublet("bas3", [0,1,2,3,4])
[ ]:
g_tensors, magnetic_axes
The first entry of the tuple contains the g-tensor components x, y, and z for the doublets, and magnetic_axes are rotation matrices from the initial coordinate system to the main magnetic axes of each doublet (the coordinates of the main axes X, Y, and Z in the initial x, y, z frame are the columns of these matrices). We can use those matrices to rotate angular momenta in almost all the following methods and express the quantities in the reference frame of the chosen doublet's magnetic axes. Note that you need to use the inverse rotation (the transpose of an orthogonal matrix) to express everything in the new reference frame! (See also the Exporting Module for an easy way to add the main magnetic axes to your .mol2/.xyz files with molecular coordinates.)
[ ]:
your_rotation = magnetic_axes[0].T # Inverse rotation for the ground doublet
your_rotation
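As a quick sanity check (a minimal sketch, assuming magnetic_axes holds proper 3x3 rotation matrices as described above), the transpose should indeed act as the inverse:
[ ]:
from numpy import allclose, eye
allclose(magnetic_axes[0] @ magnetic_axes[0].T, eye(3))  # True for an orthogonal (rotation) matrix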
As an example, we calculate the susceptibility (Van Vleck) tensor using true numerical differentiation in the reference frame of the ground doublet state:
[ ]:
temperatures = linspace(1, 300, 300)
fields = [0.1, 0.2]
[ ]:
NdCo.calculate_susceptibility_tensor("bas3", temperatures, fields, number_of_points=3, delta_h=0.0001, states_cutoff=364,
number_cpu=0, number_threads=1, rotation=your_rotation)
Now we will do a little demonstration considering our 10-state ground manifold. Firstly, let us take a look at the SOC energies of the states:
[ ]:
NdCo.soc_energies_cm_1("bas3", 10)
They come in doublets, as expected for a Kramers ion. We can then obtain Crystal Field Parameters (CFPs) for an S = 9/2 pseudo-spin (in the form of J - the total angular momentum) for the SOC matrix, taking as the “z” quantization axis the main magnetic axis of the ground doublet (use your_rotation). Note that the order of the CFPs cannot exceed 2S here, and because the SOC Hamiltonian (without magnetic fields) is a time-even operator, we only need even orders. We will use real parameters.
[ ]:
NdCo.soc_crystal_field_parameters("bas3", 0, 9, order=9, pseudo_kind = "magnetic",
even_order=True, rotation=your_rotation, slt="your_name")
For comparison, let us get the initial SOC matrix in the same pseudo-spin Jz basis:
[ ]:
soc_matrix = NdCo.soc_zeem_in_z_angular_magnetic_momentum_basis("bas3", 0, 9, "soc", "magnetic",
rotation=your_rotation)
soc_matrix
Verify it by computing its eigenvalues, which should be the same as the SOC energies from before.
[ ]:
from numpy.linalg import eigvalsh
energies = eigvalsh(soc_matrix) * 219474.6 # Convert it from Hartree to cm-1
energies
We can rebuild the whole matrix from the saved CFP (ITO) parameters and verify that it is the same as soc_matrix (giving the same eigenvalues):
[ ]:
soc_matrix_from_cfp = NdCo.matrix_from_ito("your_name_soc_ito_decomposition", False)
energies = eigvalsh(soc_matrix_from_cfp) * 219474.6 # Convert it from Hartree to cm-1
energies
[ ]:
soc_matrix - soc_matrix_from_cfp
If you need numerical precision for the matrix recreation, you should use odd orders of ITOs (Irreducible Tensor Operators) as well:
[ ]:
NdCo.soc_crystal_field_parameters("bas3", 0, 9, order=9, pseudo_kind = "magnetic",
even_order=False, rotation=your_rotation, slt="your_name_odd")
[ ]:
soc_matrix_from_cfp_odd = NdCo.matrix_from_ito("your_name_odd_soc_ito_decomposition", False)
energies = eigvalsh(soc_matrix_from_cfp_odd) * 219474.6 # Convert it from Hartree to cm-1
energies
[ ]:
soc_matrix - soc_matrix_from_cfp_odd
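A compact way to confirm that the full-order reconstruction matches the original matrix (a small sketch; expected to be True when all orders are included):
[ ]:
from numpy import allclose
allclose(soc_matrix, soc_matrix_from_cfp_odd)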
Using all orders is necessary e.g. when dealing with Zeeman Hamiltonians that contain the interaction with a magnetic field on top of the energy of the SOC states. We can follow the same procedure for a full Zeeman matrix originating from the interaction with a magnetic field in the “y” direction (2 T):
[ ]:
NdCo.zeeman_matrix_ito_decpomosition("bas3", 0, 9, 2, [0., 1., 0.], order=9, pseudo_kind = "magnetic",
rotation=your_rotation, slt="your_name")
[ ]:
zeeman_matrix = NdCo.soc_zeem_in_z_angular_magnetic_momentum_basis("bas3", 0, 9, "zeeman", "magnetic", field=2.,
orientation= [0., 1., 0.],
rotation=your_rotation)
zeeman_matrix
[ ]:
zeeman_matrix_from_cfp = NdCo.matrix_from_ito("your_name_zeeman_ito_decomposition", False)
energies = eigvalsh(zeeman_matrix_from_cfp) * 219474.6 # Convert it from Hartree to cm-1
energies # Zeeman splitting
[ ]:
energies = eigvalsh(zeeman_matrix) * 219474.6 # Convert it from Hartree to cm-1
energies # Zeeman splitting
[ ]:
zeeman_matrix - zeeman_matrix_from_cfp
Feel free to experiment with two available pseudo-spin bases (total angular or magnetic).
We will end this short introduction to the SlothPy software by showing how to obtain a percentage decomposition of a SOC/Zeeman matrix in the pseudo-spin basis.
[ ]:
NdCo.matrix_decomposition_in_z_pseudo_spin_basis("bas3", "soc", "magnetic", 0, 9, rotation=your_rotation)
[ ]:
NdCo.matrix_decomposition_in_z_pseudo_spin_basis("bas3", "zeeman", "magnetic", 0, 9, rotation=your_rotation,
orientation=[0., 1., 0.], field=2.)
Here, the resulting matrix contains the % weights of the pseudo-spin states (columns - from -S to S) in each state from the list (rows - here the 10 states from 0 to 9), where the pseudo-spin number S = number_of_states / 2 - 1/2 = 10/2 - 1/2 = 9/2.
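Since the entries are percentage weights, each row should sum to roughly 100 (a small sketch, assuming the method returns the matrix described above as a NumPy array):
[ ]:
decomposition = NdCo.matrix_decomposition_in_z_pseudo_spin_basis("bas3", "soc", "magnetic", 0, 9,
                                                                 rotation=your_rotation)
decomposition.sum(axis=1)  # each of the 10 rows should add up to ~100 %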
To explore more methods check the Reference Manual and more specialized, upcoming tutorials.
MTZ