Nudged Elastic Band¶
janus-core
contains various machine-learnt interatomic potentials (MLIPs), including MACE-based models (MACE-MP, MACE-OFF), CHGNet, SevenNet, and more; the full list is available at https://github.com/stfc/janus-core.
Others will be added as their utility is proven beyond a specific material.
Aim¶
We showcase the use of NEB with janus and MLIPs by studying ethanol oxidation reactions catalysed by water molecules. The full study was carried out in this paper: Chemical Physics Letters 363 (2002) 80–86.
Set up environment (optional)¶
These steps are required for Google Colab, but may work on other systems too:
[ ]:
# import locale
# locale.getpreferredencoding = lambda: "UTF-8"
# ! pip uninstall torch torchaudio torchvision numpy -y
# ! uv pip install janus-core[all] data-tutorials torch==2.5.1 --system
# get_ipython().kernel.do_shutdown(restart=True)
Use data_tutorials
to get the data required for this tutorial:
[ ]:
from data_tutorials.data import get_data

get_data(
    url="https://raw.githubusercontent.com/stfc/janus-core/main/docs/source/tutorials/data/",
    filename=[
        "ethanol_reactants.extxyz",
        "ethanol_products.extxyz",
        "ethanol_reactants_1water.extxyz",
        "ethanol_products_1water.extxyz",
        "ethanol_reactants_2water.extxyz",
        "ethanol_products_2water.extxyz",
    ],
    folder="../data",
)
Command-line help and options¶
Once janus-core
is installed, the janus
CLI command should be available:
[1]:
! janus neb --help
Usage: janus neb [OPTIONS]
Run Nudged Elastic Band method.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ --init-struct PATH [default: None] │
│ --final-struct PATH [default: None] │
│ --band-structs PATH [default: None] │
│ --neb-class TEXT Name of ASE NEB │
│ class to use. │
│ [default: NEB] │
│ --n-images INTEGER Number of images │
│ to use in NEB. │
│ [default: 15] │
│ --write-band --no-write-band Whether to write │
│ out all band │
│ images after │
│ optimization. │
│ [default: │
│ no-write-band] │
│ --write-kwargs DICT Keyword arguments │
│ to pass to │
│ ase.io.write when │
│ saving results. │
│ Must be passed as │
│ a dictionary │
│ wrapped in quotes, │
│ e.g. "{'key': │
│ value}". │
│ [default: None] │
│ --neb-kwargs DICT Keyword arguments │
│ to pass to │
│ neb_method. Must │
│ be passed as a │
│ dictionary wrapped │
│ in quotes, e.g. │
│ "{'key': value}". │
│ [default: None] │
│ --interpolator [ase|pymatgen] Choice of │
│ interpolation │
│ strategy. │
│ [default: ase] │
│ --interpolator-k… DICT Keyword arguments │
│ to pass to │
│ interpolator. Must │
│ be passed as a │
│ dictionary wrapped │
│ in quotes, e.g. │
│ "{'key': value}". │
│ [default: None] │
│ --optimizer TEXT Name of ASE NEB │
│ optimizer to use. │
│ [default: │
│ NEBOptimizer] │
│ --fmax FLOAT Maximum force for │
│ NEB optimizer. │
│ [default: 0.1] │
│ --steps INTEGER Maximum number of │
│ steps for │
│ optimization. │
│ [default: 100] │
│ --optimizer-kwar… DICT Keyword arguments │
│ to pass to │
│ neb_optimizer. │
│ Must be passed as │
│ a dictionary │
│ wrapped in quotes, │
│ e.g. "{'key': │
│ value}". │
│ [default: None] │
│ --plot-band --no-plot-band Whether to plot │
│ and save NEB band. │
│ [default: │
│ no-plot-band] │
│ --minimize --no-minimize Whether to │
│ minimize initial │
│ and final │
│ structures. │
│ [default: │
│ no-minimize] │
│ --minimize-kwargs DICT Keyword arguments │
│ to pass to │
│ optimizer. Must be │
│ passed as a │
│ dictionary wrapped │
│ in quotes, e.g. │
│ "{'key': value}". │
│ [default: None] │
│ --arch [mace|mace_mp|ma MLIP architecture │
│ ce_off|m3gnet|ch to use for │
│ gnet|alignn|seve calculations. │
│ nnet|nequip|dpa3 [default: mace_mp] │
│ |orb|mattersim] │
│ --device [cpu|cuda|mps|xp Device to run │
│ u] calculations on. │
│ [default: cpu] │
│ --model-path TEXT Path to MLIP │
│ model. │
│ [default: None] │
│ --read-kwargs DICT Keyword arguments │
│ to pass to │
│ ase.io.read. Must │
│ be passed as a │
│ dictionary wrapped │
│ in quotes, e.g. │
│ "{'key': value}". │
│ By default, │
│ read_kwargs['inde… │
│ = -1, so only the │
│ last structure is │
│ read. │
│ [default: None] │
│ --calc-kwargs DICT Keyword arguments │
│ to pass to │
│ selected │
│ calculator. Must │
│ be passed as a │
│ dictionary wrapped │
│ in quotes, e.g. │
│ "{'key': value}". │
│ For the default │
│ architecture │
│ ('mace_mp'), │
│ "{'model': │
│ 'small'}" is set │
│ unless │
│ overwritten. │
│ [default: None] │
│ --file-prefix PATH Prefix for output │
│ files, including │
│ directories. │
│ Default directory │
│ is │
│ ./janus_results, │
│ and default │
│ filename prefix is │
│ inferred from the │
│ input stucture │
│ filename. │
│ [default: None] │
│ --log PATH Path to save logs │
│ to. Default is │
│ inferred from │
│ `file_prefix` │
│ [default: None] │
│ --tracker --no-tracker Whether to save │
│ carbon emissions │
│ of calculation │
│ [default: tracker] │
│ --summary PATH Path to save │
│ summary of inputs, │
│ start/end time, │
│ and carbon │
│ emissions. Default │
│ is inferred from │
│ `file_prefix`. │
│ [default: None] │
│ --config TEXT Configuration │
│ file. │
│ --help Show this message │
│ and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
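The `--interpolator` option above controls how the initial band is generated; the default `ase` interpolator places the interior images along a straight line in Cartesian space between the two endpoints. The idea can be sketched in plain Python (illustrative only; this is not the janus-core or ASE implementation):

```python
def interpolate_band(start, end, n_interior):
    """Linearly interpolate n_interior images between two geometries.

    start, end: lists of [x, y, z] positions with the same atom ordering.
    Returns the full band: both endpoints plus the interior images.
    """
    band = []
    for k in range(n_interior + 2):
        t = k / (n_interior + 1)  # fractional position along the path
        image = [
            [a + t * (b - a) for a, b in zip(p0, p1)]
            for p0, p1 in zip(start, end)
        ]
        band.append(image)
    return band

# Example: one interior image halfway between two H2 bond lengths
band = interpolate_band([[0, 0, 0], [0, 0, 0.7]], [[0, 0, 0], [0, 0, 1.1]], 1)
```

NEB then relaxes the interior images under the MLIP forces, with spring forces keeping them distributed along the path.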
Run a simple Nudged Elastic Band¶
0 water molecules case¶
[2]:
%%writefile neb.yml
init_struct: ../data/ethanol_reactants.extxyz
final_struct: ../data/ethanol_products.extxyz
n_images: 11
device: cpu
arch: mace_mp
minimize: True
plot_band: True
write_band: True
calc_kwargs:
  dispersion: True
  model: medium-omat-0
tracker: False
Writing neb.yml
Visualise the inputs:
[3]:
from ase.io import read
from weas_widget import WeasWidget
r = read("../data/ethanol_reactants.extxyz")
p = read("../data/ethanol_products.extxyz")
v = WeasWidget()
v.from_ase([r,p])
v.avr.model_style = 1
v.avr.show_hydrogen_bonds = True
v
[3]:
[4]:
!janus neb --config neb.yml
/Users/elliottkasoar/Documents/PSDI/janus-core/.venv/lib/python3.12/site-packages/e3nn/o3/_wigner.py:10: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
_Jd, _W3j_flat, _W3j_indices = torch.load(os.path.join(os.path.dirname(__file__), 'constants.pt'))
cuequivariance or cuequivariance_torch is not available. Cuequivariance acceleration will be disabled.
Using medium OMAT-0 model under Academic Software License (ASL) license, see https://github.com/gabor1/ASL
To use this model you accept the terms of the license.
Using Materials Project MACE for MACECalculator with /Users/elliottkasoar/.cache/mace/maceomat0mediummodel
Using float64 for MACECalculator, which is slower but more accurate. Recommended for geometry optimization.
/Users/elliottkasoar/Documents/PSDI/janus-core/.venv/lib/python3.12/site-packages/mace/calculators/mace.py:139: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
torch.load(f=model_path, map_location=device)
Using TorchDFTD3Calculator for D3 dispersion corrections
Step Time Energy fmax
LBFGS: 0 16:04:37 -45.882336 0.000646
Step Time Energy fmax
LBFGS: 0 16:04:37 -44.967101 0.000453
Step Time fmax
NEBOptimizer[ode]: 0 16:04:39 4.9176
NEBOptimizer[ode]: 1 16:04:40 4.3168
NEBOptimizer[ode]: 2 16:04:40 3.4105
NEBOptimizer[ode]: 3 16:04:41 2.5245
NEBOptimizer[ode]: 4 16:04:42 2.4167
NEBOptimizer[ode]: 5 16:04:42 1.9228
NEBOptimizer[ode]: 6 16:04:43 1.8611
NEBOptimizer[ode]: 7 16:04:43 1.7459
NEBOptimizer[ode]: 8 16:04:44 1.3831
NEBOptimizer[ode]: 9 16:04:45 0.7608
NEBOptimizer[ode]: 10 16:04:46 0.6523
NEBOptimizer[ode]: 11 16:04:46 0.6026
NEBOptimizer[ode]: 12 16:04:47 0.5555
NEBOptimizer[ode]: 13 16:04:48 0.4538
NEBOptimizer[ode]: 14 16:04:49 0.4449
NEBOptimizer[ode]: 15 16:04:49 0.4384
NEBOptimizer[ode]: 16 16:04:50 0.4322
NEBOptimizer[ode]: 17 16:04:51 0.4093
NEBOptimizer[ode]: 18 16:04:51 0.5584
NEBOptimizer[ode]: 19 16:04:53 0.3333
NEBOptimizer[ode]: 20 16:04:53 0.3303
NEBOptimizer[ode]: 21 16:04:54 0.3266
NEBOptimizer[ode]: 22 16:04:54 0.3129
NEBOptimizer[ode]: 23 16:04:55 0.2650
NEBOptimizer[ode]: 24 16:04:56 0.2537
NEBOptimizer[ode]: 25 16:04:57 0.2517
NEBOptimizer[ode]: 26 16:04:57 0.2496
NEBOptimizer[ode]: 27 16:04:58 0.2474
NEBOptimizer[ode]: 28 16:04:58 0.2385
NEBOptimizer[ode]: 29 16:04:59 0.2017
NEBOptimizer[ode]: 30 16:05:00 0.1961
NEBOptimizer[ode]: 31 16:05:00 0.1940
NEBOptimizer[ode]: 32 16:05:01 0.1923
NEBOptimizer[ode]: 33 16:05:01 0.1895
NEBOptimizer[ode]: 34 16:05:02 0.1786
NEBOptimizer[ode]: 35 16:05:02 0.1368
NEBOptimizer[ode]: 36 16:05:03 0.1698
NEBOptimizer[ode]: 37 16:05:04 0.1318
NEBOptimizer[ode]: 38 16:05:04 0.1307
NEBOptimizer[ode]: 39 16:05:05 0.1260
NEBOptimizer[ode]: 40 16:05:05 0.1096
NEBOptimizer[ode]: 41 16:05:06 0.1054
NEBOptimizer[ode]: 42 16:05:07 0.1355
NEBOptimizer[ode]: 43 16:05:07 0.1019
NEBOptimizer[ode]: 44 16:05:08 0.1012
NEBOptimizer[ode]: 45 16:05:08 0.1004
NEBOptimizer[ode]: 46 16:05:09 0.0971
[5]:
!ls janus_results/
ethanol_products-final-opt.extxyz ethanol_reactants-neb-plot.svg
ethanol_reactants-init-opt.extxyz ethanol_reactants-neb-results.dat
ethanol_reactants-neb-band.extxyz ethanol_reactants-neb-summary.yml
ethanol_reactants-neb-log.yml
[6]:
from IPython.display import SVG, display
display(SVG("janus_results/ethanol_reactants-neb-plot.svg"))
[7]:
nebp = read("janus_results/ethanol_reactants-neb-band.extxyz", index=":")
w = WeasWidget()
w.from_ase(nebp)
w.avr.model_style = 1
w.avr.show_hydrogen_bonds = True
w
[7]:
Is the barrier realistic? Compare with the numbers from the paper.
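One way to extract the barrier is to read the optimised band back in and take the difference between the highest-energy image and the first image. A small helper (the file-reading part is a hypothetical usage sketch, assuming ASE can return the stored energies via `get_potential_energy()`):

```python
def barrier_from_energies(energies):
    """Forward barrier: highest image energy relative to the first image."""
    return max(energies) - energies[0]

# Hypothetical usage on the band file written above:
# from ase.io import read
# images = read("janus_results/ethanol_reactants-neb-band.extxyz", index=":")
# energies = [image.get_potential_energy() for image in images]
# print(f"Barrier: {barrier_from_energies(energies):.3f} eV")
```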
1 water molecule¶
We can reuse the previous config and just overwrite the initial and final structures.
[8]:
! janus neb --config neb.yml --init-struct ../data/ethanol_reactants_1water.extxyz --final-struct ../data/ethanol_products_1water.extxyz
/Users/elliottkasoar/Documents/PSDI/janus-core/.venv/lib/python3.12/site-packages/e3nn/o3/_wigner.py:10: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
_Jd, _W3j_flat, _W3j_indices = torch.load(os.path.join(os.path.dirname(__file__), 'constants.pt'))
cuequivariance or cuequivariance_torch is not available. Cuequivariance acceleration will be disabled.
Using medium OMAT-0 model under Academic Software License (ASL) license, see https://github.com/gabor1/ASL
To use this model you accept the terms of the license.
Using Materials Project MACE for MACECalculator with /Users/elliottkasoar/.cache/mace/maceomat0mediummodel
Using float64 for MACECalculator, which is slower but more accurate. Recommended for geometry optimization.
/Users/elliottkasoar/Documents/PSDI/janus-core/.venv/lib/python3.12/site-packages/mace/calculators/mace.py:139: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
torch.load(f=model_path, map_location=device)
Using TorchDFTD3Calculator for D3 dispersion corrections
Step Time Energy fmax
LBFGS: 0 16:05:12 -60.262994 1.061688
LBFGS: 1 16:05:12 -60.291126 0.393867
LBFGS: 2 16:05:12 -60.299581 0.373133
LBFGS: 3 16:05:12 -60.319127 0.416926
LBFGS: 4 16:05:12 -60.333978 0.497735
LBFGS: 5 16:05:12 -60.348770 0.343877
LBFGS: 6 16:05:12 -60.357433 0.275794
LBFGS: 7 16:05:12 -60.362061 0.179321
LBFGS: 8 16:05:13 -60.366442 0.222576
LBFGS: 9 16:05:13 -60.370525 0.223311
LBFGS: 10 16:05:13 -60.374084 0.258707
LBFGS: 11 16:05:13 -60.377990 0.232420
LBFGS: 12 16:05:13 -60.382620 0.250770
LBFGS: 13 16:05:13 -60.387675 0.246213
LBFGS: 14 16:05:13 -60.391150 0.405453
LBFGS: 15 16:05:13 -60.396162 0.230709
LBFGS: 16 16:05:13 -60.400317 0.195376
LBFGS: 17 16:05:13 -60.409343 0.265397
LBFGS: 18 16:05:13 -60.414254 0.239674
LBFGS: 19 16:05:13 -60.421410 0.292684
LBFGS: 20 16:05:13 -60.427253 0.264662
LBFGS: 21 16:05:13 -60.430653 0.244037
LBFGS: 22 16:05:13 -60.434123 0.159676
LBFGS: 23 16:05:13 -60.436048 0.133209
LBFGS: 24 16:05:14 -60.438018 0.117606
LBFGS: 25 16:05:14 -60.439269 0.093829
Step Time Energy fmax
LBFGS: 0 16:05:14 -54.782788 4.967930
LBFGS: 1 16:05:14 -55.842124 4.419715
LBFGS: 2 16:05:14 -57.563214 3.842097
LBFGS: 3 16:05:14 -58.156407 3.085929
LBFGS: 4 16:05:14 -58.603480 1.822973
LBFGS: 5 16:05:14 -58.754786 1.667331
LBFGS: 6 16:05:14 -58.893874 1.111416
LBFGS: 7 16:05:14 -58.986243 0.764498
LBFGS: 8 16:05:14 -59.033748 0.684629
LBFGS: 9 16:05:14 -59.089144 0.741023
LBFGS: 10 16:05:15 -59.127877 0.668211
LBFGS: 11 16:05:15 -59.155500 0.531275
LBFGS: 12 16:05:15 -59.176890 0.480577
LBFGS: 13 16:05:15 -59.191363 0.377988
LBFGS: 14 16:05:15 -59.201482 0.238313
LBFGS: 15 16:05:15 -59.206878 0.235811
LBFGS: 16 16:05:15 -59.212756 0.217196
LBFGS: 17 16:05:15 -59.220000 0.247323
LBFGS: 18 16:05:15 -59.227507 0.234202
LBFGS: 19 16:05:15 -59.235758 0.268516
LBFGS: 20 16:05:15 -59.246952 0.323743
LBFGS: 21 16:05:15 -59.262551 0.338733
LBFGS: 22 16:05:15 -59.278414 0.269254
LBFGS: 23 16:05:15 -59.287097 0.199524
LBFGS: 24 16:05:15 -59.291332 0.166812
LBFGS: 25 16:05:16 -59.295380 0.189973
LBFGS: 26 16:05:16 -59.298914 0.196096
LBFGS: 27 16:05:16 -59.301667 0.108823
LBFGS: 28 16:05:16 -59.302967 0.094901
Step Time fmax
NEBOptimizer[ode]: 0 16:05:18 3.4130
NEBOptimizer[ode]: 1 16:05:19 2.9855
NEBOptimizer[ode]: 2 16:05:20 2.6281
NEBOptimizer[ode]: 3 16:05:20 2.3281
NEBOptimizer[ode]: 4 16:05:21 3.0049
NEBOptimizer[ode]: 5 16:05:22 1.7608
NEBOptimizer[ode]: 6 16:05:22 1.6214
NEBOptimizer[ode]: 7 16:05:23 1.5002
NEBOptimizer[ode]: 8 16:05:24 1.3879
NEBOptimizer[ode]: 9 16:05:25 0.9764
NEBOptimizer[ode]: 10 16:05:26 0.9044
NEBOptimizer[ode]: 11 16:05:27 0.8428
NEBOptimizer[ode]: 12 16:05:27 0.7601
NEBOptimizer[ode]: 13 16:05:28 0.5953
NEBOptimizer[ode]: 14 16:05:29 0.5891
NEBOptimizer[ode]: 15 16:05:30 0.5835
NEBOptimizer[ode]: 16 16:05:30 0.5727
NEBOptimizer[ode]: 17 16:05:31 0.5219
NEBOptimizer[ode]: 18 16:05:32 0.5117
NEBOptimizer[ode]: 19 16:05:33 0.5031
NEBOptimizer[ode]: 20 16:05:33 0.4907
NEBOptimizer[ode]: 21 16:05:34 0.4469
NEBOptimizer[ode]: 22 16:05:35 0.4386
NEBOptimizer[ode]: 23 16:05:36 0.4306
NEBOptimizer[ode]: 24 16:05:36 0.4047
NEBOptimizer[ode]: 25 16:05:37 0.2972
NEBOptimizer[ode]: 26 16:05:38 0.2663
NEBOptimizer[ode]: 27 16:05:39 0.2604
NEBOptimizer[ode]: 28 16:05:39 0.2545
NEBOptimizer[ode]: 29 16:05:40 0.2301
NEBOptimizer[ode]: 30 16:05:41 0.2192
NEBOptimizer[ode]: 31 16:05:42 0.2135
NEBOptimizer[ode]: 32 16:05:43 0.2107
NEBOptimizer[ode]: 33 16:05:44 0.2016
NEBOptimizer[ode]: 34 16:05:44 0.3006
NEBOptimizer[ode]: 35 16:05:45 0.1776
NEBOptimizer[ode]: 36 16:05:46 0.1772
NEBOptimizer[ode]: 37 16:05:47 0.1772
NEBOptimizer[ode]: 38 16:05:47 0.1807
NEBOptimizer[ode]: 39 16:05:48 0.2123
NEBOptimizer[ode]: 40 16:05:49 0.2047
NEBOptimizer[ode]: 41 16:05:50 0.2074
NEBOptimizer[ode]: 42 16:05:50 0.2087
NEBOptimizer[ode]: 43 16:05:51 0.2132
NEBOptimizer[ode]: 44 16:05:51 0.2231
NEBOptimizer[ode]: 45 16:05:52 0.2673
NEBOptimizer[ode]: 46 16:05:53 0.3758
NEBOptimizer[ode]: 47 16:05:54 0.2659
NEBOptimizer[ode]: 48 16:05:54 0.2672
NEBOptimizer[ode]: 49 16:05:55 0.2681
NEBOptimizer[ode]: 50 16:05:56 0.2720
NEBOptimizer[ode]: 51 16:05:56 0.2898
NEBOptimizer[ode]: 52 16:05:57 0.2937
NEBOptimizer[ode]: 53 16:05:58 0.2950
NEBOptimizer[ode]: 54 16:05:59 0.2961
NEBOptimizer[ode]: 55 16:05:59 0.3007
NEBOptimizer[ode]: 56 16:06:00 0.3202
NEBOptimizer[ode]: 57 16:06:01 0.3223
NEBOptimizer[ode]: 58 16:06:02 0.3238
NEBOptimizer[ode]: 59 16:06:02 0.3253
NEBOptimizer[ode]: 60 16:06:03 0.3318
NEBOptimizer[ode]: 61 16:06:03 0.3599
NEBOptimizer[ode]: 62 16:06:05 0.3654
NEBOptimizer[ode]: 63 16:06:05 0.3619
NEBOptimizer[ode]: 64 16:06:06 0.3628
NEBOptimizer[ode]: 65 16:06:06 0.3630
NEBOptimizer[ode]: 66 16:06:07 0.3641
NEBOptimizer[ode]: 67 16:06:08 0.3694
NEBOptimizer[ode]: 68 16:06:08 0.3937
NEBOptimizer[ode]: 69 16:06:09 0.4009
NEBOptimizer[ode]: 70 16:06:10 0.5961
NEBOptimizer[ode]: 71 16:06:10 0.9070
NEBOptimizer[ode]: 72 16:06:11 0.4085
NEBOptimizer[ode]: 73 16:06:12 0.4090
NEBOptimizer[ode]: 74 16:06:12 0.4077
NEBOptimizer[ode]: 75 16:06:13 0.4063
NEBOptimizer[ode]: 76 16:06:13 0.3985
NEBOptimizer[ode]: 77 16:06:14 0.3334
NEBOptimizer[ode]: 78 16:06:15 0.3197
NEBOptimizer[ode]: 79 16:06:15 0.2972
NEBOptimizer[ode]: 80 16:06:16 0.2915
NEBOptimizer[ode]: 81 16:06:16 0.2757
NEBOptimizer[ode]: 82 16:06:17 0.2748
NEBOptimizer[ode]: 83 16:06:18 0.2705
NEBOptimizer[ode]: 84 16:06:18 0.2564
[9]:
!ls janus_results/
ethanol_products-final-opt.extxyz
ethanol_products_1water-final-opt.extxyz
ethanol_reactants-init-opt.extxyz
ethanol_reactants-neb-band.extxyz
ethanol_reactants-neb-log.yml
ethanol_reactants-neb-plot.svg
ethanol_reactants-neb-results.dat
ethanol_reactants-neb-summary.yml
ethanol_reactants_1water-init-opt.extxyz
ethanol_reactants_1water-neb-band.extxyz
ethanol_reactants_1water-neb-log.yml
ethanol_reactants_1water-neb-plot.svg
ethanol_reactants_1water-neb-results.dat
ethanol_reactants_1water-neb-summary.yml
[10]:
display(SVG("janus_results/ethanol_reactants_1water-neb-plot.svg"))
[11]:
from ase.io import read
from weas_widget import WeasWidget
nebp = read("janus_results/ethanol_reactants_1water-neb-band.extxyz", index=":")
w1 = WeasWidget()
w1.from_ase(nebp)
w1.avr.model_style = 1
w1.avr.show_hydrogen_bonds = True
w1
[11]:
2 water molecules¶
[12]:
! janus neb --config neb.yml --init-struct ../data/ethanol_reactants_2water.extxyz --final-struct ../data/ethanol_products_2water.extxyz
/Users/elliottkasoar/Documents/PSDI/janus-core/.venv/lib/python3.12/site-packages/e3nn/o3/_wigner.py:10: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
_Jd, _W3j_flat, _W3j_indices = torch.load(os.path.join(os.path.dirname(__file__), 'constants.pt'))
cuequivariance or cuequivariance_torch is not available. Cuequivariance acceleration will be disabled.
Using medium OMAT-0 model under Academic Software License (ASL) license, see https://github.com/gabor1/ASL
To use this model you accept the terms of the license.
Using Materials Project MACE for MACECalculator with /Users/elliottkasoar/.cache/mace/maceomat0mediummodel
Using float64 for MACECalculator, which is slower but more accurate. Recommended for geometry optimization.
/Users/elliottkasoar/Documents/PSDI/janus-core/.venv/lib/python3.12/site-packages/mace/calculators/mace.py:139: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
torch.load(f=model_path, map_location=device)
Using TorchDFTD3Calculator for D3 dispersion corrections
Step Time Energy fmax
LBFGS: 0 16:06:21 -74.874702 1.110446
LBFGS: 1 16:06:22 -74.906042 0.425709
LBFGS: 2 16:06:22 -74.916713 0.406215
LBFGS: 3 16:06:22 -74.941726 0.498748
LBFGS: 4 16:06:22 -74.960387 0.514109
LBFGS: 5 16:06:22 -74.979367 0.340381
LBFGS: 6 16:06:22 -74.991632 0.343507
LBFGS: 7 16:06:22 -74.999807 0.272232
LBFGS: 8 16:06:22 -75.009224 0.276522
LBFGS: 9 16:06:22 -75.017780 0.265146
LBFGS: 10 16:06:22 -75.024942 0.296568
LBFGS: 11 16:06:22 -75.032384 0.272457
LBFGS: 12 16:06:22 -75.036593 0.657137
LBFGS: 13 16:06:22 -75.046940 0.263583
LBFGS: 14 16:06:23 -75.053793 0.167268
LBFGS: 15 16:06:23 -75.058922 0.148490
LBFGS: 16 16:06:23 -75.062275 0.224148
LBFGS: 17 16:06:23 -75.065986 0.217836
LBFGS: 18 16:06:23 -75.069368 0.133980
LBFGS: 19 16:06:23 -75.072034 0.123155
LBFGS: 20 16:06:23 -75.075654 0.223730
LBFGS: 21 16:06:23 -75.079793 0.284257
LBFGS: 22 16:06:23 -75.082180 0.312831
LBFGS: 23 16:06:23 -75.085640 0.188553
LBFGS: 24 16:06:23 -75.088211 0.153994
LBFGS: 25 16:06:23 -75.090710 0.146650
LBFGS: 26 16:06:23 -75.092438 0.149503
LBFGS: 27 16:06:23 -75.094338 0.098866
Step Time Energy fmax
LBFGS: 0 16:06:24 -69.878542 6.580558
LBFGS: 1 16:06:24 -70.783687 3.680620
LBFGS: 2 16:06:24 -71.397846 2.663688
LBFGS: 3 16:06:24 -71.974651 2.159790
LBFGS: 4 16:06:24 -72.356200 2.415625
LBFGS: 5 16:06:24 -72.809143 3.771575
LBFGS: 6 16:06:24 -73.018475 1.487707
LBFGS: 7 16:06:24 -73.209614 1.393163
LBFGS: 8 16:06:24 -73.359729 1.350852
LBFGS: 9 16:06:24 -73.541021 0.904684
LBFGS: 10 16:06:24 -73.602625 0.615768
LBFGS: 11 16:06:25 -73.633940 0.510137
LBFGS: 12 16:06:25 -73.656341 0.310855
LBFGS: 13 16:06:25 -73.673779 0.288316
LBFGS: 14 16:06:25 -73.688417 0.317602
LBFGS: 15 16:06:25 -73.705938 0.302659
LBFGS: 16 16:06:25 -73.722209 0.325956
LBFGS: 17 16:06:25 -73.735457 0.385612
LBFGS: 18 16:06:25 -73.748883 0.406121
LBFGS: 19 16:06:25 -73.764926 0.342543
LBFGS: 20 16:06:25 -73.778700 0.295499
LBFGS: 21 16:06:25 -73.786688 0.198987
LBFGS: 22 16:06:25 -73.791164 0.206302
LBFGS: 23 16:06:25 -73.795637 0.172821
LBFGS: 24 16:06:25 -73.800348 0.151887
LBFGS: 25 16:06:26 -73.804514 0.161253
LBFGS: 26 16:06:26 -73.807316 0.168133
LBFGS: 27 16:06:26 -73.811388 0.171065
LBFGS: 28 16:06:26 -73.816693 0.163928
LBFGS: 29 16:06:26 -73.822618 0.172456
LBFGS: 30 16:06:26 -73.827520 0.165491
LBFGS: 31 16:06:26 -73.832298 0.166114
LBFGS: 32 16:06:26 -73.836870 0.235128
LBFGS: 33 16:06:26 -73.840173 0.273686
LBFGS: 34 16:06:26 -73.845364 0.167830
LBFGS: 35 16:06:26 -73.848792 0.134181
LBFGS: 36 16:06:26 -73.851981 0.232467
LBFGS: 37 16:06:26 -73.856400 0.198590
LBFGS: 38 16:06:26 -73.863311 0.310402
LBFGS: 39 16:06:27 -73.870934 0.210044
LBFGS: 40 16:06:27 -73.874591 0.383701
LBFGS: 41 16:06:27 -73.879686 0.167827
LBFGS: 42 16:06:27 -73.883564 0.148803
LBFGS: 43 16:06:27 -73.889836 0.347747
LBFGS: 44 16:06:27 -73.893750 0.217652
LBFGS: 45 16:06:27 -73.897174 0.124079
LBFGS: 46 16:06:27 -73.900613 0.290783
LBFGS: 47 16:06:27 -73.903635 0.144156
LBFGS: 48 16:06:27 -73.908172 0.149590
LBFGS: 49 16:06:27 -73.913325 0.141136
LBFGS: 50 16:06:27 -73.916564 0.195442
LBFGS: 51 16:06:27 -73.921801 0.339487
LBFGS: 52 16:06:28 -73.929054 0.388791
LBFGS: 53 16:06:28 -73.936489 0.343151
LBFGS: 54 16:06:28 -73.942297 0.213389
LBFGS: 55 16:06:28 -73.946415 0.406877
LBFGS: 56 16:06:28 -73.941826 0.956343
LBFGS: 57 16:06:28 -73.953157 0.484563
LBFGS: 58 16:06:28 -73.958003 0.143625
LBFGS: 59 16:06:28 -73.961625 0.140891
LBFGS: 60 16:06:28 -73.963749 0.916458
LBFGS: 61 16:06:28 -73.971495 0.467273
LBFGS: 62 16:06:28 -73.976820 0.142323
LBFGS: 63 16:06:28 -73.980921 0.344159
LBFGS: 64 16:06:29 -73.984855 0.285725
LBFGS: 65 16:06:29 -73.991281 0.156891
LBFGS: 66 16:06:29 -73.992750 0.132084
LBFGS: 67 16:06:29 -73.994371 0.112886
LBFGS: 68 16:06:29 -73.996460 0.158670
LBFGS: 69 16:06:29 -73.998556 0.175966
LBFGS: 70 16:06:29 -74.001821 0.135574
LBFGS: 71 16:06:29 -74.005816 0.151122
LBFGS: 72 16:06:29 -74.008853 0.179225
LBFGS: 73 16:06:29 -74.011311 0.153627
LBFGS: 74 16:06:29 -74.012836 0.112600
LBFGS: 75 16:06:29 -74.014010 0.135949
LBFGS: 76 16:06:29 -74.015207 0.136904
LBFGS: 77 16:06:29 -74.016179 0.140221
LBFGS: 78 16:06:30 -74.017397 0.069750
Step Time fmax
NEBOptimizer[ode]: 0 16:06:32 5.3762
NEBOptimizer[ode]: 1 16:06:32 4.1302
NEBOptimizer[ode]: 2 16:06:33 3.7265
NEBOptimizer[ode]: 3 16:06:34 3.0873
NEBOptimizer[ode]: 4 16:06:35 2.6653
NEBOptimizer[ode]: 5 16:06:36 2.3850
NEBOptimizer[ode]: 6 16:06:37 2.2150
NEBOptimizer[ode]: 7 16:06:38 2.0172
NEBOptimizer[ode]: 8 16:06:39 1.9237
NEBOptimizer[ode]: 9 16:06:39 1.4709
NEBOptimizer[ode]: 10 16:06:40 1.0858
NEBOptimizer[ode]: 11 16:06:41 1.5471
NEBOptimizer[ode]: 12 16:06:42 1.4571
NEBOptimizer[ode]: 13 16:06:43 0.5922
NEBOptimizer[ode]: 14 16:06:43 0.5751
NEBOptimizer[ode]: 15 16:06:44 0.5426
NEBOptimizer[ode]: 16 16:06:45 0.4981
NEBOptimizer[ode]: 17 16:06:46 0.8017
NEBOptimizer[ode]: 18 16:06:47 0.4496
NEBOptimizer[ode]: 19 16:06:48 0.4568
NEBOptimizer[ode]: 20 16:06:49 0.4591
NEBOptimizer[ode]: 21 16:06:49 0.4587
NEBOptimizer[ode]: 22 16:06:50 0.4708
NEBOptimizer[ode]: 23 16:06:52 0.4437
NEBOptimizer[ode]: 24 16:06:53 0.4358
NEBOptimizer[ode]: 25 16:06:53 0.4284
NEBOptimizer[ode]: 26 16:06:54 0.4134
NEBOptimizer[ode]: 27 16:06:56 0.4078
NEBOptimizer[ode]: 28 16:06:57 0.4041
NEBOptimizer[ode]: 29 16:06:57 0.3926
NEBOptimizer[ode]: 30 16:06:58 0.3648
NEBOptimizer[ode]: 31 16:07:00 0.3347
NEBOptimizer[ode]: 32 16:07:01 0.3391
NEBOptimizer[ode]: 33 16:07:02 0.3364
NEBOptimizer[ode]: 34 16:07:03 0.3336
NEBOptimizer[ode]: 35 16:07:03 0.3233
NEBOptimizer[ode]: 36 16:07:04 0.5437
NEBOptimizer[ode]: 37 16:07:06 0.2764
NEBOptimizer[ode]: 38 16:07:06 0.2741
NEBOptimizer[ode]: 39 16:07:07 0.2717
NEBOptimizer[ode]: 40 16:07:08 0.2633
NEBOptimizer[ode]: 41 16:07:09 0.2340
NEBOptimizer[ode]: 42 16:07:10 0.2243
NEBOptimizer[ode]: 43 16:07:11 0.2250
NEBOptimizer[ode]: 44 16:07:12 0.2240
NEBOptimizer[ode]: 45 16:07:13 0.2229
NEBOptimizer[ode]: 46 16:07:13 0.2185
NEBOptimizer[ode]: 47 16:07:14 0.2102
NEBOptimizer[ode]: 48 16:07:16 0.2119
NEBOptimizer[ode]: 49 16:07:16 0.2121
NEBOptimizer[ode]: 50 16:07:17 0.2124
NEBOptimizer[ode]: 51 16:07:18 0.2128
NEBOptimizer[ode]: 52 16:07:19 0.2142
NEBOptimizer[ode]: 53 16:07:20 0.2196
NEBOptimizer[ode]: 54 16:07:21 0.2224
NEBOptimizer[ode]: 55 16:07:22 0.2229
NEBOptimizer[ode]: 56 16:07:23 0.2233
NEBOptimizer[ode]: 57 16:07:23 0.2236
NEBOptimizer[ode]: 58 16:07:24 0.2252
NEBOptimizer[ode]: 59 16:07:25 0.2316
NEBOptimizer[ode]: 60 16:07:26 0.2423
NEBOptimizer[ode]: 61 16:07:26 0.2827
NEBOptimizer[ode]: 62 16:07:27 0.2444
NEBOptimizer[ode]: 63 16:07:28 0.2452
NEBOptimizer[ode]: 64 16:07:29 0.2455
NEBOptimizer[ode]: 65 16:07:30 0.2468
NEBOptimizer[ode]: 66 16:07:30 0.2523
NEBOptimizer[ode]: 67 16:07:31 0.2805
NEBOptimizer[ode]: 68 16:07:33 0.2958
NEBOptimizer[ode]: 69 16:07:33 0.2985
NEBOptimizer[ode]: 70 16:07:34 0.3003
NEBOptimizer[ode]: 71 16:07:35 0.3018
NEBOptimizer[ode]: 72 16:07:36 0.3076
NEBOptimizer[ode]: 73 16:07:37 0.3322
NEBOptimizer[ode]: 74 16:07:38 0.3403
NEBOptimizer[ode]: 75 16:07:39 0.3421
NEBOptimizer[ode]: 76 16:07:40 0.3440
NEBOptimizer[ode]: 77 16:07:40 0.3510
NEBOptimizer[ode]: 78 16:07:41 0.3721
NEBOptimizer[ode]: 79 16:07:43 0.3757
NEBOptimizer[ode]: 80 16:07:43 0.3765
NEBOptimizer[ode]: 81 16:07:44 0.3774
NEBOptimizer[ode]: 82 16:07:45 0.3797
NEBOptimizer[ode]: 83 16:07:46 0.3631
NEBOptimizer[ode]: 84 16:07:47 0.3573
NEBOptimizer[ode]: 85 16:07:48 0.3548
NEBOptimizer[ode]: 86 16:07:49 0.3518
NEBOptimizer[ode]: 87 16:07:50 0.3425
[13]:
display(SVG("janus_results/ethanol_reactants_2water-neb-plot.svg"))
[14]:
from ase.io import read
from weas_widget import WeasWidget
nebp = read("janus_results/ethanol_reactants_2water-neb-band.extxyz", index=":")
w2 = WeasWidget()
w2.from_ase(nebp)
w2.avr.model_style = 1
w2.avr.show_hydrogen_bonds = True
w2
[14]:
Extra bits¶
Analyse the barrier height trend.
Consider redoing the same exercise with a different potential. Remember: if you use MACE-OFF, dispersion needs to be off.
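For example, a modified config for MACE-OFF might look like the following sketch (the `model: medium` value is an assumption to check against the MACE-OFF documentation; note that `dispersion` is deliberately not enabled):

```yaml
# neb_maceoff.yml -- sketch; verify model names against the MACE-OFF docs
init_struct: ../data/ethanol_reactants.extxyz
final_struct: ../data/ethanol_products.extxyz
n_images: 11
device: cpu
arch: mace_off
minimize: True
plot_band: True
write_band: True
calc_kwargs:
  model: medium  # no dispersion key: MACE-OFF should be run without it
tracker: False
```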