A library for coupling (Py)Torch machine learning models to Fortran
This repository contains code, utilities, and examples for directly calling PyTorch ML models from Fortran.
For full API and user documentation please see the online documentation which is significantly more detailed than this README.
To cite use of this code please refer to Atkinson et al. (2025), DOI: 10.21105/joss.07602. See the acknowledgment section below for more details.
- Description
- Installation
- Usage
- GPU Support
- Examples
- License
- Contributions
- Authors and Acknowledgment
- Users
It is desirable to be able to run machine learning (ML) models directly in Fortran. Such models are often trained in some other language (say Python) using popular frameworks (say PyTorch) and saved. We want to run inference on this model without having to call a Python executable. To achieve this we use the existing Torch C++ interface.
This project provides a library enabling a user to directly couple their PyTorch models to Fortran code. We provide installation instructions for the library as well as instructions and examples for performing coupling.
```fortran
use ftorch
...
type(torch_model) :: model
type(torch_tensor), dimension(n_inputs) :: model_inputs_arr
type(torch_tensor), dimension(n_outputs) :: model_output_arr
...
call torch_model_load(model, "/my/saved/TorchScript/model.pt", torch_kCPU)
call torch_tensor_from_array(model_inputs_arr(1), input_fortran, in_layout, torch_kCPU)
call torch_tensor_from_array(model_output_arr(1), output_fortran, out_layout, torch_kCPU)
call torch_model_forward(model, model_inputs_arr, model_output_arr)
```

The following presentations provide an introduction and overview of FTorch:
- FTorch: Facilitating Hybrid Modelling
  N8-CIR Seminar, Leeds - July 2025
  Slides - Recording
- Coupling Machine Learning to Numerical (Climate) Models
  Platform for Advanced Scientific Computing, Zurich - June 2024
  Slides
If you are interested in using this library please get in touch.
For a similar approach to calling TensorFlow models from Fortran please see Fortran-TF-lib.
Installing the library requires the following to be present on the system:
- CMake >= 3.15
- LibTorch* or PyTorch
- Fortran (2008 standard compliant), C++ (must fully support C++17), and C compilers
* The minimal example provided downloads the CPU-only Linux Nightly binary. Alternative versions may be required.
FTorch's test suite has some additional dependencies.
- You will also need to install the unit testing framework pFUnit.
- FTorch's test suite requires that PyTorch has been installed, as opposed to LibTorch. We recommend installing `torchvision` in the same command (e.g., `pip install torch torchvision`)*. Doing so ensures that `torch` and `torchvision` are configured in the same way.
- Other Python modules are installed automatically upon building the tests.
* For more details, see here.
If building in a Windows environment then you can either use
Windows Subsystem for Linux (WSL)
or Visual Studio and the Intel Fortran Compiler.
For full details on the process see the
online Windows documentation.
Note that LibTorch is not supported for the GNU Fortran compiler with MinGW.
At the time of writing there are issues
building FTorch on Apple Silicon when linking to downloaded LibTorch binaries or
pip-installed PyTorch.
FTorch can successfully be built, including utilising the MPS backend, from inside a
conda environment using the environment files and instructions in
conda/.
Conda is not our preferred approach for managing dependencies, but for users who want
an environment to build FTorch in we provide guidance and environment files in
conda/. Note that these
are not minimal and will install Python, PyTorch, and modules required for running the
tests and examples.
It is possible to try FTorch through an interactive browser session using GitHub
Codespace. Full instructions are in the
codespace/ directory.
For detailed installation instructions please see the online installation documentation.
To build and install the library:
1. Navigate to the location in which you wish to install the source and run:

   ```sh
   git clone [email protected]:Cambridge-ICCS/FTorch.git
   ```

   to clone via ssh, or

   ```sh
   git clone https://github.com/Cambridge-ICCS/FTorch.git
   ```

   to clone via https.

2. Navigate to the root FTorch directory by running:

   ```sh
   cd FTorch/
   ```

3. Build the library using CMake with the relevant options from the table below:

   ```sh
   mkdir build
   cd build
   cmake .. -DCMAKE_BUILD_TYPE=Release
   ```

   The following CMake options are available to be passed as arguments to `cmake` through `-D<Option>=<Value>`. It is likely that you will need to provide at least `CMAKE_PREFIX_PATH`.

   | Option | Value | Description |
   | --- | --- | --- |
   | `CMAKE_Fortran_COMPILER` | `gfortran` / `ifx` / `ifort` | Specify a Fortran compiler to build the library with. This should match the Fortran compiler you're using to build the code you are calling this library from.¹ |
   | `CMAKE_C_COMPILER` | `gcc` / `icx` / `icc` | Specify a C compiler to build the library with.¹ |
   | `CMAKE_CXX_COMPILER` | `g++` / `icx` / `icpc` | Specify a C++ compiler to build the library with.¹ |
   | `CMAKE_PREFIX_PATH` | `</path/to/LibTorch/>` | Location of Torch installation² |
   | `CMAKE_INSTALL_PREFIX` | `</path/to/install/lib/at/>` | Location at which the library files should be installed. By default this is `/usr/local` |
   | `CMAKE_BUILD_TYPE` | `Release` / `Debug` | Specifies build type. The default is `Debug`; use `Release` for production code |
   | `CMAKE_BUILD_TESTS` | `TRUE` / `FALSE` | Specifies whether to compile FTorch's test suite as part of the build. |
   | `GPU_DEVICE` | `NONE` / `CUDA` / `HIP` / `XPU` / `MPS` | Specifies the target GPU backend architecture (if any)³ |
   | `MULTI_GPU` | `ON` / `OFF` | Specifies whether to build the tests that involve multiple GPU devices (`ON` by default if `CMAKE_BUILD_TESTS` and `GPU_DEVICE` are set). |

   ¹ On Windows this may need to be the full path to the compiler if CMake cannot locate it by default.
   ² The path to the Torch installation needs to allow CMake to locate the relevant Torch CMake files.
   If Torch has been installed as LibTorch then this should be the absolute path to the unzipped LibTorch distribution. If Torch has been installed as PyTorch in a Python venv (virtual environment), e.g. with `pip install torch`, then this should be `</path/to/venv/>lib/python<3.xx>/site-packages/torch/`.
   You can find the location of your torch install by importing torch from your Python environment (`import torch`) and running `print(torch.__file__)`.

   ³ This is often overridden by PyTorch. When installing with pip, the `--index-url` flag can be used to ensure a CPU-only or GPU-enabled version is installed, e.g. `pip install torch --index-url https://download.pytorch.org/whl/cpu`. URLs for alternative versions can be found here.

4. Make and install the library to the desired location with either:

   ```sh
   cmake --build . --target install
   ```

   or, if you want to separate these steps:

   ```sh
   cmake --build .
   cmake --install .
   ```

   Note: If using a machine capable of running multiple jobs this can be sped up by adding `--parallel [<jobs>]` or `-j [<jobs>]` to the `cmake --build` command. See the CMake documentation for more information.

   Installation will place the following directories at the install location:

   - `CMAKE_INSTALL_PREFIX/include/` - contains header and mod files
   - `CMAKE_INSTALL_PREFIX/lib/` - contains the `cmake` directory and `.so` files

   Note: depending on your system and architecture `lib` may be `lib64`, and you may have `.dll` files or similar.
Note: In a Windows environment this will require administrator privileges for the default install location.
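Putting the steps above together, a typical build against a pip-installed PyTorch might look like the following sketch (the install prefix here is hypothetical; adjust paths and options for your system):

```sh
git clone https://github.com/Cambridge-ICCS/FTorch.git
cd FTorch/
mkdir build && cd build
# Point CMAKE_PREFIX_PATH at the Torch package inside the active Python environment
TORCH_PATH=$(python -c "import torch; print(torch.__path__[0])")
cmake .. -DCMAKE_PREFIX_PATH="${TORCH_PATH}" \
         -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_INSTALL_PREFIX="${HOME}/ftorch"
cmake --build . --parallel
cmake --install .
```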
In order to use FTorch users will typically need to follow these steps:
- Save a PyTorch model as TorchScript.
- Write Fortran using the FTorch bindings to use the model from within Fortran.
- Build and compile the code, linking against the FTorch library.

These steps are described in more detail in the online documentation.
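The first of these steps can be sketched as follows in Python; the model class and file name here are hypothetical placeholders (FTorch also provides a `pt2ts.py` utility for this purpose):

```python
# Hypothetical sketch of saving a PyTorch model as TorchScript.
# TwoLayerNet and "model.pt" are placeholders, not part of FTorch itself.
import torch

class TwoLayerNet(torch.nn.Module):
    """A small stand-in for a trained model."""

    def __init__(self) -> None:
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(4, 8),
            torch.nn.ReLU(),
            torch.nn.Linear(8, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TwoLayerNet().eval()

# Convert to TorchScript and save; the resulting file is what
# torch_model_load reads on the Fortran side.
scripted = torch.jit.script(model)
scripted.save("model.pt")

# Sanity check: the saved model round-trips and runs.
reloaded = torch.jit.load("model.pt")
print(tuple(reloaded(torch.ones(1, 4)).shape))
```

Scripting (`torch.jit.script`) preserves Python control flow; `torch.jit.trace` is an alternative for models without data-dependent branching.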
To run on GPU requires an installation of LibTorch compatible for the GPU device
you wish to target (using the GPU_DEVICE CMake option as detailed in the
table above) and two main adaptations to the code:
- When saving a TorchScript model, ensure that it is on the appropriate GPU device type. The `pt2ts.py` script has a command line argument `--device_type`, which currently accepts 5 different device types: `cpu` (default), `cuda`, `hip`, `xpu`, or `mps`.
- When using FTorch in Fortran, set the device for the input tensor(s) to the appropriate GPU device type, rather than `torch_kCPU`. There are currently four options: `torch_kCUDA`, `torch_kHIP`, `torch_kXPU`, or `torch_kMPS`.
For detailed guidance about running on GPU, including instructions for using multiple devices, please see the online GPU documentation.
If your code uses large tensors (more than 2,147,483,647 elements in any one dimension, i.e. the maximum value of a 32-bit integer), you may need to compile FTorch with 64-bit integers. For information on how to do this, please see our FAQ.
Examples of how to use this library are provided in the examples directory.
They demonstrate different functionalities of the code and are provided with
instructions to modify, build, and run as necessary.
For information on testing, see the corresponding
webpage
or the README in the test subdirectory.
Copyright © ICCS
FTorch is distributed under the MIT Licence.
Contributions and collaborations are welcome.
For bugs, feature requests, and clear suggestions for improvement please open an issue.
If you have built something upon FTorch that would be useful to others, or can address an open issue, please fork the repository and open a pull request.
Detailed guidelines can be found in the online developer documentation.
Everyone participating in the FTorch project, and in particular in the issue tracker, pull requests, and social media activity, is expected to treat other people with respect and, more generally, to follow the guidelines articulated in the Python Community Code of Conduct.
FTorch is written and maintained by the ICCS.
To cite FTorch in research please refer to:
Atkinson et al., (2025). FTorch: a library for coupling PyTorch models to Fortran. Journal of Open Source Software, 10(107), 7602, https://doi.org/10.21105/joss.07602
See the CITATION.cff file or click 'Cite this repository' on the right.
See Contributors for a full list of contributors.
The following projects make use of this code or derivatives in some way:
- DataWave CAM-GW
- DataWave - MiMA ML
  See Mansfield and Sheshadri (2024) - DOI: 10.1029/2024MS004292
- Convection parameterisations in ICON
  See Heuer et al. (2024) - DOI: 10.1029/2024MS004398
- To replace a BiCGStab bottleneck in the GloSea6 Seasonal Forecasting model
  See Park and Chung (2025) - DOI: 10.3390/atmos16010060
- Emulation of cloud resolving models to reduce computational cost in E3SM
  See Hu et al. (2025) - DOI: 10.1029/2024MS004618 (and code)
Are we missing anyone? Let us know.