The GPU package was developed by Mike Brown while at SNL and ORNL, together with his collaborators, particularly Trung Nguyen (now at Northwestern). It provides GPU versions of many pair styles and of parts of the kspace_style pppm for long-range Coulombics. It has the following general features:
Required hardware/software:
To compile and use this package in CUDA mode, you currently need to have an NVIDIA GPU and install the corresponding NVIDIA CUDA toolkit software on your system (this is primarily tested on Linux and completely unsupported on Windows):
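For example, a quick way to confirm that a usable NVIDIA GPU and CUDA toolkit are present (assuming the NVIDIA driver is loaded and nvcc is on your PATH) is:

nvidia-smi        # lists the NVIDIA GPUs visible to the driver
nvcc --version    # reports the installed CUDA toolkit version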
To compile and use this package in OpenCL mode, you currently need to have the OpenCL headers and the (vendor neutral) OpenCL library installed. In OpenCL mode, the acceleration depends on having an OpenCL Installable Client Driver (ICD) installed. Multiple ICDs for the same or for different hardware (GPUs, CPUs, Accelerators) may be installed at the same time; OpenCL refers to those as 'platforms'. The GPU library will select the first suitable platform, but this can be overridden using the device option of the package command. Run lammps/lib/gpu/ocl_get_devices to get a list of available platforms and devices with a suitable ICD available.
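For example, assuming the GPU library has already been built in OpenCL mode so that the helper binary exists, the detected platforms and devices can be listed with:

cd lammps/lib/gpu
./ocl_get_devices     # prints each available OpenCL platform and its devices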
Building LAMMPS with the GPU package:
See the Build extras doc page for instructions.
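As a minimal sketch (option names can vary between LAMMPS versions; the Build extras page is authoritative), a CMake build enabling the package might look like:

cd lammps && mkdir build && cd build
cmake -D PKG_GPU=on -D GPU_API=cuda -D GPU_PREC=mixed ../cmake   # or GPU_API=opencl; verify option names on the Build extras page
make -j 8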
Run with the GPU package from the command line:
The mpirun or mpiexec command sets the total number of MPI tasks used by LAMMPS (one or multiple per compute node) and the number of MPI tasks used per node. E.g. the mpirun command in MPICH does this via its -np and -ppn switches. Ditto for OpenMPI via -np and -npernode.
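For example, to launch 16 MPI tasks with 8 tasks per node (flag names differ between MPI implementations):

mpirun -np 16 -ppn 8 lmp_machine -in in.script        # MPICH
mpirun -np 16 -npernode 8 lmp_machine -in in.script   # OpenMPI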
When using the GPU package, you cannot assign more than one GPU to a single MPI task. However, multiple MPI tasks can share the same GPU, and in many cases it will be more efficient to run this way. Likewise, it may be more efficient to use fewer MPI tasks/node than the available # of CPU cores. Assignment of multiple MPI tasks to a GPU will happen automatically if you create more MPI tasks/node than there are GPUs/node. E.g. with 8 MPI tasks/node and 2 GPUs, each GPU will be shared by 4 MPI tasks.
Use the "-sf gpu" command-line switch, which will automatically append "gpu" to styles that support it. Use the "-pk gpu Ng" command-line switch to set Ng = # of GPUs/node to use.
lmp_machine -sf gpu -pk gpu 1 -in in.script                          # 1 MPI task uses 1 GPU
mpirun -np 12 lmp_machine -sf gpu -pk gpu 2 -in in.script            # 12 MPI tasks share 2 GPUs on a single 16-core (or whatever) node
mpirun -np 48 -ppn 12 lmp_machine -sf gpu -pk gpu 2 -in in.script    # ditto on 4 16-core nodes
Note that if the "-sf gpu" switch is used, it also issues a default package gpu 1 command, which sets the number of GPUs/node to 1.
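That default is equivalent to adding this line at the top of the input script (a minimal form of the command; see the package command doc page for additional keywords):

package gpu 1    # use 1 GPU per node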
Using the "-pk" switch explicitly allows for setting of the number of GPUs/node to use and additional options. Its syntax is the same as same as the "package gpu" command. See the package command doc page for details, including the default values used for all its options if it is not specified.
Note that the default for the package gpu command is to set the Newton flag to "off" for pairwise interactions. It does not affect the setting for bonded interactions (LAMMPS default is "on"). The "off" setting for pairwise interactions is currently required for GPU package pair styles.
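In input-script terms, the combination described above corresponds roughly to the following sketch (the newton command takes separate pairwise and bonded flags):

package gpu 1        # GPU package default implies pairwise Newton "off"
newton off on        # explicit equivalent: pairwise off, bonded on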
Or run with the GPU package by editing an input script:
The discussion above for the mpirun/mpiexec command, MPI tasks/node, and use of multiple MPI tasks/GPU is the same.
Use the suffix gpu command, or you can explicitly add a "gpu" suffix to individual styles in your input script, e.g.
pair_style lj/cut/gpu 2.5
You must also use the package gpu command to enable the GPU package, unless the "-sf gpu" or "-pk gpu" command-line switches were used. It specifies the number of GPUs/node to use, as well as other options.
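Putting the pieces together, a minimal input-script fragment might look like this (values are illustrative only):

package gpu 2                 # use 2 GPUs per node
suffix gpu                    # append "gpu" to styles that support it
pair_style lj/cut 2.5         # runs as lj/cut/gpu because of the suffix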
Speed-ups to expect:
The performance of a GPU versus a multi-core CPU is a function of your hardware, which pair style is used, the number of atoms/GPU, and the precision used on the GPU (double, single, mixed). Using the GPU package in OpenCL mode on CPUs (which uses vectorization and multithreading) usually results in inferior performance compared to using LAMMPS' native threading and vectorization support in the USER-OMP and USER-INTEL packages.
See the Benchmark page of the LAMMPS web site for performance of the GPU package on various hardware, including the Titan HPC platform at ORNL.
You should also experiment with how many MPI tasks per GPU to use to give the best performance for your problem and machine. This is also a function of the problem size and the pair style being used. Likewise, you should experiment with the precision setting for the GPU library to see if single or mixed precision will give accurate results, since they will typically be faster.
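One simple way to run such a scan (a hypothetical script; adjust the task counts, GPU count, and file names for your machine) is:

for np in 2 4 8 16; do
  mpirun -np $np lmp_machine -sf gpu -pk gpu 2 -in in.script -log log.gpu.$np
done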
Guidelines for best performance:
Restrictions:
None.