
How to use Mathematica in a high-performance computing (HPC) environment

Posted 9 years ago

I'm going to show you how you can use existing functionality to run a Mathematica script across a managed HPC cluster. Before I start, I must be upfront with you: though the individual commands are documented, this method as a whole is not. Support for this procedure is therefore outside the scope of Wolfram Technical Support. However, I'm hoping that once the groundwork has been laid, Wolfram Community members can work together to fill in the missing details.

My assumptions:

  1. Mathematica is installed and properly licensed on the managed cluster
  2. once your job has been given resources, you can freely SSH between them

(1) This is up to your local cluster's System Admin to figure out by talking with their organization, a Wolfram Sales Representative, and possibly Wolfram Technical Support (support.wolfram.com). (2) Again, this is up to your local SysAdmin to ensure; in practice it means a public/private key pair is set up between the nodes.
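As a hedged sketch of what (2) can look like on a cluster with a cloned home file system (the key type and paths here are assumptions, and many sites configure this for you, so check with your SysAdmin before doing it yourself):

```shell
# Illustrative sketch: create a passphrase-less key pair and authorize it.
# On a cloned/shared home file system, appending your own public key to
# authorized_keys effectively authorizes it on every node at once.
ssh_dir="$HOME/.ssh"
mkdir -p "$ssh_dir" && chmod 700 "$ssh_dir"
if [ ! -f "$ssh_dir/id_ed25519" ]; then
    ssh-keygen -t ed25519 -N "" -f "$ssh_dir/id_ed25519" -q
fi
cat "$ssh_dir/id_ed25519.pub" >> "$ssh_dir/authorized_keys"
chmod 600 "$ssh_dir/authorized_keys"
```

With this in place, `ssh other-node pwd` from inside a job should succeed without prompting for a password.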

In the following, I'm assuming the cluster uses Torque (Torque SysAdmin Guide), but in principle other managers can be used. A generic Mathematica script job submission may look like the following:

#PBS -N Job_name
#PBS -l walltime=10:30
#PBS -l nodes=4:ppn=6
#PBS -m be

math -script hpc.wl

In this example,

  • the job is called "Job_name"
  • the job is allotted a maximum walltime of 10 and a half minutes
  • it is requesting 4 nodes with 6 processors-per-node, for a total of 24 resources (CPU cores)
  • an email will be sent to the account associated with the username when the job (b)egins and when it (e)nds

If you are not familiar with job submissions to a managed HPC cluster, I suggest you read any guides provided by your organization.

The Wolfram Language script "hpc.wl" does the rest of the work. It generically follows this order:

  1. gather the environment variables associated with the list of provided resources
  2. launch remote subkernels for each CPU core
  3. do the parallel computations
  4. close the subkernels
  5. end the job

    (*get association of resources, name of local host, and remove local host from available resources*)
    hosts = Counts[ReadList[Environment["PBS_NODEFILE"], "String"]];
    local = First[StringSplit[Environment["HOSTNAME"],"."]];
    hosts[local]--;
    
    (*launch subkernels and connect them to the controlling Wolfram Kernel*)
    Needs["SubKernels`RemoteKernels`"];
    Map[If[hosts[#] > 0, LaunchKernels[RemoteMachine[#, hosts[#]]]]&, Keys[hosts]];
    
    (* ===== regular Wolfram Language code goes here ===== *)
    Print[ {$MachineName, $KernelID} ]
    (* ===== end of Wolfram Language program ===== *)
    
    CloseKernels[];
    Quit[]
    

On Torque, the environment variable "PBS_NODEFILE" (see the Torque environment variables documentation) lists the nodes provided to the job. It is my understanding that a node's name is repeated once for each of its CPU cores, which is why a simple Counts of the node list tells us everything. The other piece of information, which is probably not necessary, is "HOSTNAME": this is where the controlling Wolfram kernel is running. In the script above we remove it from the list of available resources, but I don't believe this is strictly necessary. According to the Torque documentation, this may also be available as "PBS_O_HOSTNAME".
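The counting logic is language-independent, so here is the same idea as a small Python sketch for readers less familiar with the Wolfram Language (the file contents and host names are invented for illustration):

```python
from collections import Counter

def subkernels_per_host(nodefile_lines, local_hostname):
    """Count CPU cores per node from a PBS_NODEFILE-style listing,
    reserving one core on the local host for the controlling kernel."""
    hosts = Counter(nodefile_lines)
    # strip any domain suffix, mirroring StringSplit[..., "."]
    local = local_hostname.split(".")[0]
    if hosts[local] > 0:
        hosts[local] -= 1          # reserve a core for the main kernel
    return {h: n for h, n in hosts.items() if n > 0}

# Example: 2 nodes x 3 cores each, controlling kernel on node-a
lines = ["node-a"] * 3 + ["node-b"] * 3
print(subkernels_per_host(lines, "node-a.cluster.local"))
# → {'node-a': 2, 'node-b': 3}
```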

The Mathematica script should not need to change save for the code between the commented lines. I'm also assuming that $RemoteCommand (provided by Subkernels`RemoteKernels`) is the same on each node. This is usually the case as most clusters use a cloned file system.

SLURM should be very similar except that the environment variables will be different. It is my understanding that

    headNode = Environment["SLURMD_NODENAME"];
    nodes = ReadList["!scontrol show hostname $SLURM_NODELIST",String]; 

provides the headnode and list of resources.
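SLURM also exposes SLURM_TASKS_PER_NODE, which encodes core counts in a compressed form (e.g. "40(x2),1") paired with the expanded node list. A hedged Python sketch of decoding it (the node names and counts are invented; check your site's sbatch documentation for the exact variables):

```python
import re

def tasks_per_node(spec):
    """Expand SLURM_TASKS_PER_NODE, e.g. "40(x2),1" -> [40, 40, 1]."""
    counts = []
    for part in spec.split(","):
        m = re.fullmatch(r"(\d+)(?:\(x(\d+)\))?", part)
        n, rep = int(m.group(1)), int(m.group(2) or 1)
        counts.extend([n] * rep)
    return counts

# Paired with the expanded node list, this yields core counts per host:
nodes = ["atulya095", "atulya049", "atulya050"]   # invented names
print(dict(zip(nodes, tasks_per_node("40(x2),1"))))
# → {'atulya095': 40, 'atulya049': 40, 'atulya050': 1}
```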

I encourage discussion.

POSTED BY: Kevin Daily
13 Replies
Posted 4 years ago

Hi,

I was granted SSH access privilege from the cluster manager and successfully ran parallel computing through SLURM. The attached file is copied from my test note then.

Note: at the time I was trying to solve this issue, Mathematica working with SLURM followed this approach: "Slurm creates a resource allocation for the job and then mpirun launches tasks using some mechanism other than Slurm, such as SSH or RSH". Alternative approaches that do not require SSH access were not supported.

I think supporting SLURM without requiring SSH access would be a very nice feature to add to Mathematica, as is already available in MATLAB.

Hopefully, this could be helpful to you.

POSTED BY: zhe duan

People who are looking at this might find this independent implementation, made for SGE and a specific HPC cluster, useful: https://bitbucket.org/szhorvat/crc/src

An issue specific to this system was that ssh would not work, and rsh (a specific version of rsh!) had to be used.

POSTED BY: Kevin Daily

Could you tell us why we need to remove the controlling Wolfram kernel?

Removing it is not strictly necessary. My thought was that the CPU core on which the main kernel runs would be too busy managing inter-kernel communication to contribute useful computation of its own.

POSTED BY: Kevin Daily
Posted 4 years ago

Dear Kevin,

Thanks for your reply.

Yes, there are 40 cores on atulya049 as well.

Ok, I will do one heavy calculation and get back to you.

With thanks, Sachin.

PS: In your first post, you suggested removing the controlling Wolfram kernel from the list of available resources. So in my program, out of the requested cores (80), 79 go into the calculation.

Could you tell us why we need to remove the controlling Wolfram kernel?

POSTED BY: Sachin Kumar
POSTED BY: Kevin Daily
Posted 4 years ago

Kindly Help:

I access Mathematica (through MobaXterm), which is installed in my user area on a Torque-managed HPC cluster, and I want to connect to the multiple nodes assigned to my job in the queue.

PROBLEM

Following is the program to connect 2 nodes, and each node consists of 40 cores.

Everything looks fine, except that the time taken by ParallelTable at the end of the program is much higher than the time taken on a single node (without launching remote kernels).

Note that the two allotted nodes, atulya095 and atulya049, each appear 40 times in the output.

PROGRAM

nodes=ReadList[Environment["PBS_NODEFILE"], "String"];
Print["alloted node are  ", nodes];    (*get association of resources, name of local host and remove local host from available resources*)
hosts = Counts[nodes];                                             
local = First[StringSplit[Environment["HOSTNAME"],"."]];
Print["local node is ", local];
hosts[local]--;
Needs["SubKernels`RemoteKernels`"];               
Map[If[hosts[#] > 0, LaunchKernels[RemoteMachine[#, "ssh -x -f -l `3` `1` wolfram  -wstp -linkmode Connect `4` -linkname '`2`' -subkernel -noinit", hosts[#]]]]&,    Keys[hosts]];                
Print["kernel count is  ",$KernelCount];           
Print[" machine name is  ", ParallelEvaluate[$MachineName]];
Print[" kernel id is  ",ParallelEvaluate[$KernelID]];
Print["processor count is  ",$ProcessorCount];
Print[AbsoluteTiming[ParallelTable[Exp[Sin[x]]^Sin[x],{x,0.1,200000,0.0001}];][[1]]];   
CloseKernels[];
Quit[];

OUTPUT

alloted node are  {atulya095, atulya095, ..., atulya095, atulya049, atulya049, ..., atulya049}   (each node ×40, 80 entries total)
local node is atulya095
kernel count is  79
machine name is  {atulya095, ..., atulya095, atulya049, ..., atulya049}   (atulya095 ×39, atulya049 ×40)
kernel id is  {1, 2, 3, ..., 79}
processor count is  40
189.309181

I will be thankful to your help here.

POSTED BY: Sachin Kumar
Posted 4 years ago

@zhe duan , Hi, Thanks for replying, I will take a look at attachment and get back to you.

POSTED BY: Sachin Kumar
Posted 4 years ago

The link you shared is broken; could you share it again? It might be helpful to me.

POSTED BY: Sachin Kumar
Posted 6 years ago
POSTED BY: zhe duan

In practice, you only need to log in to the head node via SSH. Having no direct SSH access to the compute nodes from outside the cluster is a good thing!

My understanding is that once a job is running on the cluster, that job's resources (the compute nodes provided to the job) can freely communicate with one another. Considering that MPI parallelization works on the cluster, I would be surprised if you could not also use the method I described to have Mathematica run on the SLURM cluster.

You first need to truly test your SSH setup before going any further. I suggest starting an interactive session so you have access to the command line, but request more than one compute node. In the following, $> represents the terminal prompt. Echo the cluster manager's environment variables, e.g.

$> scontrol show hostname $SLURM_JOB_NODELIST
$> echo $SLURM_TASKS_PER_NODE

Then try a simple SSH command from your main node to one of the other compute nodes you've requested. Something like

$> ssh remote-compute-node-name pwd

If password-less SSH is available then this will display the remote compute node's working directory (via the 'pwd' command). If the nodes use a cloned file system then the working directory should be your home directory on the cluster. Resolve any issues before moving on.

POSTED BY: Kevin Daily
Posted 6 years ago
POSTED BY: zhe duan

Congratulations! This post is now a Staff Pick! Thank you for your wonderful contributions. Please keep them coming!

POSTED BY: EDITORIAL BOARD