First-Principles Molecular Dynamics Program STATE Senri Wiki
* Smith [#ofcf0768]
"Smith" is a computer cluster based on the Intel and Inte...
#contents
** Login nodes [#b65445bf]
To use the "Smith" system, log in to the following nodes:
-[smith] 133.1.116.161
-[rafiki] 133.1.116.162
-[tiamat] 133.1.116.211
To use the "sb100" system, use the following node:
-[sb100] 133.1.116.165
** How to log in to the login nodes [#y3d6531e]
To log in to "smith", type
$ ssh -l [login_name] 133.1.116.161
or
$ ssh [userID]@133.1.116.161
If you want to enable X11 forwarding, use
$ ssh -Y -l [login_name] 133.1.116.161
or
$ ssh -Y [login_name]@133.1.116.161
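For convenience, you can also define a host alias in your OpenSSH client configuration; a minimal sketch (the alias "smith" and the user name "taro" are placeholders):
# ~/.ssh/config
Host smith
  HostName 133.1.116.161
  User taro
  ForwardX11 yes
After this, typing "$ ssh smith" is enough.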
Currently, you will get the following message upon login:
-bash: /usr/local/g09/D01/g09/bsd/g09.profile: Permission denied
but in most cases it does not affect your work.
NOTE: When you log in for the first time, change your initial password by typing
$ yppasswd
** How to compile and run the program [#z7374c65]
In the latest environment (as of October 2020), compilers and libraries are set up with the module command.
To check the available modules, type
$ module avail
and to load specific modules, type, for example:
$ module load intel/2020.2.254
$ module load intelmpi/2020.2.254
$ module load python/3.8
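To check which modules are currently loaded in your session, type
$ module list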
Note that these module settings last only for the current session; to load them automatically at every login, add the following lines to your start-up file (e.g. ~/.bashrc):
module load intel/2020.2.254
module load intelmpi/2020.2.254
module load python/3.8
Make sure that the old settings are deleted and/or commented out:
# source /home/opt/settings/2017.4/intel-compiler.sh
# source /home/opt/settings/2017.4/intel-mpi.sh
Also make sure to load the same modules in your job script.
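As a minimal sketch of compiling and running an MPI program (the source file name, process count, and input/output file names are placeholders), with the Intel modules loaded:
$ mpiifort -O2 -o a.out main.f90   # mpiicc / mpiicpc for C / C++ sources
$ mpirun -np 4 ./a.out < input.dat > output.dat
For production runs, submit the program through the queueing system as described in the next section.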
** How to submit your jobs [#pa13b1de]
To execute your program, use the queueing system: jobs are submitted with the qsub command.
For instance, to execute a script "job.sh" on one node of group 10 (queue xh1), type
$ qsub -q xh1 -l select=1:ncpus=24:mpiprocs=24:ompthreads=1 job.sh
Note that the queue (group) and the number of cores can also be specified inside the job script, as in the examples below.
To see the job status, type
$ qstat
To see the job status of the specific user, type
$ qstat -u [user ID]
To cancel a job, use
$ qdel [job ID]
where the job ID can be obtained with qstat (the number shown in the first column).
If you want to see the status of all nodes together with the jobs of all users, type
$ qstat2
*** Examples of job script [#u8b8a717]
In the following, examples for each group (queue) are listed. With these scripts you can submit a job simply by typing
$ qsub job.sh
and do not have to specify the queue group and number of cores on the command line.
- Group 4
#!/bin/bash
#PBS -q xe1
#PBS -l select=1:ncpus=8:mpiprocs=8:ompthreads=1
#PBS -N JOB_NAME
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
mpirun ./a.out < input.dat > output.dat
- Group 5
#!/bin/sh
#PBS -q xe2
#PBS -l select=1:ncpus=12:mpiprocs=12:ompthreads=1
#PBS -N JOB_NAME
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
mpirun ./a.out < input.dat > output.dat
- Group 7 (sb100)
-- Hybrid parallelization (e.g., use 12 cores with 6 threads per MPI process)
#!/bin/sh
#$ -cwd
#$ -q sb.q
#$ -pe x6 12
#$ -N JOB_NAME
module load intel/2021.2.0
module load intelmpi/2021.2.0
# Above settings should be consistent with those used in compiling your program.
export OMP_NUM_THREADS=6
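# With "-pe x6 12", 12 slots are allocated 6 per host; "-perhost 1" then starts
# one MPI process per host ($NHOSTS hosts, here 2), each running 6 OpenMP threads.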
mpirun -perhost 1 -np $NHOSTS ./a.out < input.dat > output.dat
-- Flat parallelization (12 cores)
#$ -S /bin/bash
#$ -cwd
#$ -q sb.q
#$ -pe x6 12
#$ -N JOB_NAME
module load intel/2021.2.0
module load intelmpi/2021.2.0
# Above settings should be consistent with those used in compiling your program.
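# $NSLOTS equals the 12 slots requested with "-pe x6 12": one MPI process per core.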
mpirun -np $NSLOTS ./a.out < input.dat > output.dat
- Group 8
#!/bin/bash
#PBS -q xs2
#PBS -l select=1:ncpus=16:mpiprocs=16:ompthreads=1
#PBS -N JOB_NAME
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_ADJUST_ALLGATHERV=2
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
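The "cat $PE_HOSTFILE | awk ..." line here and in the scripts below rewrites the scheduler's hostfile into host:slots entries, dividing the slot count by OMP_NUM_THREADS. An illustration with hypothetical node names (OMP_NUM_THREADS=1):
# $PE_HOSTFILE entry:   line written to hostfile.$JOB_ID:
#   xs01 16             xs01:16
#   xs02 16             xs02:16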
- Group 9
#!/bin/sh
#PBS -q xi1
#PBS -l select=1:ncpus=16:mpiprocs=16:ompthreads=1
#PBS -N JOB_NAME
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_ADJUST_ALLGATHERV=2
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 10
#!/bin/sh
#PBS -q xh1
#PBS -l select=2:ncpus=48:mpiprocs=48:ompthreads=1
#PBS -N JOB_NAME
#PBS -j oe
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:ofa
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 11
#!/bin/sh
#PBS -q xh2
#PBS -l select=2:ncpus=48:mpiprocs=48:ompthreads=1
#PBS -N JOB_NAME
#PBS -j oe
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:ofa
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 13
#!/bin/sh
#PBS -q xb1
#PBS -l select=1:ncpus=32:mpiprocs=32:ompthreads=1
#PBS -N JOB_NAME
#PBS -j oe
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:ofa
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 14
#!/bin/sh
#PBS -q x17
#PBS -l select=1:ncpus=32:mpiprocs=32:ompthreads=1
#PBS -N JOB_NAME
#PBS -j oe
cd $PBS_O_WORKDIR
module load intel/2020.2.254 intelmpi/2020.2.254
# Above settings should be consistent with those used in compiling your program.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:dapl
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
** Computer nodes and queues [#gcad41b2]
| Group | Processor | #Cores/#CPUs (per node) | #Nodes | Submission node | Queue | Notes |
| 4 | Xeon | 8/2 | N/A | smith | xe1 | |
| 5 | Xeon | 12/2 | N/A | smith | xe2 | |
| 6 | Xeon Sandy Bridge | 16/2 | 18 | smith | xs2 | |
| 7 | Xeon Ivy Bridge | 16/2 | 13 | smith | xi1 | |
| 7a | Core i7 Sandy Bridge | 6/1 | 13 | sb100 | all.q | |
| 8 | Xeon Sandy Bridge | 16/2 | | smith | xs2 | |
| 9 | Xeon Ivy Bridge | 16/2 | | smith | xi1 | |
| 10a | Xeon Haswell | 24/2 | 13 | smith | xh1 | |
| 10b | Xeon Haswell | 24/2 | 13 | smith | xh2 | |
| 11 | Xeon Haswell | 24/2 | | smith | xh2 | in... |
| 13 | Xeon Broadwell | 32/2 | 14 | smith | xb1 | |
| 14 | Xeon Skylake | 32/2 | 18 | smith | x17 | |
| 15 | Xeon Cascade Lake | 40/2 | 6 | smith | | |
| 16 | Xeon Cascade Lake | 52/2 | 20 | smith | | |
| 17 | Xeon Ice Lake | 64/2 | 33 | smith | x2... | |
NOTE:
- To submit a job to the group 7 nodes, log in to sb100 and execute qsub there.
- To submit a job to the nodes of the other groups, log in to smith and execute qsub there.
*** Group 4, 5 "xe" system [#e02a591e]
The "xe" system is composed of the nodes with the Xeon CP...
*** Group 7 "sb100" system [#m1b620b9]
The "sb100" system is based on the Core i7 CPUs with the ...
*** Group 8 "xs" system [#j5c755e2]
The "xs" system is based on the Xeon CPUs with the Sandy-...
*** Group 9 "xi" system [#ldcc860e]
The "xi" system is based on the Xeon CPUs with the Ivy-br...
*** Group 10, 11, 12 "xh" system [#oc5448db]
The "xh" system is composed of the nodes with 2 Xeon CPUs...
*** Group 13 "xb" system [#h9aaaca9]
The "xb" system is composed of the nodes with 2 Xeon Broa...
*** Group 14 "x17" system [#h004b4e7]
The "x17" system is composed of the nodes with 2 Xeon Sky...
** Network structure [#pa8ce3cd]
- ~ "|" indicates a network connection, "[]" name, for th...
+ Engineering intranet, ODINS network
|
| Backbone network( no access outside of engin...
| |
+- [smith] -----+ 133.1.116.161...
+- [rafiki] ----+ 133.1.116.162...
+- [tiamat] ----+ 133.1.116.211...
| |
| +-- [xe00], [xe01] Calc. node, g...
| +-- [xe02]-[xe06] Calc. node, g...
| |
| +-- [xs01]-[xs18] Calc. node, g...
| |
| +-- [xi01]-[xi12] Calc. node, g...
| |
| +-- [xh01]-[xh17]
| +-- [xh19]-[xh34] Calc. node, g...
| +-- [xh18],[xh35]-[xh43] Calc. node, g...
| +-- [xb01]-[xb14] Calc. node, g...
| +-- [x1701]-[x1706] Calc. node, g...
| |
| |
+- [sb100] -----+ 133.1.116.165...
|
+-- [sb101]-[sb120] Calc. node, g...