First-principles molecular dynamics program STATE: Senri Wiki
* Smith [#ofcf0768]
"Smith" is a computer cluster based on the Intel and Inte...
#contents
** Login nodes [#b65445bf]
To use the "Smith" system, log in to the following nodes:
-[smith] 133.1.116.161
-[rafiki] 133.1.116.162
-[tiamat] 133.1.116.211
To use the "sb100" system, use the following node:
-[sb100] 133.1.116.165
** How to log in to the login nodes [#y3d6531e]
To log in to "smith", type
$ ssh -l [userID] 133.1.116.161
or
$ ssh [userID]@133.1.116.161
If you want to enable X11 forwarding, use
$ ssh -Y -l [userID] 133.1.116.161
or
$ ssh -Y [userID]@133.1.116.161
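For convenience, you can define a host alias in the ~/.ssh/config file on your local machine. The alias name "smith" and the user name below are just examples; adjust them to your own account:

```shell
# ~/.ssh/config (on your local machine) -- hypothetical alias, adjust User
Host smith
    HostName 133.1.116.161
    User your_userID
    ForwardX11Trusted yes
```

With this in place, `ssh smith` is equivalent to the full command above.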
Currently, you may get the following message upon login:
 -bash: /usr/local/g09/D01/g09/bsd/g09.profile: Permission denied
but in most cases it does not affect your work.
NOTE: When you log in for the first time, change your initial password by typing
$ yppasswd
** How to compile and run the program [#z7374c65]
In the latest environment (as of October 2020), compilers and libraries are provided through Environment Modules.
To check the available modules, type
$ module avail
and to load specific modules, type, for example:
$ module load intel/2020.2.254
$ module load intelmpi/2020.2.254
$ module load python/3.8
Note that these module settings are valid only for the current session; it is convenient to add the following lines to your ~/.bashrc so that they are loaded automatically at every login:
module load intel/2020.2.254
module load intelmpi/2020.2.254
module load python/3.8
Make sure that the old settings are deleted and/or commented out in your ~/.bashrc:
# source /home/opt/settings/2017.4/intel-compiler.sh
# source /home/opt/settings/2017.4/intel-mpi.sh
Also make sure to load the same modules in your job script.
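Putting the pieces together, a minimal ~/.bashrc fragment might look like the following sketch; the guard simply avoids errors on machines where the module command is not available:

```shell
# ~/.bashrc fragment (sketch) -- load the module-based environment at login.
# The old 2017.4 "source" lines are left commented out on purpose:
# source /home/opt/settings/2017.4/intel-compiler.sh
# source /home/opt/settings/2017.4/intel-mpi.sh
if command -v module >/dev/null 2>&1; then
    module load intel/2020.2.254
    module load intelmpi/2020.2.254
    module load python/3.8
fi
```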
** How to submit your jobs [#pa13b1de]
To execute your program, use the queueing system; jobs are usually submitted with the qsub command.
For instance, to execute a script "job.sh" on the nodes of the xh1.q queue using 24 cores, type
$ qsub -q xh1.q -pe x24 24 job.sh
Note that the group (queue) and the number of cores can also be specified in the job script itself (see the examples below).
To see the job status, type
$ qstat
To see the job status of the specific user, type
$ qstat -u [user ID]
To cancel a job, use
$ qdel [job ID]
where the job ID can be obtained with qstat (the number shown in the first column of its output).
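The qstat output can also be parsed with standard tools. For example, to cancel all of your own jobs at once (a sketch; it assumes the usual Grid Engine layout where qstat prints two header lines before the job list):

```shell
# Collect the job IDs (first column) from qstat, skipping the two header
# lines, and pass them to qdel. 'xargs -r' skips qdel when there are no jobs.
qstat -u "$USER" | awk 'NR > 2 { print $1 }' | xargs -r qdel
```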
*** Examples of job scripts [#u8b8a717]
In the following, example job scripts for each group (queue) are listed. With these scripts you can simply type
 $ qsub job.sh
and do not have to specify the queue group and number of cores on the command line.
- Group 4
#$ -S /bin/bash
#$ -cwd
#$ -q xe1.q
#$ -pe x8 8
#$ -N JOB_NAME
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
mpirun ./a.out < input.dat > output.dat
- Group 5
#$ -S /bin/bash
#$ -cwd
#$ -q xe2.q
#$ -pe x12 12
#$ -N JOB_NAME
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
mpirun ./a.out < input.dat > output.dat
- Group 7 (sb100)
-- Hybrid parallelization (e.g., 12 cores with 6 threads per MPI process)
#$ -S /bin/bash
#$ -cwd
#$ -q sb.q
#$ -pe x6 12
#$ -N JOB_NAME
module load intel/2021.2.0
module load intelmpi/2021.2.0
# Above settings should be consistent with those used in compilation.
export OMP_NUM_THREADS=6
mpirun -perhost 1 -np $NHOSTS ./a.out < input.dat > output.dat
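In this hybrid setup the numbers must stay consistent: `-pe x6 12` requests 12 slots spread over hosts with 6 slots each, so the scheduler sets $NSLOTS=12 and $NHOSTS=2, and with `-perhost 1` each of the 2 MPI processes runs 6 OpenMP threads (2 x 6 = 12 cores). A quick sanity check of this arithmetic (the values below just mirror the example; inside a job, NSLOTS and NHOSTS are set by the queueing system):

```shell
# Sanity check (sketch): ranks * threads must equal the requested slots.
NSLOTS=12
NHOSTS=2
OMP_NUM_THREADS=6
RANKS=$NHOSTS                      # one MPI process per host (-perhost 1)
echo $((RANKS * OMP_NUM_THREADS))  # prints 12, matching NSLOTS
```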
-- Flat parallelization (12 cores)
#$ -S /bin/bash
#$ -cwd
#$ -q sb.q
#$ -pe x6 12
#$ -N JOB_NAME
module load intel/2021.2.0
module load intelmpi/2021.2.0
# Above settings should be consistent with those used in compilation.
mpirun -np $NSLOTS ./a.out < input.dat > output.dat
- Group 8
#$ -S /bin/bash
#$ -cwd
#$ -q xs2.q
#$ -pe x16 16
#$ -N JOB_NAME
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_ADJUST_ALLGATHERV=2
export OMP_NUM_THREADS=1
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
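The awk line above converts the scheduler's $PE_HOSTFILE (one "hostname slots queue processor-range" line per node) into "host:ranks" entries, dividing each slot count by OMP_NUM_THREADS. A self-contained illustration with a fabricated hostfile (the node names and slot counts are made up for the example):

```shell
# Fabricated PE_HOSTFILE contents: "hostname slots queue processor-range"
printf 'xs01 16 xs2.q <NULL>\nxs02 16 xs2.q <NULL>\n' > pe_hostfile.example
export OMP_NUM_THREADS=1
awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' pe_hostfile.example
# prints:
#   xs01:16
#   xs02:16
```

With OMP_NUM_THREADS=1 each slot becomes one MPI rank; in a hybrid run the division reduces the rank count per node accordingly.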
- Group 9
#$ -S /bin/bash
#$ -cwd
#$ -q xi1.q
#$ -pe x16 16
#$ -N JOB_NAME
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_ADJUST_ALLGATHERV=2
export OMP_NUM_THREADS=1
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 10
#$ -S /bin/bash
#$ -cwd
#$ -q xh1.q
#$ -pe x24 48
#$ -N JOB_NAME
#$ -j y
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:ofa
export OMP_NUM_THREADS=1
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 11
#$ -S /bin/bash
#$ -cwd
#$ -q xh2.q
#$ -pe x24 48
#$ -N JOB_NAME
#$ -j y
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:ofa
export OMP_NUM_THREADS=1
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 13
#$ -S /bin/bash
#$ -cwd
#$ -q xb1.q
#$ -pe x32 32
#$ -N JOB_NAME
#$ -j y
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:ofa
export OMP_NUM_THREADS=1
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
- Group 14
#$ -S /bin/bash
#$ -cwd
#$ -q x17.q
#$ -pe x32 32
#$ -N JOB_NAME
#$ -j y
module load intel/2020.2.254
module load intelmpi/2020.2.254
# Above settings should be consistent with those used in compilation.
MPI_COMMAND=mpirun
export I_MPI_PIN=1
export I_MPI_FABRICS=shm:dapl
export OMP_NUM_THREADS=1
cat $PE_HOSTFILE | awk '{ print $1":"$2/ENVIRON["OMP_NUM_THREADS"] }' > hostfile.$JOB_ID
$MPI_COMMAND ./a.out < input.dat > output.dat
** Compute nodes and queues [#gcad41b2]
| Group | Proc. | #CORE/#CPU | Submission node | queue | pe |
|4 | Xeon | 8/2 | smith/rafiki/tiamat | xe1.q | x8 |
|5 | Xeon | 12/2 | smith/rafiki/tiamat | xe2.q | x12 |
|7 | Core i7 Sandy Bridge | 6/1 | sb100 | all.q | x6 |
|8 | Xeon Sandy Bridge | 16/2 | smith/rafiki/tiamat | xs2.q | x16 |
|9 | Xeon Ivy Bridge | 16/2 | smith/rafiki/tiamat | xi1.q | x16 |
|10 | Xeon Haswell | 24/2 | smith/rafiki/tiamat | xh1.q | x24 |
|11 | Xeon Haswell | 24/2 | smith/rafiki/tiamat | xh2.q | x24 |
|13 | Xeon Broadwell | 32/2 | smith/rafiki/tiamat | xb1.q | x32 |
|14 | Xeon Skylake | 32/2 | smith/rafiki/tiamat | x17.q | x32 |
NOTE:
- To submit a job to the group 7 (sb100) nodes, log in to sb100 and execute qsub there.
- To submit a job to the nodes of the other groups, log in to smith, rafiki, or tiamat and execute qsub there.
*** Group 4, 5 "xe" system [#e02a591e]
The "xe" system is composed of the nodes with the Xeon CP...
*** Group 7 "sb100" system [#m1b620b9]
The "sb100" system is based on the Core i7 CPUs with the ...
*** Group 8 "xs" system [#j5c755e2]
The "xs" system is based on the Xeon CPUs with the Sandy-...
*** Group 9 "xi" system [#ldcc860e]
The "xi" system is based on the Xeon CPUs with the Ivy-br...
*** Group 10, 11, 12 "xh" system [#oc5448db]
The "xh" system is composed of the nodes with 2 Xeon CPUs...
*** Group 13 "xb" system [#h9aaaca9]
The "xb" system is composed of the nodes with 2 Xeon Broa...
*** Group 14 "x17" system [#h004b4e7]
The "x17" system is composed of the nodes with 2 Xeon Sky...
** Network structure [#pa8ce3cd]
- In the following diagram, "|" indicates a network connection and "[ ]" a node name.
+ Engineering intranet, ODINS network
|
| Backbone network( no access outside of engin...
| |
+- [smith] -----+ 133.1.116.161...
+- [rafiki] ----+ 133.1.116.162...
+- [tiamat] ----+ 133.1.116.211...
| |
| +-- [xe00], [xe01] Calc. node, g...
| +-- [xe02]-[xe06] Calc. node, g...
| |
| +-- [xs01]-[xs18] Calc. node, g...
| |
| +-- [xi01]-[xi12] Calc. node, g...
| |
| +-- [xh01]-[xh17]
| +-- [xh19]-[xh34] Calc. node, g...
| +-- [xh18],[xh35]-[xh43] Calc. node, g...
| +-- [xb01]-[xb14] Calc. node, g...
| +-- [x1701]-[x1706] Calc. node, g...
| |
| |
+- [sb100] -----+ 133.1.116.165...
|
+-- [sb101]-[sb120] Calc. node, g...