Smith †
"Smith" is a computer cluster based on Intel and Intel-compatible CPUs.

Login nodes †
To use the "Smith" system, log in to one of the following nodes:
To use the "sb100" system, use the following node:
How to log in to the login node †
To log in to "smith", type
 $ ssh -l [userID] 133.1.116.161
or
 $ ssh [userID]@133.1.116.161
To enable X11 forwarding, use
 $ ssh -Y -l [userID] 133.1.116.161
or
 $ ssh -Y [userID]@133.1.116.161
Currently, you will see the following message upon login:
 -bash: /usr/local/g09/D01/g09/bsd/g09.profile: Permission denied
but in most cases it does not affect your work.
NOTE: When you log in for the first time, change your initial password by typing
 $ yppasswd

How to compile and run the program †
In the latest environment (as of October 2020), compilers and libraries are managed with modules. To check the available modules, type
 $ module available
and to load specific modules, type, for example,
 $ module load intel/2020.2.254
 $ module load intelmpi/2020.2.254
 $ module load python/3.8
Note that modules loaded this way last only for the current session; to load them automatically at login, add the following lines to ~/.bashrc:
 module load intel/2020.2.254
 module load intelmpi/2020.2.254
 module load python/3.8
Make sure that the old settings are deleted and/or commented out, as in:
 # source /home/opt/settings/2017.4/intel-compiler.sh
 # source /home/opt/settings/2017.4/intel-mpi.sh
Also make sure to load the same modules in your job script.

How to submit your jobs †
To execute your program, use the queueing system, usually via a job script (see below). For instance, to execute a script "job.sh" on a node (24 cores) in group 10, type
 $ qsub -q xh1.q -pe x24 24 job.sh
Note that the group and the number of cores can also be specified in the job script. To see the job status, type
 $ qstat
To see the job status of a specific user, type
 $ qstat -u [user ID]
To cancel a job, use
 $ qdel [job ID]
where the job ID can be obtained with qstat (the number appearing in the first column).

Examples of job script †
In the following, examples for each group (queue) are listed. With these scripts, you just type
 $ qsub job.sh
and do not have to specify the queue group and number of processors explicitly.
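As a starting point, a minimal job script for group 10 (the xh1.q queue with the x24 parallel environment, matching the qsub example above) might look as follows. This is a sketch, not an official template: the module versions are the ones listed above, while the job name "myjob" and the executable "./a.out" are placeholders to be replaced with your own.

```shell
#!/bin/bash
#$ -cwd               # run the job in the directory it was submitted from
#$ -q xh1.q           # queue for group 10 (same as the qsub example above)
#$ -pe x24 24         # parallel environment and number of cores
#$ -N myjob           # job name (placeholder)

# Load the same modules as in ~/.bashrc (see above)
module load intel/2020.2.254
module load intelmpi/2020.2.254

# Run the program; "./a.out" is a placeholder for your own executable,
# and $NSLOTS is set by the queueing system to the number of allocated cores
mpirun -np $NSLOTS ./a.out
```

With the queue and parallel environment written in the script like this, the job can be submitted simply as `qsub job.sh`.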
Computer nodes and queues †
NOTE:
Group 4, 5 "xe" system †
The "xe" system is composed of nodes with Xeon CPUs, which have 2 CPUs (8 or 12 cores) per node. The parallel environments are x8 and x12.

Group 7 "sb100" system †
The "sb100" system is based on Core i7 CPUs with the Sandy Bridge architecture. Each node has 1 CPU (6 cores) with 16 GB memory. Fast calculations are possible thanks to the AVX instructions. The parallel environment is x6.

Group 8 "xs" system †
The "xs" system is based on Xeon CPUs with the Sandy Bridge architecture. Each node has 1 CPU (6 cores) with 32 GB memory. Fast calculations are possible thanks to the AVX instructions. The parallel environment is x16.

Group 9 "xi" system †
The "xi" system is based on Xeon CPUs with the Ivy Bridge architecture. Each node has 2 CPUs (16 cores) with 128 GB memory. Fast calculations are possible thanks to the AVX instructions. It is recommended to use this system for Gaussian calculations. The parallel environment is x16.

Group 10, 11, 12 "xh" system †
The "xh" system is composed of nodes with 2 Xeon CPUs (24 cores in total) and 64 GB memory. The parallel environment is x24.

Group 13 "xb" system †
The "xb" system is composed of nodes with 2 Xeon Broadwell CPUs (32 cores in total) and 64 GB memory. The parallel environment is x32.

Group 14 "x17" system †
The "x17" system is composed of nodes with 2 Xeon Skylake CPUs (32 cores in total) and 64 GB memory. The parallel environment is x32.

Network structure †