If you have problems during the execution of MRCC, please attach the output with an adequate description of your case as well as the following:
  • the way mrcc was invoked
  • the way build.mrcc was invoked
  • the output of build.mrcc
  • compiler version (for example: ifort -V, gfortran -v)
  • BLAS/LAPACK versions
  • gcc and glibc versions

This information really helps us during troubleshooting :)
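For reference, one way to collect the version details listed above is a short script like the following (the compiler names are illustrative; adjust them to whichever compilers you actually used to build MRCC):

```shell
# Collect toolchain details for an MRCC bug report.
# gfortran/ifort, gcc, and ldd are the tools named above;
# swap in your own compiler if you used a different one.
{
  echo "== Fortran compiler =="
  gfortran --version 2>/dev/null || ifort -V 2>&1
  echo "== gcc =="
  gcc --version
  echo "== glibc =="
  ldd --version | head -n 1
} > versions.txt
cat versions.txt
```

Attaching the resulting versions.txt alongside the MRCC output covers most of the items in the list.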

memory size for an openmp or openmpi job

  • yanmeiyu
  • Topic Author
  • Senior Member
3 years 9 months ago #929 by yanmeiyu
Dear Mrcc experts,

I am running an OpenMP job with np cores on a single node, and I set mem=80GB, but the running job asks for 80 GB × np of memory, a value far larger than a one-core calculation needs. Is this how MRCC runs in OpenMP or OpenMPI mode, or did I make a mistake in my PBS shell script? Thank you very much!

Best regards,

Yanme


  • nagypeter
  • Premium Member
  • MRCC developer
3 years 9 months ago #930 by nagypeter
Replied by nagypeter on topic memory size for an openmp or openmpi job
Dear Yanme,

There is some minor increase in the memory demand for OpenMP, but much less than np*mem: most of the large memory blocks are shared by the OpenMP threads. You should post more details (e.g. input file, PBS error message, output...) so we can better understand what you need.

For OpenMPI the memory is mostly replicated, so the total memory allocation per node indeed increases almost linearly with the number of MPI processes on the SAME node, but not with the total number of MPI processes. Usually 1 or 2 MPI processes per node, combined with OpenMP to use all cores of the node, works well.
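As a sketch of the hybrid setup described above (the mpirun flags are Open MPI conventions and the launch line is an assumption about your installation, not taken from this thread):

```shell
# Hypothetical hybrid MPI+OpenMP launch: 2 nodes with 20 cores each,
# 2 MPI processes per node, 10 OpenMP threads per process.
export OMP_NUM_THREADS=10            # threads per MPI process
mpirun -np 4 --map-by ppr:2:node dmrcc > mrcc.out
```

With this layout the per-node memory footprint grows with the 2 MPI processes on that node (replicated data), while the 10 threads inside each process share their large memory blocks.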

Let us know if this does not resolve the issue.
Best wishes,
Peter


  • yanmeiyu
  • Topic Author
  • Senior Member
3 years 9 months ago #931 by yanmeiyu
Replied by yanmeiyu on topic memory size for an openmp or openmpi job
Dear MRCC experts,

Thank you for your quick reply. Attached are our MRCC output file and PBS file. We tried several approaches: a small mem with an appropriate np so that np*mem stays under the hard memory limit (192 GB) of our machine, in which case the code has to run with the out-of-core algorithm; or a large mem, in which case the run stopped due to lack of memory. All this information can be seen in Ar.outN1G800Err.txt.

Best regards,

Yanmei

File Attachment:

File Name: mrcc.pbs.txt
File Size: 0 KB

File Attachment:

File Name: Ar.outN1G800Err.txt
File Size: 28 KB


  • nagypeter
  • Premium Member
  • MRCC developer
3 years 9 months ago #932 by nagypeter
Replied by nagypeter on topic memory size for an openmp or openmpi job
Dear Yanmei,

We do not have hands-on experience with PBS, but some of our users run MRCC successfully with PBS, so it should work.

Again, based only on a quick search and minimal PBS experience:
nodes=1:ppn=1
appears to request only 1 core on 1 node. For a job with 10 OpenMP threads, don't you need to allocate 10 cores?
You should probably communicate the 80 GB memory request to PBS too; it seems to be missing from the PBS file.

Have you tried running MRCC without the scheduler, e.g. directly on a compute node?
If that works, you need to set up your PBS file so that the job requests 1 node, 10 cores, and 80 GB of memory.
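A minimal PBS script along these lines might look as follows (the #PBS directives are standard PBS/Torque resource requests; the exact values, job name, and the dmrcc call are illustrative assumptions, not a verified setup):

```shell
#!/bin/bash
#PBS -N mrcc_job
#PBS -l nodes=1:ppn=10      # 1 node, 10 cores for the OpenMP threads
#PBS -l mem=85gb            # a bit more than the 80 GB set via mem= in MINP
#PBS -l walltime=24:00:00

cd "$PBS_O_WORKDIR"         # run in the directory the job was submitted from
export OMP_NUM_THREADS=10   # match ppn above
dmrcc > mrcc.out 2>&1
```

The key point is that the cores and memory requested from PBS must cover what MRCC itself will use; otherwise the scheduler may kill the job or confine it to fewer resources.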

Best wishes,
Peter


  • yanmeiyu
  • Topic Author
  • Senior Member
3 years 9 months ago #933 by yanmeiyu
Replied by yanmeiyu on topic memory size for an openmp or openmpi job
Dear Peter,

We did try nodes=1:ppn=10 and other core counts; the attached PBS file is just one example of what we tried. We are using a public HPC machine, which requires users to submit jobs via PBS scripts. We do not have this problem on our local machine, where we can submit jobs directly from a bash shell. Thank you!

Best regards,

Yanmei


  • nagypeter
  • Premium Member
  • MRCC developer
3 years 9 months ago #935 by nagypeter
Replied by nagypeter on topic memory size for an openmp or openmpi job
Dear Yanmei,

It is quite clear that your problem is with the PBS setup.
I suspect that you have to request the proper resources on the HPC machine via PBS; otherwise the job is not allowed to run properly. Your PBS input should specifically request 10 cores and 80 GB (or a bit more) of memory. I guess there is more than 8 GB per core available on the machines, which explains the success of the first few jobs with mem=8gb. But 80 GB per core is probably not allowed, and you need to request the 10 cores for the total 80 GB of memory.

As this is not a problem with MRCC, please contact the system administrator of the HPC machine for help with the PBS setup if you cannot solve it based on the above.

Best wishes,
Peter

