machines: eccc: update our machine files #725

Merged
merged 1 commit into CICE-Consortium:main on Jun 2, 2022

Conversation

phil-blain
Member

PR checklist

  • Suggest PR reviewers from list in the column to the right.
  • Please copy the PR test results link or provide a summary of testing completed below.
    base suite passes on all 4 new machines.
  • How much do the PR code changes differ from the unmodified code?
    no changes to code itself.
    • bit for bit
    • different at roundoff level
    • more substantial
  • Does this PR create or have dependencies on Icepack or any other models?
    • Yes
    • No
  • Does this PR add any new test cases?
    • Yes
    • No
  • Is the documentation being updated? ("Documentation" includes information on the wiki or in the .rst files from doc/source/, which are used to create the online technical docs at https://readthedocs.org/projects/cice-consortium-cice/. A test build of the technical docs will be performed as part of the PR testing.)
    • Yes
    • No, does the documentation need to be updated at a later time?
      • Yes
      • No
  • Please provide any additional information or relevant details below:

A few notes:
- We do need to request memory even on machines where we have exclusive
node access. 20 GB was chosen rather arbitrarily.
- We set umask to 022 to make the <jobname>.o and <jobname>.e files
readable by group and others.
- We use the minimal SSM package for the compiler and Intel MPI, but we
keep the setup using environment modules commented out in case we ever
need to tweak things (e.g. I_MPI_LIBRARY_KIND).
- We set OMP_STACKSIZE. Since d1e972a (Update OMP (CICE-Consortium#680),
2022-02-18), OpenMP threading is active in 'ice_transport_remap.F90',
and the default OpenMP stack size needs to be adjusted to avoid stack
overflows. We set it to 64 MB, as used for other machines. (A rough
sketch of these settings follows this list.)
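
To make the notes above concrete, here is a minimal sketch of the kind of lines involved. The directive syntax, module name and value shown are illustrative assumptions, not the literal contents of the machine files changed in this PR.

```csh
#!/bin/csh -f
# Request memory explicitly even with exclusive node access
# (20 GB chosen rather arbitrarily); a SLURM-style directive is shown,
# PBS-based machines would use the equivalent resource request.
#SBATCH --mem=20G

# Make the <jobname>.o / <jobname>.e files readable by group and others.
umask 022

# Larger per-thread stack for the OpenMP code active since CICE-Consortium#680;
# 64 MB, as on other machines.
setenv OMP_STACKSIZE 64M

# Environment-modules alternative kept commented out in case we ever need
# to tweak things (module name and value below are hypothetical):
# module load intel-compiler
# setenv I_MPI_LIBRARY_KIND release_mt
```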

Also, remove dead code setting 'CICE_ACCT'. This variable was last used
in 98e0307 (Update scripts, rename variables from CICE_ to ICE_ to be
more reusable in icepack., 2017-09-15), and so did not do anything for
any of the machines that were using it after that commit. Remove code in
machines env files that was setting it based on '~/.cice_proj'.
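
An illustrative sketch of the general shape of the code being removed; this is a reconstruction, not a quote of the actual deleted lines.

```csh
# Dead code of roughly this shape is dropped from the machine env files:
# CICE_ACCT was still being set from ~/.cice_proj, but nothing has read
# CICE_ACCT since the CICE_ -> ICE_ rename in 98e0307.
if (-e ~/.cice_proj) then
  setenv CICE_ACCT `head -1 ~/.cice_proj`
endif
```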

A few notes specific to 'gpsc3' (see the sketch after this list):

- Since we use an '--export' directive to choose which environment
variables SLURM exports to the job environment, SSMUSE_BASE and
SSMUSE_PATH are not present in the environment, and loading domains
without their full paths fails on csh, so we use full paths.
- We use the compiler package from main/opt instead of eccc/all/opt,
since we do not need the EC-specific variables to be set (and using
eccc/all/opt also leads to job failures since BASE_ARCH is not defined).
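
A rough sketch of the gpsc3-specific points. The '--export' list, the domain paths and the domain-loading command (elided here) are assumptions for illustration, not the literal contents of the PR.

```csh
# Only the listed variables reach the job environment, so SSMUSE_BASE and
# SSMUSE_PATH are not defined inside the job (the variable list is illustrative):
#SBATCH --export=HOME,USER,PATH

# Loading an SSM domain by short name relies on SSMUSE_PATH and fails under
# csh here, so the domain is referenced by its full path instead. The compiler
# package comes from main/opt rather than eccc/all/opt, so no EC-specific
# variables (e.g. BASE_ARCH) are required.
# short name (fails):  <load domain>  main/opt/intelcomp/<version>
# full path (works):   <load domain>  /fs/ssm/main/opt/intelcomp/<version>
```
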
phil-blain requested a review from apcraig on May 30, 2022 at 18:57
apcraig merged commit c334aee into CICE-Consortium:main on Jun 2, 2022
dabail10 pushed a commit to ESCOMP/CICE that referenced this pull request Oct 4, 2022
phil-blain added a commit to phil-blain/CICE that referenced this pull request Dec 12, 2022
phil-blain added a commit to phil-blain/CICE that referenced this pull request Feb 2, 2024