cime-testing-tools

Tools to make working with cime testing simpler.

Installation

To install the tools into ${HOME}/local/bin as symbolic links to this directory, 'cd' into the cime-testing-tools directory and type:

make install
make user-config

The tools require python >= 2.7. You will need to load the appropriate python module for your system. In addition, you will need to add ${HOME}/local/bin to your PATH. This can be done on a per-shell basis, or once for all shells.

In bash, edit ${HOME}/.bashrc

module load python
export PATH=${PATH}:${HOME}/local/bin

In csh, edit ${HOME}/.cshrc

module load python
setenv PATH ${PATH}:${HOME}/local/bin

Then close and reopen your terminal.
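
To verify the setup, check that the tools are found on your PATH (a quick sanity check; the expected result assumes the default install location used above):

which cime-tests.py
# expected: ${HOME}/local/bin/cime-tests.py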

Run tests

To see the test options, run:

cime-tests.py --help

There are three test suites available: clm, clm_short, and pop. The clm_short suite is a subset of the full clm suite, consisting of three simple tests of increasing complexity, replicated on all compilers. The same tests are run for clm45 and clm50. clm_short should be run and passing before running the full suite.

To launch a test suite:

# cd to your copy of the code
cime-tests.py --test-suite clm_short --baseline BASELINE_TAG

This will launch the clm_short test suite as defined in the configuration file saved to ${HOME}/.cime/cime-tests.cfg. BASELINE_TAG should be replaced with the baseline tag you want to compare your branch against. It will generally be a clm trunk tag, for example: clm4_5_7_r163.

Note this command can be run from any directory in the cesm/clm source tree.

To see what commands will be run without actually launching the tests, append --dry-run to the above command.
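
For example, combining the flags already shown above:

cime-tests.py --test-suite clm_short --baseline clm4_5_7_r163 --dry-run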

Check test results

cime-tests.py sets the test root to ${SCRATCH}/tests-${test_suite}-${date_stamp}. For example, if you ran the 'clm_short' test suite on September 10, 2015 at 5:23pm, the test root would be 'tests-clm_short-20150910-1723'.
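
If you have run a suite several times, a quick way to find the most recent test root (a sketch using standard shell tools; adjust the suite name as needed):

ls -dt ${SCRATCH}/tests-clm_short-* | head -n 1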

To check test results, cd into the appropriate test root directory. Type which cs.status. If the result isn't ~/local/bin/cs.status, then you will need to replace the 'cs.status' commands below with the full path.
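
For example, using the test root from above:

cd ${SCRATCH}/tests-clm_short-20150910-1723
which cs.status
# expected: ~/local/bin/cs.status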

To see the status of all tests, in the test root, run:

cs.status -terse -all

This will output just the 'interesting' test results by removing all the passes and expected failures. If you are expecting additional failures because of changes you made, you can use grep to remove the 'uninteresting' results. For example, to remove namelist comparison failures because you changed a namelist flag:

cs.status -terse -all | grep -v nlcomp
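
Filters can be chained with additional grep -v calls. The second pattern below is a hypothetical example; substitute whatever result types you expect and want to hide:

cs.status -terse -all | grep -v nlcomp | grep -v tputcomp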

Cleaning up test results

The cime test suite sprays files all over the file system and doesn't provide any method of cleaning them up. Relying on rm -rf *some_glob* is incredibly dangerous and can lead to accidentally deleting important files. One can quickly bump up against file system quotas unless the test files are cleaned up regularly.

clobber-cime-tests.py is an interactive tool that will attempt to remove all files generated by a test suite. It errs on the side of safety, leaving files in place if user input isn't exactly as requested.

To clean up a test suite, run:

clobber-cime-tests.py --test-spec ${SCRATCH}/tests-${test_suite}-${date_stamp}/testspec-*

The script will interactively walk through deleting files associated with each testspec file in the test root. The user is prompted before removing files associated with each testspec, and again before removing the test-root directory. The test root should not be removed unless all testspec and test case directories have been removed. If any test files remain, the corresponding test run directories and archives cannot be removed automatically.
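
Before clobbering, it can help to list the testspec files the script will find, using the same glob as the command above:

ls ${SCRATCH}/tests-${test_suite}-${date_stamp}/testspec-*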

Adding a new test

The steps to add a new test to the clm test suite are:

  1. Create the 'test mods' that modify a compset to test different functionality.

    Test mods are directories that contain xmlchange commands and user_nl_clm namelist settings. Test mods for clm are at: ${sandbox}/components/clm/cime_config/testdefs/testmods_dirs/clm

    All test mods can 'inherit' changes from other test mods (this allows us to avoid duplicating and manually changing a lot of boilerplate settings).

    1. Create a new directory for your testmod. Do not use any hyphens in the name. Pick a short and meaningful name related to the functionality you are testing. For this example newtestmod is used.

          cd ${sandbox}/components/clm/cime_config/testdefs/testmods_dirs/clm
          mkdir newtestmod
          cd newtestmod
      
    2. All new test mods should inherit from 'default'. Do this by creating a file named 'include_user_mods' in your directory. It should contain one line, the path to the default directory: ../default.

    3. If your test requires any xmlchange commands, add them to the file 'shell_commands' in your directory. Note that all calls to xmlchange must begin with a './'.

    4. If your test requires namelist settings, add them to the file 'user_nl_clm' in your directory.

    In the following steps your new test mod will be clm-newtestmod. A combined sketch of creating these files is shown below.
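
    The following sketch pulls the steps above together. The xmlchange variable and namelist setting are hypothetical placeholders; substitute the settings your test actually needs, and note that the exact xmlchange syntax depends on your cime version:

        cd ${sandbox}/components/clm/cime_config/testdefs/testmods_dirs/clm
        mkdir newtestmod
        cd newtestmod

        # inherit the default test mod settings (step 2)
        echo "../default" > include_user_mods

        # xmlchange commands, each starting with './' (step 3)
        # SOME_XML_VARIABLE is a placeholder, not a real variable
        echo "./xmlchange SOME_XML_VARIABLE=value" >> shell_commands

        # namelist settings (step 4); hist_nhtfrq is just an example setting
        echo "hist_nhtfrq = 0" >> user_nl_clm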

  2. Decide what kind of test you want to add.

    1. Good tests are short and fast but still test the new functionality.

      1. Pick the coarsest grid that will test your new functionality. Usually this should be f10_f10 unless you know the code needs a finer grid.

      2. Pick the shortest simulation time that will exercise the new code. Usually a 3 day test is good enough unless there is some functionality that is only active after a set period of time, e.g. crops.

    2. Decide on the type of test. Most new tests should be exact restart tests with a pe layout change. Debug tests turn on extra error checking for floating point issues. Unless the test runs very long because of a grid or time-length issue, use an "ERP_D_P15x2_Ld3" test.

    3. Choose the compset you want to base your test on. This will usually be an ICLM45BGC, ICLM45BGCCROP or ICRUCLM50BGC.

    4. Choose the machine and compiler. Unless you have a specific reason to prefer a machine or compiler, choose yellowstone intel.

    5. The final test you want to add based on the above will be something like:

      ERP_D_P15x2_Ld3.f10_f10.ICRUCLM50BGC.yellowstone_intel.clm-newtestmod

      This is:

      • exact restart test with
      • debugging turned on
      • threading/pe layout change
      • three day run
      • f10_f10 grid
      • base compset ICRUCLM50BGC
      • modified by clm-newtestmod
      • running on yellowstone_intel
  3. Create your test manually to verify it works:

        cd ${sandbox}/cime/scripts
        ./create_test -testname ERP_D_P15x2_Ld3.f10_f10.ICRUCLM50BGC.yellowstone_intel.clm-newtestmod -testid debug-new-test
        cd ERP_D_P15x2_Ld3.f10_f10.ICRUCLM50BGC.yellowstone_intel.clm-newtestmod.debug-new-test
        # check the case docs to verify that the case is set up as you intended
        execca ./*.test_build
        ./*.submit
        # when the test is finished
        cat TestStatus
        # look at the simulation results to verify they are what you expected.
        # This may be hard since the test is so short.
    
  4. Add your new test to the test list so it is run automatically every time the tests are run.

    NOTE: The current cime xml format for tests is horrible and not really human editable. If something goes wrong and requires manually editing the list, it is simplest to ask an SE for help.

    There is a tool in cime/scripts, manage_testlists, that is used to manage test lists in text format.

    1. Dump the test list to text:

          cd ${sandbox}/cime/scripts
          ./manage_testlists -machine yellowstone -compiler intel -category aux_clm45 -component clm \
              -query -outputlist > aux_clm45-yellowstone_intel.txt
      
    2. Add your new test to the end of the file. It is the same string you used after '-testname' in the manual step:

          cat >> aux_clm45-yellowstone_intel.txt <<EOF
          ERP_D_P15x2_Ld3.f10_f10.ICRUCLM50BGC.yellowstone_intel.clm-newtestmod
          EOF
      
    3. Import the test list back into xml:

          ./manage_testlists -machine yellowstone -compiler intel -category aux_clm45 -component clm \
              -synclist -file aux_clm45-yellowstone_intel.txt
      
    4. Read the screen output to verify that the correct number of new tests was added. You may also want to diff the new file against the old one to verify that your new test is there, as in the sketch below.
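
      A hypothetical check; the xml file names depend on what manage_testlists reports on screen, so adjust them to match:

          diff testlist_clm.xml.new ${sandbox}/components/clm/cime_config/testdefs/testlist_clm.xml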

    5. Copy the new testlist to the correct location according to the directions on the screen.
