Tests

Tag: design.mps.tests
Author: Richard Kistruck
Date: 2008-12-04
Status: incomplete design
Revision: $Id$
Copyright: See Copyright and License.
Index terms: pair: tests; design

Introduction

.intro: This document contains a guide to the Memory Pool System tests.

.readership: This document is intended for any MPS developer.

Running tests

.run: Run these commands:

cd code
make -f <makefile> VARIETY=<variety> <target>  # Unix
nmake /f <makefile> VARIETY=<variety> <target> # Windows

where <makefile> is the appropriate makefile for the platform (see manual/build.txt), <variety> is the variety (see design.mps.config.var) and <target> is the collection of tests (see .target below). For example:

make -f lii6ll VARIETY=cool testrun

If <variety> is omitted, tests are run in both the cool and hot varieties.
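
For example, using the lii6ll makefile as above, the following command builds and runs the smoke tests (see .target.testrun below) in both varieties:

make -f lii6ll testrun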

Test targets

.target: The makefiles provide the following targets for common sets of tests:

.target.testall: The testall target runs all test cases (even if known to fail).

.target.testrun: The testrun target runs the "smoke tests". This subset of tests provides quick checks that the MPS is working. They run quickly enough for it to be practical to run them every time the MPS is built.

.target.testci: The testci target runs the continuous integration tests, the subset of tests that are expected to pass in full-featured build configurations.

.target.testansi: The testansi target runs the subset of the tests that are expected to pass in the generic ("ANSI") build configuration (see design.mps.config.opt.ansi).

.target.testpollnone: The testpollnone target runs the subset of the tests that are expected to pass in the generic ("ANSI") build configuration (see design.mps.config.opt.ansi) with the option CONFIG_POLL_NONE (see design.mps.config.opt.poll).

.target.testratio: The testratio target compares the performance of the hot and rash varieties. See .ratio.

.target.testscheme: The testscheme target builds the example Scheme interpreter (example/scheme) and runs its test suite.

.target.testmmqa: The testmmqa target runs the tests in the MMQA test suite. See .mmqa.

Test features

.randomize: Each time a test case is run, it randomly chooses some of its parameters (for example, the sizes of objects, or how many links to create in a graph of references). This allows a fast test to cover many cases over time.

.randomize.seed: The random numbers are chosen pseudo-randomly based on a seed initialized from environmental data (the time and the processor cycle count). The seed is reported at test startup, for example:

code$ xci6ll/cool/apss
xci6ll/cool/apss: randomize(): choosing initial state (v3): 2116709187.
...
xci6ll/cool/apss: Conclusion: Failed to find any defects.

Here, the number 2116709187 is the random seed.

.randomize.specific-seed: Each test can be run with a specified seed by passing the seed on the command line, for example:

code$ xci6ll/cool/apss 2116709187
xci6ll/cool/apss: randomize(): resetting initial state (v3) to: 2116709187.
...
xci6ll/cool/apss: Conclusion: Failed to find any defects.

.randomize.repeatable: This ensures that the single-threaded tests are repeatable. (Multi-threaded tests are not repeatable even if the same seed is used; see job003719.)

Test list

See manual/code-index for the full list of automated test cases.

.test.finalcv: Registers objects for finalization, makes them unreachable, deregisters them, etc. Churns to provoke minor (nursery) collection.

.test.finaltest: Creates a large binary tree, and registers every node. Drops the top reference, requests collection, and counts the finalization messages.

.test.zcoll: Collection scheduling, and collection feedback.

.test.zmess: Message lifecycle and finalization messages.

Test database

.db: The automated tests are described in the test database (tool/testcases.txt).

.db.format: This is a self-documenting plain-text database which gives, for each test case, its name and an optional set of features. For example, the feature =P means that the test case requires polling to succeed, and is therefore expected to fail in build configurations without polling (see design.mps.config.opt.poll).

.db.format.simple: The format must be very simple because the test runner on Windows is written as a batch file (.bat), in order to avoid depending on any tools that did not come as standard with Windows XP, and batch files are inflexible. (But note that we no longer support Windows XP, so it would now be possible to rewrite the test runner in PowerShell if we thought that made sense.)

.db.testrun: The test runner (tool/testrun.sh on Unix or tool/testrun.bat on Windows) parses the test database to work out which tests to run according to the target. For example, the testpollnone target must skip all test cases with the P feature.
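
To illustrate the kind of selection involved, here is a sketch only, not the logic of tool/testrun.sh; the comment syntax and the exact layout of a database line are assumptions. A runner for the testpollnone target might do something like:

# Hypothetical sketch: run every test case whose database entry does not
# carry the =P feature, skipping comment lines. $PFM and $VARIETY stand
# for the platform and variety directories, e.g. xci6ll and cool.
grep -v '^#' tool/testcases.txt | grep -v '=P' |
while read -r testcase features; do
    "$PFM/$VARIETY/$testcase"
done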

Test runner

.runner.req.automated: The test runner must execute without user interaction, so that it can be used for continuous integration.

.runner.req.output.pass: Test cases are expected to pass nearly all the time, and in these cases we almost never want to see the output, so the test runner must suppress the output for passing tests.

.runner.req.output.fail: However, if a test case fails then the test runner must preserve the output from the failing test, including the random seed (see .randomize.seed), so that this can be analyzed and the test repeated. Moreover, it must print the output from the failing test, so that if the test is being run on a continuous integration system like Travis, then the output of the failing tests is included in the failure report. (See job003489.)
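
A minimal sketch of these output requirements follows. It is not tool/testrun.sh itself: the log directory is hypothetical, and it uses the exit status for simplicity, whereas the real runner recognizes the "Failed to find any defects" line in the test output (see .new.source).

# Hypothetical sketch of .runner.req.output for a single test executable $test.
logdir=/tmp/mps-test-logs    # assumed location for preserved logs
mkdir -p "$logdir"
log="$logdir/$(basename "$test").log"
if "$test" > "$log" 2>&1; then
    # Passing test: suppress and discard the output.
    rm "$log"
else
    # Failing test: print the output (so that it appears in the CI failure
    # report) and keep the log, random seed included, for later analysis.
    cat "$log"
fi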

Performance test

.ratio: The testratio target checks that the hot variety is not too much slower than the rash variety. A failure of this test usually indicates that there are assertions on the critical path using AVER instead of AVER_CRITICAL (and so on). The test works by running gcbench for the AMC pool class and djbench for the MVFF pool class, in both the hot variety and the rash variety, computing the ratio of CPU time taken in the two varieties, and checking that this falls under an acceptable limit.

.ratio.cpu-time: Note that we use the CPU time (reported by /usr/bin/time) and not the elapsed time (as reported by the benchmark) because we want to be able to run this test on continuous integration machines that might be heavily loaded.

.ratio.platform: This target is currently supported only on Unix platforms using GNU Makefiles.
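
The following sketch conveys the idea of the comparison. It is not the actual makefile rule: the benchmark paths are placeholders, the arguments selecting the pool class are omitted, and the acceptable limit is not shown.

# Hypothetical sketch of .ratio: compare the user CPU time of the same
# benchmark built in the hot and rash varieties.
# /usr/bin/time -p writes its report to stderr; the benchmark's own
# output is discarded.
hot=$(/usr/bin/time -p xci6ll/hot/gcbench 2>&1 >/dev/null | awk '/^user/ { print $2 }')
rash=$(/usr/bin/time -p xci6ll/rash/gcbench 2>&1 >/dev/null | awk '/^user/ { print $2 }')
echo "hot/rash CPU time ratio: $(echo "$hot / $rash" | bc -l)"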

Adding a new test

To add a new test to the MPS, carry out the following steps. (The procedure uses the name "newtest" throughout but you should of course replace this with the name of your test case.)

.new.source: Create a C source file in the code directory, typically named "newtest.c". In addition to the usual copyright boilerplate, it should contain a call to testlib_init() (this ensures reproducibility of pseudo-random numbers), and a printf() reporting the absence of defects (this output is recognized by the test runner):

#include <stdio.h>
#include "testlib.h"

int main(int argc, char *argv[])
{
  testlib_init(argc, argv);
  /* test happens here */
  printf("%s: Conclusion: Failed to find any defects.\n", argv[0]);
  return 0;
}

.new.unix: If the test case builds on the Unix platforms (FreeBSD, Linux and macOS), edit code/comm.gmk adding the test case to the TEST_TARGETS macro, and adding a rule describing how to build it, typically:

$(PFM)/$(VARIETY)/newtest: $(PFM)/$(VARIETY)/newtest.o \
        $(TESTLIBOBJ) $(PFM)/$(VARIETY)/mps.a

.new.windows: If the test case builds on Windows, edit code/commpre.nmk adding the test case to the TEST_TARGETS macro, and edit code/commpost.nmk adding a rule describing how to build it, typically:

$(PFM)\$(VARIETY)\newtest.exe: $(PFM)\$(VARIETY)\newtest.obj \
        $(PFM)\$(VARIETY)\mps.lib $(FMTTESTOBJ) $(TESTLIBOBJ)

.new.macos: If the test case builds on macOS, open code/mps.xcodeproj/project.pbxproj for edit and open the project in Xcode, then:

  1. If the project navigator is not visible at the left, select View → Navigators → Show Project Navigator (⌘1).
  2. Right click on the Tests folder and choose Add Files to "mps"…, select code/newtest.c, and click Add.
  3. Move the new file into alphabetical order in the Tests folder.
  4. Click on "mps" at the top of the project navigator to reveal the targets.
  5. Right click on an existing test target that is similar to the test case you are adding and select Duplicate (⌘D).
  6. Select the new target and change its name to "newtest".
  7. Select the "Build Phases" tab and check that "Dependencies" contains the mps library, and that "Compile Sources" contains newtest.c and testlib.c.
  8. Close the project.

.new.database: Edit tool/testcases.txt and add the new test case to the database. Use the appropriate flags to indicate the properties of the test case; these flags are used by the test runner to select the appropriate sets of test cases. For example, tests marked =P are expected to fail in build configurations without polling (see design.mps.config.opt.poll).

.new.manual: Edit manual/source/code-index.rst and add the new test case to the "Automated test cases" section.

Continuous integration

.ci: The MPS uses the following systems for continuous integration:

.ci.travis: The commercial Travis continuous integration service runs ./configure && make install && make test (which exercises the testci target on the platforms lii6gc, lii6ll, and xci6ll, and the testansi and testpollnone targets on the platforms anangc and ananll) whenever there is a commit to the mps GitHub repository. See the Ravenbrook/mps project on Travis for a build history.

.ci.travis.config: Travis is configured using the .travis.yml file at the top level of the project.

.ci.jenkins: An instance of Jenkins running on "gannet" (a Windows PC at the Ravenbrook office) runs the testci target on platforms w3i3mv and w3i6mv whenever there is a commit to the mps GitHub repository.

.ci.jenkins.config: There are instructions for installing and configuring Jenkins in [GDR_2016-04-14], [GDR_2016-04-15] and [GDR_2016-04-20].

MMQA tests

.mmqa: The Memory Management Quality Assurance test suite is another suite of test cases.

.mmqa.why: The existence of two test suites originates in the departmental structure at Harlequin Ltd where the MPS was originally developed. Tests written by members of the Memory Management Group went into the code directory along with the MPS itself, while tests written by members of the Quality Assurance Group went into the test directory. (Conway's Law states that "organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations" [Conway_1968].)

.mmqa.run: See test/README for how to run the MMQA tests.

Other tests

.coverage: The program tool/testcoverage compiles the MPS with coverage enabled, runs the smoke tests (.target.testrun) and outputs a coverage report.

.opendylan: The program tool/testopendylan pulls Open Dylan from GitHub and builds it against the MPS.

References

[Conway_1968]"How do Committees Invent?"; Melvin E. Conway; Datamation 14:5, pp. 28–31; April 1968; <http://www.melconway.com/Home/Committees_Paper.html>
[GDR_2016-04-14]"Jenkins setup and configuration procedure"; Gareth Rees; Ravenbrook Limited; 2016-04-14; <https://info.ravenbrook.com/mail/2016/04/14/17-19-08/0/>.
[GDR_2016-04-15]"Re: Jenkins setup and configuration procedure"; Gareth Rees; Ravenbrook Limited; 2016-04-15; <https://info.ravenbrook.com/mail/2016/04/15/09-03-53/0/>.
[GDR_2016-04-20]"Re: Jenkins setup and configuration procedure"; Gareth Rees; Ravenbrook Limited; 2016-04-20; <https://info.ravenbrook.com/mail/2016/04/20/13-26-33/0/>.

Document History

  • 2008-12-04 Richard Kistruck. Create. Describe finalization tests.
  • 2010-03-03 Richard Kistruck. Correction: it's fin1658a.c and job001658, not 1638.
  • 2010-03-03 Richard Kistruck. Add zmess.c, zcoll.c. zmess.c subsumes and replaces fin1658a.c.
  • 2013-05-23 GDR Converted to reStructuredText.
  • 2018-06-15 GDR Procedure for adding a new smoke test.

Copyright and License

Copyright © 2013–2020 Ravenbrook Limited.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.