X-Git-Url: https://asedeno.scripts.mit.edu/gitweb/?a=blobdiff_plain;f=t%2FREADME;h=892d443f63428aea6d6b92458bdaa4575bc46da0;hb=b3ff808b714fd8fc5e4d2770720398e5dc7d27f9;hp=410499a09645634349d7685e728228f238021c34;hpb=aca35505db3706c87d391dd213e856f73edfd42c;p=git.git

diff --git a/t/README b/t/README
index 410499a09..892d443f6 100644
--- a/t/README
+++ b/t/README
@@ -50,6 +50,12 @@ prove and other harnesses come with a lot of useful options. The
     # Repeat until no more failures
     $ prove -j 15 --state=failed,save ./t[0-9]*.sh
 
+You can give DEFAULT_TEST_TARGET=prove on the make command line (or
+define it in config.mak) to cause "make test" to run tests under prove.
+GIT_PROVE_OPTS can be used to pass additional options, e.g.
+
+    $ make DEFAULT_TEST_TARGET=prove GIT_PROVE_OPTS='--timer --jobs 16' test
+
 You can also run each test individually from command line, like this:
 
     $ sh ./t3010-ls-files-killed-modified.sh
@@ -259,14 +265,23 @@ Do:
 	   test ...
 
    That way all of the commands in your tests will succeed or fail. If
-   you must ignore the return value of something (e.g., the return
-   after unsetting a variable that was already unset is unportable) it's
-   best to indicate so explicitly with a semicolon:
+   you must ignore the return value of something, consider using a
+   helper function (e.g. use sane_unset instead of unset, in order
+   to avoid the unportable return value of unsetting a variable that
+   was already unset), or prepending the command with test_might_fail
+   or test_must_fail.
 
-	unset HLAGH;
-	git merge hla &&
-	git push gh &&
-	test ...
+ - Check the test coverage for your tests. See the "Test coverage"
+   section below.
+
+   Don't blindly follow test coverage metrics; they're a good way to
+   spot if you've missed something. If a new function you added
+   doesn't have any coverage, you're probably doing something wrong,
+   but having 100% coverage doesn't necessarily mean that you tested
+   everything.
+
+   Tests that are likely to smoke out future regressions are better
+   than tests that just inflate the coverage metrics.
 
 Don't:
 
@@ -307,9 +322,21 @@ Keep in mind:
 Skipping tests
 --------------
 
-If you need to skip all the remaining tests you should set skip_all
-and immediately call test_done. The string you give to skip_all will
-be used as an explanation for why the test was skipped. for instance:
+If you need to skip tests you should do so by using the three-arg form
+of the test_* functions (see the "Test harness library" section
+below), e.g.:
+
+    test_expect_success PERL 'I need Perl' "
+        '$PERL_PATH' -e 'hlagh() if unf_unf()'
+    "
+
+The advantage of skipping tests like this is that platforms that don't
+have the PERL prerequisite and other optional dependencies get an
+indication of how many tests they're missing.
+
+If the test code is too hairy for that (i.e. does a lot of setup work
+outside test assertions) you can also skip all remaining tests by
+setting skip_all and immediately calling test_done:
 
 	if ! test_have_prereq PERL
 	then
@@ -317,6 +344,9 @@ be used as an explanation for why the test was skipped. for instance:
 	    test_done
 	fi
 
+The string you give to skip_all will be used as an explanation for why
+the test was skipped.
+
 End with test_done
 ------------------
 
@@ -350,6 +380,12 @@ library for your script to use.
 	test_expect_success TTY 'git --paginate rev-list uses a pager' \
 	    ' ... '
 
+   You can also supply a comma-separated list of prerequisites, in the
+   rare case where your test depends on more than one:
+
+	test_expect_success PERL,PYTHON 'yo dawg' \
+	    ' test $(perl -E 'print eval "1 +" . qx[python -c "print 2"]') == "4" '
+
 - test_expect_failure [<prereq>] <message> <script>
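
The helper functions named in the patch above (sane_unset, test_might_fail,
test_must_fail) all come from t/test-lib.sh. The following is only a rough
sketch of how they keep an &&-chain intact, not part of the patch; the HLAGH
variable and the illustrative.key config key are made up for illustration.

    #!/bin/sh

    test_description='sketch: return-value helpers instead of bare semicolons'

    . ./test-lib.sh

    test_expect_success 'setup' '
        echo content >file &&
        git add file &&
        git commit -m initial
    '

    test_expect_success 'keep the && chain intact' '
        # unset can return non-zero on some shells when the variable
        # is already unset; sane_unset hides that unportability.
        sane_unset HLAGH &&

        # allowed, but not required, to fail
        test_might_fail git config --unset illustrative.key &&

        # must fail for the assertion to succeed
        test_must_fail git rev-parse --verify refs/heads/does-not-exist &&

        git rev-parse --verify HEAD
    '

    test_done

Unlike a bare semicolon, test_must_fail can also tell an ordinary non-zero
exit apart from the command dying by signal, which is usually what you want
when asserting that a git command fails.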
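The PERL and PYTHON prerequisites used in the examples above are provided by
the test framework; a custom prerequisite follows the same pattern via
test_set_prereq and test_have_prereq from t/test-lib.sh. A small sketch under
that assumption; the FROTZ prerequisite and the frotz command are hypothetical.

    # Set the prerequisite only when the optional tool is actually there.
    if command -v frotz >/dev/null 2>&1
    then
        test_set_prereq FROTZ
    fi

    # Three-arg form: reported as skipped, not failed, where FROTZ is unset.
    test_expect_success FROTZ 'frotz is usable from the test suite' '
        frotz --version >output &&
        test -s output
    '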