
Fix #400 Stdout feedback if test skipped #440

Merged 16 commits into master from 400-skip-test-feedback on Feb 12, 2024
Conversation

@Benbenbenin0 (Contributor)

Instead of nothing:

 pass
passed=1

Now prints for each skipped case:

tests/pkcli/sim_test.py::test_init_and_run SKIPPED (need to be able ...)
tests/pkcli/sim_test.py::test_skip SKIPPED (testing skipped ...)
tests/pkcli/sim_test.py::test_skip2 SKIPPED (unconditional skip)
 pass
passed=1

Tested with modified tests/pkcli/sim_test.py, catches skip() with/without reason and skipif().
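The three skip forms mentioned (skip with a reason, skip without one, and skipif) might look like this in the modified sim_test.py — a hedged sketch, where the test names and reason strings are illustrative and only the standard pytest markers are real:

```python
# Illustrative sketch of the skip forms the change catches; names and
# reasons are made up, only the pytest markers themselves are real.
import pytest


@pytest.mark.skip  # skip() without a reason
def test_skip_no_reason():
    pass


@pytest.mark.skip(reason="testing skipped ...")  # skip() with a reason
def test_skip_with_reason():
    pass


@pytest.mark.skipif(True, reason="unconditional skip")  # skipif()
def test_conditional_skip():
    pass
```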

@mkeilman wasn't sure between just a warning or actually printing cases, wondering what you think.

@robnagler (Member)

tests/pkcli/sim_test.py::test_init_and_run SKIPPED (need to be able ...)

Maybe construct it in a way that looks like other log entries:

tests/pkcli/sim_test.py:555:test_init_and_run

@robnagler (Member) left a comment

Butting in with some ideas.

Three review comments on pykern/pkcli/test.py (outdated, resolved)
@robnagler (Member)

Also would be good to write a test.

@Benbenbenin0 (Contributor, author) commented Feb 6, 2024

New output:

tests/pkcli/sim_test.py pass
tests/pkcli/sim_test.py::test_init_and_run SKIPPED (need to be able ...)
tests/pkcli/sim_test.py::test_skip SKIPPED (unconditional skip)
tests/pkcli/sim_test.py::test_skipif SKIPPED (with reason)
passed=1

Thanks @robnagler for the notes; two questions:

  1. Do you have a suggestion for testing these stdout messages? Anything inside the skipped test is, well, skipped and I'm unsure whether to try capturing output in another test.
  2. test1_test.py already has pytest imported outside of the test_test functions; is that the right file for this test?

@robnagler (Member)

Do you have a suggestion for testing these stdout messages? Anything inside the skipped test is, well, skipped and I'm unsure whether to try capturing output in another test.

Hard to prove a negative, but if you see a specific output from a test that is skipped, then it wasn't skipped. The test could be like:

def test_skip():
    from pykern import pkunit

    pkunit.pkfail("test_skip not skipped")

Then you search for "test_skip not skipped" and fail if it is found. You shouldn't see a failure either.
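The search step could be sketched like this (a minimal, hypothetical helper; `check_skip_output` and the sentinel string are illustrative, not pykern API):

```python
def check_skip_output(captured_output):
    # Hypothetical check: if the sentinel printed by pkfail ever shows
    # up, the "skipped" test actually ran, so the check should fail.
    assert "test_skip not skipped" not in captured_output
    # A reported failure would also mean the test was not skipped.
    assert "FAILED" not in captured_output
```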

test1_test.py already has pytest imported outside of the test_test functions; is that the right file for this test?

Create a separate test file (test_data/3_test.py) for this and write a separate case test_skip_output. You probably want a passing test in that file so you see passing and skipped.

@Benbenbenin0 (Contributor, author) commented Feb 7, 2024

@robnagler two more things:

Then you search for "test_skip not skipped" and fail if it is found. You shouldn't see a failure either.

  • I assumed this meant searching the test_log; am I getting the path correctly?

        with open(py.path.local(__file__).dirpath("3_test.log"), "r") as file:

  • pkfail doesn't seem to print its message in the test log until after all tests complete, so I'm searching for "FAILED" instead:

        if "FAILED" in file.read():
            pkunit.pkfail("test_skip failed")

@robnagler (Member)

You should be able to use capsys. See how test_test does that.

@Benbenbenin0 (Contributor, author)

@robnagler seems like capsys only captures from the output stream within a test case, meaning I can't see the pkfail output from test_skip in a following case.

I'm unsure where you want to do the checking, did you mean:

  1. add a file test_data/tests/3_test.py containing a skipped case that pkfails with a unique msg
  2. add another case to the same file that passes so 3_test.py has passing and skipped
  3. in test_test.py or somewhere else add a case test_skip_output that fails if capsys output contains the unique msg
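Steps 1 and 2 above might produce a file like this — a hedged sketch, where the test names and the sentinel message are illustrative:

```python
# Hypothetical test_data/tests/3_test.py: one passing case plus one
# skipped case that would print a unique sentinel if it ever ran.
import pytest


def test_passes():
    # A passing case so the file shows both "pass" and "SKIPPED".
    assert 1 + 1 == 2


@pytest.mark.skip(reason="unconditional skip")
def test_skip():
    from pykern import pkunit

    # Only reachable if the skip machinery failed.
    pkunit.pkfail("test_skip not skipped")
```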

@robnagler (Member)

f29013f fixes the test. Look at the diffs to understand.

Running tests of test infrastructure is very confusing. What I did first was break test_simple down into individual parts to see which part was throwing the exception (incorrectly). The test for exceptions being thrown during conformance clutters things, so I like having one pkunit conditional at a time until I understand what's going on.

@robnagler (Member)

@Benbenbenin0 7f8133a fixes an interesting problem. Like I said, testing test infrastructure is hard.

@Benbenbenin0 (Contributor, author) commented Feb 12, 2024

@robnagler I reviewed f29013f; it makes sense, just two questions:

  • Why test 1 & 3 then 2 & 3? Is this to test skips within a failing and passing test sequence?
  • Why test 1 & 3 individually as well? These are all testing the basic function of pytest, is this to cover individual testing or something more redundant?

I also looked at 7f8133a; it makes sense why the CI was failing.


@robnagler (Member)

Why test 1 & 3 then 2 & 3? Is this to test skips within a failing and passing test sequence?

Yes.

Why test 1 & 3 individually as well? These are all testing the basic function of pytest, is this to cover individual testing or something more redundant?

We're making sure that calling default_command() with a single test (vs a directory) will produce different results. This is definitely testing the workflow inside pkcli.test.

@robnagler merged commit 1b65864 into master on Feb 12, 2024
2 checks passed
@robnagler deleted the 400-skip-test-feedback branch on February 12, 2024 at 23:03