Fix #400 Stdout feedback if test skipped #440
Conversation
Maybe construct it in a way that looks like other log entries: tests/pkcli/sim_test.py:555:test_init_and_run
Butting in with some ideas.
It would also be good to write a test.
New output:
Thanks @robnagler for the notes, two questions:
Hard to prove a negative, but if you see a specific output from a test that is skipped, then it wasn't skipped. The test could be like:

```python
def test_skip():
    from pykern import pkunit

    pkunit.pkfail("test_skip not skipped")
```

Then you search for "test_skip not skipped" and fail if it is present. You shouldn't see a failure either.
Create a separate test file (test_data/3_test.py) for this and write a separate case, test_skip_output. You probably want a passing test in that file so you see both passing and skipped.
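The "prove a negative" check described above can be sketched with the standard library alone. This is illustrative, not the PR's actual test: it uses `unittest` instead of pytest/pkunit, and the class and sentinel names are hypothetical. The idea is the same: a skipped test plants a failure message, and the check asserts that message never appears in the run's output.

```python
# Sketch of the "search for the sentinel and fail if you see it" idea,
# using stdlib unittest (the real PR uses pytest/pkunit; names here
# are illustrative).
import io
import unittest

SENTINEL = "test_skip not skipped"

class SkipCase(unittest.TestCase):
    def test_pass(self):
        # A passing test so the run shows both passing and skipped.
        self.assertEqual(1 + 1, 2)

    @unittest.skip("should never run")
    def test_skip(self):
        # If skipping were broken, this failure message would appear
        # in the runner's output.
        self.fail(SENTINEL)

stream = io.StringIO()
runner = unittest.TextTestRunner(stream=stream, verbosity=2)
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SkipCase)
)
output = stream.getvalue()
# The sentinel must be absent: seeing it would mean the test ran.
assert SENTINEL not in output
```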
@robnagler two more things:
You should be able to use capsys. See how test_test does that.
@robnagler seems like capsys only captures the output stream within a test case, meaning I can't see the pkfail output. I'm unsure where you want to do the checking; did you mean:
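For context on the capture question: capsys is a pytest fixture that intercepts stdout/stderr only for code executed inside the test function that requested it. A minimal stdlib analogue of that behavior, with a hypothetical stand-in function, looks like this:

```python
# Stdlib analogue of pytest's capsys: capture stdout produced by code
# called from within the test, then assert on it. run_and_report is a
# hypothetical stand-in, not code from pkcli.test.
import contextlib
import io

def run_and_report():
    # Stand-in for the code whose output we want to check.
    print("SKIPPED: test_init_and_run")

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    run_and_report()
out = buf.getvalue()
assert "SKIPPED" in out
```

As with capsys, output written by a subprocess or outside the `with` block is not captured, which matches the limitation noted above.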
f29013f fixes the test. Look at the diffs to understand. Running tests of test infrastructure is very confusing. What I did first was break down test_simple into individual parts to see which part was throwing the exception (incorrectly). The test for exceptions being thrown during conformance clutters things, so I like having one pkunit conditional at a time until I understand what's going on.
@Benbenbenin0 7f8133a fixes an interesting problem. Like I said, testing test infrastructure is hard. |
@robnagler I reviewed f29013f; it makes sense, just two questions:
I also looked at 7f8133a; it makes sense why the CI was failing. I need the changes approved before merging.
Yes.
We're making sure that calling default_command() with a single test (vs a directory) will produce different results. This is definitely testing the workflow inside pkcli.test.
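The single-test vs directory distinction can be illustrated with a small sketch. This is not pykern's actual implementation; `collect_tests` and the `*_test.py` glob are assumptions used only to show why the two call shapes produce different results:

```python
# Illustrative sketch: a directory argument expands to all test files,
# while a single file argument yields just that file. Hypothetical
# helper, not pykern.pkcli.test's real code.
import tempfile
from pathlib import Path

def collect_tests(arg):
    p = Path(arg)
    if p.is_dir():
        # Assumed convention: test files end in _test.py
        return sorted(str(f) for f in p.glob("*_test.py"))
    return [str(p)]

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "sim_test.py").write_text("")
    (Path(d) / "other_test.py").write_text("")
    dir_result = collect_tests(d)
    single_result = collect_tests(str(Path(d) / "sim_test.py"))
```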
Instead of nothing:

Now prints for each skipped case:

Tested with a modified tests/pkcli/sim_test.py; it catches skip() with and without a reason, and skipif(). @mkeilman I wasn't sure between just a warning or actually printing the cases, wondering what you think.
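If the printed-cases option is chosen, one way to format each skipped case is the file:line:name style suggested earlier in the thread. The function below is a hypothetical formatter, not the PR's code:

```python
# Hypothetical formatter for skipped-case feedback, echoing the
# path:line:name log style from earlier in this conversation.
def format_skip(path, lineno, name, reason=None):
    entry = f"{path}:{lineno}:{name} skipped"
    if reason:
        entry += f" ({reason})"
    return entry

msg = format_skip(
    "tests/pkcli/sim_test.py", 555, "test_init_and_run", "requires MPI"
)
print(msg)
# → tests/pkcli/sim_test.py:555:test_init_and_run skipped (requires MPI)
```

This would cover skip() with and without a reason (the `reason=None` branch) as well as skipif(), whose condition string could be passed as the reason.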