Fix --run-slow test option #330

Closed
smurfix wants to merge 1 commit

Conversation

@smurfix (Contributor) commented Sep 26, 2017

Symptom:

ERROR: trio._core.tests.test_ki (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: trio._core.tests.test_ki
Traceback (most recent call last):
  File "/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
    module = self._get_module_from_name(name)
  File "/usr/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
    __import__(name)
  File "/home/smurf/src/trio/.pybuild/pythonX.Y_3.6/build/trio/_core/tests/test_ki.py", line 15, in <module>
    from .tutil import slow
  File "/home/smurf/src/trio/.pybuild/pythonX.Y_3.6/build/trio/_core/tests/tutil.py", line 9, in <module>
    not pytest.config.getoption("--run-slow", True),
AttributeError: module 'pytest' has no attribute 'config'

Fix:
Use the pattern described in
https://docs.pytest.org/en/latest/example/simple.html
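
For reference, the pattern described there looks roughly like this (a sketch following the pytest docs example; the hook and option names mirror the docs, not necessarily the exact diff in this PR):

```python
# conftest.py — sketch of the documented pattern for an opt-in "slow" marker
import pytest

def pytest_addoption(parser):
    # register the option so config.getoption("--run-slow") is always defined
    parser.addoption(
        "--run-slow", action="store_true", default=False,
        help="also run tests marked as slow",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-slow"):
        # --run-slow given: run everything
        return
    skip_slow = pytest.mark.skip(reason="need --run-slow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

With that in place, tutil.py only needs something like slow = pytest.mark.slow, so importing the test modules no longer touches pytest.config at import time.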

@njsmith (Member) commented Sep 26, 2017 via email

@njsmith (Member) commented Sep 26, 2017

The travis failures are #332 and #334, nothing to do with this PR (though we do need to get them sorted out before we can merge anything). In any case this PR is waiting for me to understand better what the underlying problem is :-)

@smurfix (Contributor, Author) commented Sep 26, 2017

The trigger was that I was playing naïve and ran "setup.py test".

That doesn't work even after applying this patch, for some interesting reasons (there is a way to teach setup.py to use pytest instead …). However, I've seen variations of this sort of problem with pytest in other projects, and they could easily be fixed by switching to the canonical way of handling this use case. Thus I'd suggest applying this change.

@njsmith (Member) commented Sep 30, 2017

Okay, finally had a little more time to look at this. Sorry for the wait, and thanks for taking the time to put this together!

The details are, like everything involving testing and especially pytest, bizarrely complicated...

First, the patch currently causes a regression: a hook in trio/tests/conftest.py does not apply to tests in trio/_core/tests/. This means that slow tests in trio/_core/tests will be executed unconditionally, regardless of whether --run-slow is passed. This thing with the two conftest.py files is a preexisting mess that we should fix anyway (see #274), but it's still a regression here and now that would need to be sorted out somehow.

I'm also uncertain whether this works if you use pytest --pyargs trio to test an installed trio. The problem is that in this mode, pytest doesn't find conftest.py until after it has parsed the command line, so it doesn't run the pytest_addoption hooks (I think?), so config.getoption("--run-slow") might crash because the option isn't recognized. Even if it doesn't crash, it probably won't return True and run the slow tests, which is what we do now and is arguably the better behavior in this degraded mode, where it's impossible to choose whether to run them or not.
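
One possible mitigation for the --pyargs concern, sketched here as an assumption rather than anything this PR actually does: config.getoption() accepts a default, so the collection hook could fall back gracefully if the option was never registered.

```python
import pytest

# hypothetical variant of the collection hook: fall back to a default instead
# of raising if --run-slow was never registered (e.g. when the conftest.py
# defining pytest_addoption is only discovered after command-line parsing).
def pytest_collection_modifyitems(config, items):
    # default=True mirrors today's degraded-mode behavior of running slow tests
    if config.getoption("--run-slow", default=True):
        return
    skip_slow = pytest.mark.skip(reason="need --run-slow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```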

Before we get too far into these details though there's the question of whether this is useful.

I think we could fix that specific error message by just adding an import pytest.config somewhere?

Also, I pretty much 100% don't care about supporting setup.py test. It's a "nice idea" that never took off, and it will never become standard in practice, because setup.py itself is starting the long-term process of being deprecated (see PEP 517). Fundamentally, the only reason it's useful is that it potentially gives users a standard one-button way to run the tests without having to check the docs... but it just can't accomplish this goal, because you still need to check the docs to find out which packages to install into the venv you use to run the tests.

I think having better docs about how to do things like run the tests is useful (I have a local branch here with contributing docs that I'm going to try to polish up and post soon), and I think it might be useful to have a custom script that actually solves the problem of running the tests with one button press (so the docs become "run python tools/test-trio.py" and that automatically sets up a venv if necessary, etc.), but setup.py test is irrelevant to both of these.

Well, I guess making setup.py test print "run tools/test-trio.py" might be useful in case anyone else decides to play naive :-)

OTOH, this change is pretty harmless (if the specific issues mentioned above are sorted out), and maybe using the mark mechanism is independently nice in some rare corner cases, like being able to do -m slow? So in summary, I'm not necessarily opposed, but it doesn't feel like an obvious win either. What do you think? Is it still valuable if we take setup.py test off the table? And in particular, is it valuable enough to justify figuring out the annoying issues I described?

@njsmith (Member) commented Sep 30, 2017

Closing/re-opening to redo CI now that #336 is merged.

njsmith closed this Sep 30, 2017
njsmith reopened this Sep 30, 2017
@codecov bot commented Sep 30, 2017

Codecov Report

Merging #330 into master will decrease coverage by 0.04%.
The diff coverage is 61.53%.

Impacted file tree graph

@@            Coverage Diff            @@
##           master    #330      +/-   ##
=========================================
- Coverage   99.24%   99.2%   -0.05%     
=========================================
  Files          87      87              
  Lines       10384   10385       +1     
  Branches      730     733       +3     
=========================================
- Hits        10306   10302       -4     
- Misses         61      65       +4     
- Partials       17      18       +1
Impacted Files Coverage Δ
trio/_core/tests/tutil.py 100% <ø> (ø) ⬆️
trio/tests/test_threads.py 100% <ø> (ø) ⬆️
trio/tests/test_ssl.py 100% <100%> (ø) ⬆️
trio/tests/test_highlevel_open_tcp_listeners.py 100% <100%> (ø) ⬆️
trio/_core/tests/test_ki.py 100% <100%> (ø) ⬆️
trio/_core/tests/test_multierror.py 100% <100%> (ø) ⬆️
trio/tests/conftest.py 73.68% <28.57%> (-26.32%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 4d41a6d...0fd3e0d

@smurfix (Contributor, Author) commented Oct 2, 2017

OK, alternate solution: the pytest "Writing plugins" documentation says:

The config object is passed around on many internal objects via the .config attribute or can be retrieved as the pytestconfig fixture or accessed via (deprecated) pytest.config.
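
For illustration, a sketch of the fixture route mentioned there (the fixture and test names are hypothetical): the option is read lazily via pytestconfig instead of at module import time, which avoids the AttributeError in the traceback above.

```python
import pytest

@pytest.fixture
def run_slow(pytestconfig):
    # read the option inside a fixture, rather than at module import time
    return pytestconfig.getoption("--run-slow", default=False)

def test_something_slow(run_slow):  # hypothetical test
    if not run_slow:
        pytest.skip("needs --run-slow")
```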

@smurfix (Contributor, Author) commented Oct 2, 2017

… and getting setup.py to run pytest when you call setup.py test is possible too; I'll fix up a patch to do that.

One of the reasons I want that is that some build systems (like Debian's) auto-call setup.py test when building and installing a Python module. Another is sheer laziness on my part. ;-)
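
For context, the usual recipe for wiring setup.py test to pytest looked roughly like this (a sketch of the classic setuptools integration pattern, not an actual patch from this thread):

```python
# setup.py sketch: a custom "test" command that delegates to pytest
import sys
from setuptools import setup
from setuptools.command.test import test as TestCommand

class PyTest(TestCommand):
    user_options = [("pytest-args=", "a", "arguments to pass to pytest")]

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_args = ""

    def run_tests(self):
        import shlex
        import pytest  # imported lazily so setup.py itself doesn't require pytest
        sys.exit(pytest.main(shlex.split(self.pytest_args)))

setup(
    # ... normal project metadata goes here ...
    tests_require=["pytest"],
    cmdclass={"test": PyTest},
)
```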

@njsmith (Member) commented Apr 22, 2018

This has stalled out and it's not clear to me whether there's anything here, so I'm going to close and we can re-open later if appropriate.

njsmith closed this Apr 22, 2018