
Q: Categorizing and grouping dynamically-generated tests? #3587

Closed · mvxt opened this issue Jun 15, 2018 · 4 comments
Labels: type: question (general question, might be closed after 2 weeks of inactivity)

mvxt commented Jun 15, 2018

So I have a testing scenario as follows:

I have use cases I am testing, and all of them use the same "steps" that need to be tested, with each subsequent step depending on the success of the previous. My requirements are as follows:

  1. Dynamically pass a use case name into the test class and/or functions
  2. Test functions must run in order, each depending on the previous test passing
  3. Each use case must run through all functions before moving on to the next use case
  4. Output must show results grouped by use case

For example, if I have use cases A and B, and test methods 1, 2, and 3 for each, I expect things to happen in this order:
A1 -> A2 -> A3, then
B1 -> B2 -> B3

OR all of B's tests first, then A's.

Order of use_case execution doesn't matter, but I cannot have test 1 running for both A and B before test 2 runs. All of A's tests have to run, then all of B's. The output needs to be organized the same way as well (I want to see all tests for A together, followed by all tests for B, etc.).

So I've created a parametrized fixture and a test class marked as incremental. It looks like this:

import pytest

# "arg" is parametrized in conftest.py via pytest_generate_tests (below).
@pytest.fixture()
def use_case(arg):
    return arg

@pytest.mark.incremental
class TestUseCase(object):
    def test_1(self, use_case):
        print use_case + ": 1"
    def test_2(self, use_case):
        print use_case + ": 2"
    def test_3(self, use_case):
        print use_case + ": 3"
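
(Note that incremental is not a built-in pytest marker; it only does anything with supporting hooks in conftest.py. The thread doesn't show that part, so the recipe below is assumed from the pytest documentation's incremental-testing example:)

# conftest.py -- the incremental-marker recipe from the pytest docs,
# assumed here since the thread does not show it.
import pytest

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            # Remember the failing test on its class so later tests can see it.
            item.parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)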

In my conftest.py file, I generate the use cases I need dynamically and pass them in as follows:

def pytest_generate_tests(metafunc):
    # Guard so only tests that actually request "arg" get parametrized.
    if "arg" in metafunc.fixturenames:
        metafunc.parametrize("arg", ["A", "B"])

However, when I execute it, the output looks like this:

test_use_case.py::TestUseCase::test_1[A] A: 1
PASSED
test_use_case.py::TestUseCase::test_1[B] B: 1
PASSED
test_use_case.py::TestUseCase::test_2[A] A: 2
PASSED
test_use_case.py::TestUseCase::test_2[B] B: 2
PASSED
test_use_case.py::TestUseCase::test_3[A] A: 3
PASSED
test_use_case.py::TestUseCase::test_3[B] B: 3
PASSED

To verify it wasn't just the pytest output, I modified the tests to print those lines to a shared file. The file looks like this:

A: 1
B: 1
A: 2
B: 2
A: 3
B: 3

I purposely generated a failure in test function 2, and test function 3 then auto-failed for both use cases A and B, as expected.

But as it stands, I've only met two of my four conditions above.

How do I enforce all tests (functions) for a use case (class) to run before another class can run?

Machine Specs:
platform darwin -- Python 2.7.15, pytest-3.6.1, py-1.5.3, pluggy-0.6.0 -- /usr/local/opt/python@2/bin/python2.7

pytestbot added the type: question label on Jun 15, 2018
pytestbot (Contributor) commented

GitMate.io thinks possibly related issues are #2424 (dynamically generated fixtures), #3100 (Cannot dynamically mark a test as xfail), #2550 (Parametrized tests are grouped across files/modules), #3070 (Generate tests parametrizing many cases), and #2519 (Dynamically generated test methods not distinguishable on failure).

pytestbot added the platform: mac label on Jun 15, 2018
RonnyPfannschmidt (Member) commented

pytest currently has no concept of tests that form chains.

You can alter this in part via pytest_collection_modifyitems.

If you have a reliable way to identify use cases, you can use that hook to reorder the items after pytest has done its fixture-based ordering optimization.

mvxt (Author) commented Jun 15, 2018

Thanks for the response. I ended up doing the following.

First I needed to figure out exactly what was in the 'Function' objects:

from pprint import pprint

def pytest_collection_modifyitems(config, items):
    for item in items:
        pprint(vars(item))

This spat out the full contents of each collected 'Function' object (i.e., each test). After identifying the right attribute, I sorted on it:

def pytest_collection_modifyitems(config, items):
    items.sort(key=lambda item: item._genid)

_genid is the attribute containing the dynamically generated use_case names I was passing into the fixture.
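
One caveat: _genid is a private attribute and may change between pytest versions. A sketch that reads the parametrized value through the item's callspec instead (hypothetical, and it assumes the parametrize name "arg" from above):

def pytest_collection_modifyitems(config, items):
    def use_case_key(item):
        # Parametrized items carry a callspec whose params dict maps
        # argnames ("arg") to the values they were parametrized with.
        callspec = getattr(item, "callspec", None)
        if callspec is None:
            return ""
        return callspec.params.get("arg", "")
    items.sort(key=use_case_key)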

So now the output/execution order is correct at least:

test_use_case.py::TestUseCase::test_1[A] A: 1
PASSED
test_use_case.py::TestUseCase::test_2[A] A: 2
PASSED
test_use_case.py::TestUseCase::test_3[A] A: 3
PASSED
test_use_case.py::TestUseCase::test_1[B] B: 1
PASSED
test_use_case.py::TestUseCase::test_2[B] B: 2
PASSED
test_use_case.py::TestUseCase::test_3[B] B: 3
PASSED

I'll keep looking at the docs, but my next goal is to organize the output of the tests by use_case as well.

For example, JavaScript's mocha tests have the following syntax:

const { expect } = require('chai');

describe('SOME PARENT CATEGORY', function() {
  it('should do whatever', function() {
    expect(true).to.be.true;
  });
  it('should do whatever else', function() {
    expect(true).to.be.true;
  });
});

And the result of running that file looks like this:

SOME PARENT CATEGORY
    ✓ should do whatever
    ✓ should do whatever else

2 passing (14ms)

Is there an easy way to achieve this kind of output? Could you point me to the relevant docs? (There are a lot, and I'm totally new to this framework.)

My ideal output for my tests would be something like below:

A
  1 (PASSED)
  2 (PASSED)
  3 (PASSED)
B
  1 (PASSED)
  2 (PASSED)
  3 (PASSED)

Thanks for your time, and excellent work on pytest to the whole team.
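
pytest has no built-in mocha-style grouped output, but the standard reporting hooks can roughly approximate it. A minimal sketch (an assumption, not something shown in this thread) that groups results by the "[A]"-style parametrize suffix on each node id:

# conftest.py -- hypothetical sketch: group results by parametrize id
# in the terminal summary. Assumes ids appear as a "[A]" suffix.
import re
from collections import OrderedDict

_results = OrderedDict()

def pytest_runtest_logreport(report):
    # Record only the "call" phase so setup/teardown don't add duplicates.
    if report.when != "call":
        return
    match = re.search(r"\[([^\]]+)\]$", report.nodeid)
    group = match.group(1) if match else "ungrouped"
    _results.setdefault(group, []).append((report.nodeid, report.outcome))

def pytest_terminal_summary(terminalreporter, exitstatus):
    terminalreporter.section("results by use case")
    for group, tests in _results.items():
        terminalreporter.write_line(group)
        for nodeid, outcome in tests:
            terminalreporter.write_line("  %s (%s)" % (nodeid, outcome.upper()))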

nicoddemus (Member) commented

Thanks @mvxt, I believe we can close this now.

nicoddemus removed the platform: mac label on Jun 19, 2018