
Avoid expectFail in the test suite #4130

Closed
fendor opened this issue Mar 10, 2024 · 10 comments
Labels
type: enhancement New feature or request

Comments

@fendor
Collaborator

fendor commented Mar 10, 2024

expectFail interprets a test failure for any reason as a success. But since we don't check why exactly the test is failing, we lose valuable information.
Thus, many of the tests that use expectFail are likely not really testing what they are supposed to be.

In my opinion, we should replace all occurrences of expectFail with an assertion that shows what is failing precisely, or delete/ignore the respective test case.
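To illustrate the difference, here is a minimal sketch (the test name and the `hoverContents` stand-in are hypothetical; `expectFail` is from `tasty-expected-failure` and `testCase`/`(@?=)` from `tasty-hunit`):

```haskell
import Test.Tasty (TestTree)
import Test.Tasty.HUnit (testCase, (@?=))
import Test.Tasty.ExpectedFailure (expectFail)

-- Hypothetical stand-in for the value a real test would obtain
-- from the language server.
hoverContents :: String
hoverContents = "Control.Monad\n\n"

-- With expectFail, a failure for *any* reason (wrong value, crash,
-- timeout) counts as a pass, so the test no longer pins down the bug:
opaque :: TestTree
opaque = expectFail $ testCase "hover shows docs" $
  hoverContents @?= "Control.Monad**"

-- Asserting the known-incorrect value keeps the test green today,
-- but fails loudly (prompting an update) as soon as the bug is fixed:
documented :: TestTree
documented = testCase "hover shows docs (wrong: trailing newlines)" $
  hoverContents @?= "Control.Monad\n\n"
```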

@sgillespie
Contributor

Most of the expectFail usages are to "disable" tests known to be broken. Is this a valid use, or should they all be updated/removed?

@fendor
Collaborator Author

fendor commented Sep 6, 2024

I think expectFail is fine to temporarily disable certain tests on certain platforms.
The issue is with unconditionally disabling a test with expectFail, as over time we lose the information about why the test is supposed to be failing. In these situations, as long as the test isn't too flaky, we should have an assertion that documents the current behaviour, with a comment explaining what is going wrong!

@sgillespie
Contributor

I updated a few tests in #4402 for discussion. Is this basically what you were imagining?

@fendor
Collaborator Author

fendor commented Sep 14, 2024

Yes, that's basically what I am imagining, thanks!
I think it would be nice to also document what the intended behaviour would be, preferably in the code. E.g., adding a hoverTestExpectedFail which takes the expected but incorrect value, and also the value we would like to have as input parameters.

Do you think that would make sense?

@sgillespie
Contributor

sgillespie commented Sep 14, 2024

I like the idea of correctly typing both possibilities. Say,

hoverTestExpectFail :: TestName -> Position -> T.Text -> T.Text -> TestTree
hoverTestExpectFail name pos _ideal expected = hoverTest name pos expected

EDIT: My other thought is to use something at the type level to make it a bit easier to read, eg:

, hoverTestExpectFail'
        "import"
        (Position 2 18)
        (IdealBehavior "Control.Monad**")  -- Ensure no extra newlines
        (CurrentBehavior "Control.Monad\n\n")

-- <-- Snip -->
hoverTestExpectFail' :: TestName -> Position -> ExpectBroken BrokenIdeal -> ExpectBroken BrokenCurrent -> TestTree
hoverTestExpectFail' name pos _ expected = hoverTest name pos (unCurrent expected)
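For context, one possible shape of the `ExpectBroken` wrapper the signature above assumes (a sketch only; the definition that actually landed in #4402 may differ, and `String` stands in for `T.Text` to keep the example self-contained):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}

-- Sketch of a GADT distinguishing the ideal behaviour from the
-- current (broken) behaviour at the type level.
data BrokenBehavior = BrokenIdeal | BrokenCurrent

data ExpectBroken (k :: BrokenBehavior) where
  IdealBehavior   :: String -> ExpectBroken 'BrokenIdeal
  CurrentBehavior :: String -> ExpectBroken 'BrokenCurrent

-- Only the current (incorrect) value is ever asserted against:
unCurrent :: ExpectBroken 'BrokenCurrent -> String
unCurrent (CurrentBehavior t) = t
```

The type-level tag makes it impossible to accidentally pass the ideal value where the current one is expected, which is the readability benefit this proposal is after.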

@fendor
Collaborator Author

fendor commented Sep 15, 2024

This looks nice to me!

@sgillespie
Contributor

This looks nice to me!

Which one?

@fendor
Collaborator Author

fendor commented Sep 16, 2024

Both look good to me; the second proposal sounds better, and I'd be curious what the tests look like in practice with this change.

@sgillespie
Contributor

I've updated most of the tests. I'm not really sure what to do with the golden tests, so I left those alone. Otherwise, they all make use of a simple GADT that's meant to distinguish between current and ideal behaviors.

@fendor
Collaborator Author

fendor commented Sep 29, 2024

Thanks, closed by #4402

@fendor fendor closed this as completed Sep 29, 2024