
Have some independent tests for GOOL hooked in to Drasil #3387

Open
JacquesCarette opened this issue May 8, 2023 · 2 comments

Comments

@JacquesCarette
Owner

As per #3372, @bmaclach comments that GOOL tests exist, but they are not part of the Drasil test suite right now.

This should be fixed, and the GOOL test suite should likely be expanded as well.

@B-rando1
Collaborator

As of the two PRs linked above, the GOOL tests are much better hooked into Drasil's testing.

What is still somewhat lacking is a consistent, comprehensive test suite covering everything in GOOL. @bmaclach had a solid start with HelloWorld.hs and the other files in drasil-code/test, but I think there's more we can do. These test files provide a good summary of the most-used features in GOOL, but they are not exhaustive of all of GOOL's features, and they aren't very systematic in dealing with edge cases.

Another issue is that these files rarely test the actual values returned by GOOL code. Many values are never printed to the screen at all, and many more are only printed, never checked. @Xinlu-Y's work in #3911 is improving that, however.

I think there are two ways we can move forward here:

  • Design a new suite of tests that covers GOOL in a more systematic way. @samm82 do you have any advice for this?
  • Increase the helpfulness of the tests by continuing to add assert statements wherever it makes sense. As I mentioned in Assert statements in GOOL #3911, listSlice is one obvious place to add an assert, and I'm guessing there are others.
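To illustrate the kind of change meant in the second point: a minimal sketch, in plain Python rather than through GOOL (Python is one of GOOL's generation targets, but this is hand-written illustration, not actual GOOL output), of replacing a print-only check on a list slice with an assert. The list and slice bounds are made up for illustration.

```python
# Hand-written Python illustrating print-only vs. assert-based testing
# of a list slice (not generated by GOOL; values are illustrative).
source = [10, 20, 30, 40, 50]

# A print-only test shows the value but never verifies it:
print(source[1:4])

# An assert-based test fails loudly if the slice is wrong:
assert source[1:4] == [20, 30, 40], "slice [1:4] returned the wrong elements"

# Edge cases are cheap to cover the same way:
assert source[0:0] == []      # empty slice
assert source[:] == source    # full copy
assert source[4:100] == [50]  # end index past the list is clamped in Python
```

The payoff is that the test suite can then fail automatically on a regression, instead of relying on someone eyeballing printed output.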

@samm82
Collaborator

samm82 commented Aug 23, 2024

A really good technique for designing test cases is "equivalence partitioning": dividing the (likely infinite) set of possible test values/configurations into "partitions" that behave similarly, then testing only a handful of values from each partition (usually one from the middle and a few from the edges). For example, good list values for testing list indexing are an empty list, a list with one item, and a list with many items: a list with 7 items and a list with 8 items will likely behave similarly, while a list with 0 items and a list with 1 item will behave quite differently, even though both pairs differ by only one item.

My guess is that many (if not all) of the existing test cases will be useful here and will only need to be augmented with more test cases for some of the edge cases (as you mentioned). I think that most, if not all, values that get printed to the screen (or that probably should be) could be replaced with an assert statement to further automate this testing.
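As a concrete sketch of the partitioning idea applied to list indexing (plain Python for brevity; the partitions, not the language, are the point here, and `safe_index` is a hypothetical helper invented for this illustration):

```python
# Equivalence partitions for "index into a list": empty list, singleton
# list, many-item list; within each, edge indices and a middle index.
def safe_index(xs, i):
    """Return xs[i], or None when i is out of range (illustrative helper)."""
    return xs[i] if 0 <= i < len(xs) else None

empty, one, many = [], [7], [1, 2, 3, 4, 5, 6, 7, 8]

# Empty list: every index is out of range.
assert safe_index(empty, 0) is None

# Singleton: index 0 works, index 1 does not.
assert safe_index(one, 0) == 7
assert safe_index(one, 1) is None

# Many items: first, middle, and last elements behave alike, so one
# value from each edge plus one from the middle is enough; one past
# the end exercises the out-of-range partition.
assert safe_index(many, 0) == 1
assert safe_index(many, 3) == 4
assert safe_index(many, 7) == 8
assert safe_index(many, 8) is None
```

A list of 7 items versus 8 would add nothing here, which is exactly the partitioning argument: only values that cross a partition boundary earn a test case.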
