Epic: Refactoring #4

eduncan911 opened this issue Mar 14, 2016 · 0 comments
Since I started this project a few years ago, I've adopted best practices and principles in Go that I desperately want to apply here.

An overview of what to achieve during the refactoring is below. I'll most likely create individual issues to address each of these and will mark them off as they are developed.

  • Callback patterns: removing the spaghetti code of all the callback hell.
  • Concurrently execute tests.
  • Race Condition in stdlib tests if using t.Parallel()
  • Abstracting the BDD parts out to a shared package. E.g. github.com/eduncan911/mspec/bdd
  • Adding cmd line runner to parse user-written BDD Stories into stubbed mspec code. E.g. spec, mspec, bdd, etc.
  • Add more Story & Scenario overview states, keeping backwards compatibility. (see above for cmd line parser, and requires the BDD parts to be abstracted out)
  • Add HTML, Stderr and other output types. (this requires the BDD parts to be abstracted out, as noted above)
  • Benchmarking: gotta ensure no slow downs during the refactoring.
  • Refresh testify's implementation of assert tests here (it's about 2 years old).

Callback patterns

At the time I wrote this, I was doing a lot of nodejs, so naturally it was originally written in a nodejs callback style.

I have since moved to the more standard Go practice of using channels for communication, which keeps the methods and code legible.
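As a sketch of that direction, here is what channel-based specs could look like; the type and function names below are illustrative only, not mspec's actual API:

```go
package main

import "fmt"

// specResult is an illustrative type, not part of mspec's API.
type specResult struct {
	name   string
	passed bool
}

// runSpecs streams results over a channel instead of invoking nested
// callbacks, keeping the control flow flat and legible.
func runSpecs(names []string) <-chan specResult {
	out := make(chan specResult)
	go func() {
		defer close(out)
		for _, n := range names {
			out <- specResult{name: n, passed: true}
		}
	}()
	return out
}

func main() {
	// The consumer is a simple range loop rather than a callback pyramid.
	for r := range runSpecs([]string{"a user registers", "a welcome email is sent"}) {
		fmt.Printf("%s: passed=%v\n", r.name, r.passed)
	}
}
```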

Concurrently execute tests

Since Go 1.5, GOMAXPROCS defaults to the number of cores on the machine. When you run go test, Go compiles a temporary binary that runs each func Test(t *testing.T) in a goroutine. But note that the default behavior for Go tests is synchronous - it still runs one test at a time.

You've always been able to call t.Parallel() before 1.5, but it was of little use without setting GOMAXPROCS properly. Now that GOMAXPROCS defaults to the number of virtual cores, t.Parallel() will actually make tests run concurrently.

But using this package defeats the advantage of running 100s of unit tests concurrently: you are instead forced to run each Test harness for a group of Specs, and those Specs run synchronously and are therefore slower overall.

The refactoring should make heavy use of goroutines where appropriate, without slowing down simple tests. By moving to use goroutines for most inner loops, we should get the added benefit of increasing performance overall - even without the t.Parallel() flag being set!

Race Condition in stdlib tests if using t.Parallel()

As mentioned above, since Go 1.5+ sets GOMAXPROCS to the number of cores on the machine, all tests execute concurrently if you set the t.Parallel() flag.

This package was originally written with a global state to keep track of the current context being run synchronously.

I have not seen the race issue come up across the roughly dozen projects I use this package on, testing on an 8 core (16 thread) machine, with the largest project running about 80 specs (tests). Nor have I received reports from others I have asked to test it. But the code is written in such a way that it could cause an issue with tests running concurrently with the t.Parallel() flag set.

Therefore, a refactor is required to move the state up into each Given() instead of keeping it global, and to make use of channels where items must remain global (I frown on mutex.Lock()ing everything).

This change will most likely loosen the ability to track the Feature context, and the output may come out of order. But if you are using the t.Parallel() flag, you have already accepted that things will run out of order.
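A rough sketch of what per-Given state could look like, with results flowing over a channel instead of a package-level global; field and method names are illustrative only:

```go
package main

import "fmt"

// Given carries its own context instead of reading package-level globals,
// so specs stay safe under t.Parallel(). These are sketch types, not
// mspec's actual ones.
type Given struct {
	scenario string
	output   chan string // results flow over a channel, not a shared global
}

func NewGiven(scenario string) *Given {
	return &Given{scenario: scenario, output: make(chan string, 8)}
}

func (g *Given) When(desc string) *Given {
	g.output <- "  When " + desc
	return g
}

func (g *Given) Then(desc string) *Given {
	g.output <- "  Then " + desc
	return g
}

// Report drains the channel; interleaving across parallel tests is
// accepted, but each Given's own output stays intact and in order.
func (g *Given) Report() {
	close(g.output)
	fmt.Println("Given " + g.scenario)
	for line := range g.output {
		fmt.Println(line)
	}
}

func main() {
	g := NewGiven("the user is not signed in")
	g.When("the User registers").Then("a new User is created")
	g.Report()
}
```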

In the interim, I could slap a big ol' mutex lock on the global context, which would resolve the issue if it ever comes up. But that would greatly slow things down, and I would prefer to refactor around it as mentioned above.

Abstracting the BDD parts out to a shared package.

In order to support a number of new features, I will need to represent the core BDD principles in code.

I do not want to pollute the root mspec package. Instead, I would prefer to abstract the core BDD parts out into a dedicated package. Most likely:

github.com/eduncan911/mspec/bdd

This could have some nice side effects, besides supporting other code and runtimes I plan on building. For example, someone could import that package directly and use it to create their own test runner, web-driven BDD framework, etc.
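For illustration, here is a hypothetical data model such a bdd package could export, and a third-party runner consuming it. None of these types exist yet; the point is that an external tool would only need the data model, not mspec's test harness:

```go
package main

import "fmt"

// Step and Scenario are hypothetical exports of a future mspec/bdd
// package - a sketch, not a committed API.
type Step struct {
	Keyword string // "Given", "When", "Then", "and"
	Text    string
}

type Scenario struct {
	Title string
	Steps []Step
}

// render shows a custom runner formatting scenarios however it likes,
// e.g. as the basis of a web-driven BDD frontend.
func render(s Scenario) string {
	out := "Scenario: " + s.Title + "\n"
	for _, st := range s.Steps {
		out += "  " + st.Keyword + " " + st.Text + "\n"
	}
	return out
}

func main() {
	fmt.Print(render(Scenario{
		Title: "User is unregistered",
		Steps: []Step{
			{"Given", "the user is not signed in"},
			{"When", "the User registers"},
			{"Then", "a new User is created"},
		},
	}))
}
```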

Adding cmd line runner to parse user-written BDD Stories into stubbed mspec code. E.g. spec, mspec, bdd, etc.

A conversation I had with a user recently gave me an idea. I have always asked the user to write general BDD-style ticket and bug reports. For example:

Story: Registering a new user
  As an user visiting the site
  I want to signup
  So that I can use the system

Scenario 1: User is unregistered
Given the user is not signed in
  and their email address does not exist in our system
  and their username does not exists in our system
  and they do not uncheck Subscribe to our Newsletter
 When the User registers
 Then a new User is created
  and a welcome email is sent to the User
  and a verify email address email is sent to the User
  and they are subscribed to our newsletter

While usually not that pretty, I have gotten 100s of tickets and bugs like this in the past.

So, I had a thought:

why not take this, verbatim, copy-n-paste it into a text file, and execute a command line parser that generates the actual mspec code that is all stubbed out?

Makes total sense. Using go generate, just parse the raw text and generate all the Given, When and Then stubs that are needed. The entire workflow (from scratch) would be:

go install -v github.com/eduncan911/mspec/cmd
cd $GOPATH/src/company/www/
mspec jim-user-story.txt > registration_test.go
go test

It is boilerplate code that I have typed 100s of times by hand. This utility will generate and output the complete file, ready for go test, and will print the NOT IMPLEMENTED markers we all love from the Stubbing feature of mspec. Lovely!
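A minimal sketch of the parsing step, pulling Given/When/Then/and lines out of a pasted story; this is purely illustrative, not the planned cmd code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSteps extracts the step lines from a verbatim copy-n-pasted story.
// A real generator would then emit stubbed mspec code from these steps.
func parseSteps(story string) []string {
	var steps []string
	for _, line := range strings.Split(story, "\n") {
		l := strings.TrimSpace(line)
		lower := strings.ToLower(l)
		for _, kw := range []string{"given ", "when ", "then ", "and "} {
			if strings.HasPrefix(lower, kw) {
				steps = append(steps, l)
				break
			}
		}
	}
	return steps
}

func main() {
	story := `Scenario 1: User is unregistered
Given the user is not signed in
  and their email address does not exist in our system
 When the User registers
 Then a new User is created`
	for _, s := range parseSteps(story) {
		fmt.Println(s)
	}
}
```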

GoConvey actually follows this paradigm as well with their browser. But the focus of this utility will be lightweight copy-n-pasting and parsing. Oh, that's another idea: reading from the Clipboard of the OS directly.

mspec -from clipboard > registration_test.go

Of course it isn't a replacement for double-loop unit and bdd testing that you should continue doing. But, it gets the Acceptance Testing coded out for the developer, leaving them free to iterate with normal TDD quickly.

Add more Story & Scenario overview states, keeping backwards compatibility.

These states should exist in the exported bdd package. This refactoring would utilize some of those parts in the mspec code, in order to properly represent the objects in code.

Add HTML, Stderr and other output types.

This refactoring was originally motivated by HTML output: I have attempted to implement it a few times and failed due to the ugliness of the code.

This will be a primary focus during the refactoring, to allow for configurable outputs (and even no output! just run the tests!).

Benchmarking: gotta ensure no slow downs during the refactoring.

This is already done, but a note here is required.

I've executed close to 700 specs in under 1.8s using mspec as-is today. That is not slow compared to the other BDD test runners I've used in the past (Java, C#, NodeJS, Ruby, Python, etc).

But, it is slow in regards to Go! Especially when compared to raw unit testing of Go.

To prove this, I've added a number of benchmarks already to compare mspec to normal Go tests. The results are:

BenchmarkGivenStub-8     1000000              1280 ns/op
BenchmarkWhenStub-8      1000000              1334 ns/op
BenchmarkThenStub-8      1000000              1464 ns/op
BenchmarkError-8           50000             38173 ns/op
BenchmarkSimpleMspec-8   1000000              1850 ns/op
BenchmarkSimpleTest-8   200000000                8.11 ns/op
BenchmarkComplexMspec-8   500000              3832 ns/op
BenchmarkComplexTest-8  30000000                42.9 ns/op

Focus on the last 4. You can see that mspec as it stands today is about 220x slower on simple tests, and about 90x slower on more complex unit tests written in raw Go tests.

That's not too bad, considering you can still run 1000 specs in under 2 seconds. Also, remember you can make Go even faster by specifying which test to run, like go test -run Test_Registration. But it's fast enough.

These benchmarks were written to set a baseline before the refactoring. That way, during refactoring to using channels and goroutines, we can see if there are any regressions.

I predict a number of regressions (since we'll be adding a lot more objects for the GC, more context, and optional runners) for simple tests. But, if creating complex specs with multiple Whens and multiple Thens, it should be a significant speed up with the built-in channels we'll have.

Refresh testify's implementation of assert tests here (it's about 2 years old).

Lastly, it's time to refresh the testify implementation of our assert package. I originally chose to embed it into the mspec package for several reasons:

  • To remove dependencies from the code.
  • The author said that testify wasn't stable yet and could change at that time.
  • I didn't want to increase complexity with package management, just to run tests.
  • Some assert tests could not be ported as-is.
  • Needed to override their error handling.

That last one was the key reason. I could have mocked up a fake context to override their version of testing.T, but that felt like a lot of bloat given the other concerns above. So in the end, I just included a slightly modified version with mspec.
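The override works because testify's assert functions accept a small TestingT interface (essentially just Errorf) rather than a concrete *testing.T. A sketch with illustrative helper names, showing how failures can be routed into mspec's own recorder:

```go
package main

import "fmt"

// TestingT mirrors the minimal interface testify's assert package accepts;
// anything with Errorf can stand in for *testing.T.
type TestingT interface {
	Errorf(format string, args ...interface{})
}

// recorder captures assertion failures for mspec-style formatting later.
type recorder struct{ failures []string }

func (r *recorder) Errorf(format string, args ...interface{}) {
	r.failures = append(r.failures, fmt.Sprintf(format, args...))
}

// equal is a stand-in for an assert helper reporting through TestingT.
func equal(t TestingT, want, got interface{}) bool {
	if want != got {
		t.Errorf("expected %v, got %v", want, got)
		return false
	}
	return true
}

func main() {
	r := &recorder{}
	equal(r, 2, 3)
	fmt.Println(len(r.failures)) // the failure was captured, not printed
}
```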

Since then, I've found a few small bugs and even a broken test or two with that old testify version. I've also noticed additional comparisons that testify now has.

So, it's time to refresh this package with their latest, and also review the license to see if it has changed.
