
QuantifiedCode makes PRs have unsuccessful checks #583

Closed
mnmelo opened this issue Dec 13, 2015 · 13 comments

Comments

@mnmelo (Member) commented Dec 13, 2015

With the recent integration of QuantifiedCode, PRs started being labeled with unsuccessful checks. QC analyzes the diff between develop and the branch being merged and reports 'bad code' in those parts only. Whenever a code issue is found, the PR is labeled with an unsuccessful check.

The problem(s):
Most of the checks boil down to guidelines we don't want to follow. Having a PR look 'unsuccessful' because of that is, well, ugly (#563, #580).
In addition, @tylerjereddy opened a PR (#580) where QC complained about code issues in parts of code that hadn't been touched. I'm assuming this is a QC bug and will bring it up with them.

What can be done?
The QC tests are fairly customizable. We can disable as many as we want, individually. I'd suggest that every time we bump into a test we don't agree with we disable it. We'll end up with a selected set of tests that are important to us, and then an 'unsuccessful check' will be much more meaningful. (admins of the QC MDAnalysis project should be able to disable tests directly on the issue list).

Still, the above solution won't be perfect. After all, we're talking about coding guidelines, not absolutely required syntax. There will be times when we will want to break rules. In such cases we'll always have that annoying unsuccessful check (and possibly a discussion in the PR on why the guideline wasn't followed).

In other cases it might happen that a check covers too much: both cases we want to avoid and to keep. We might then think about disabling that test, and rolling our own similar test adapted to our style.
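If we did roll our own, a narrowly scoped check need not be complicated. As a purely hypothetical sketch (not a QC feature; the function name and the choice of wildcard imports as the target are made up for illustration), an AST-based script run in CI could flag exactly one pattern and nothing else:

```python
# Hypothetical sketch of a home-grown style check that could replace an
# overly broad QC test: flag only wildcard imports, nothing else.
import ast


def find_wildcard_imports(source, filename="<string>"):
    """Return (lineno, module) pairs for every `from X import *`."""
    tree = ast.parse(source, filename=filename)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom):
            if any(alias.name == "*" for alias in node.names):
                hits.append((node.lineno, node.module))
    return hits


code = "from os.path import *\nimport sys\n"
print(find_wildcard_imports(code))  # [(1, 'os.path')]
```

A script like this could run as its own CI step, so it only fails on the exact patterns we agreed to ban.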

I can also look into having QC report something more neutral than 'unsuccessful', or see if it's possible to have the success depend on the severity of the found issues (there's 4 levels, from 'Recommendation' to 'Critical').

Finally, if you think this isn't worth the hassle we can stop QC from giving feedback on PRs. Reviewers could then go to quantifiedcode.com to check the issues that were found (an analysis is triggered on every commit).

I vote that for the upcoming PRs we take the time to prune the tests we don't want, knowing that we'll have plenty of unsuccessfully checked PRs (a discussion about some checks might be useful; maybe we can do it in this issue's conversation?). If we see it's leading nowhere we can disable PR feedback. In the meantime I can try to silence the unsuccessful labels per the ideas above.

What say you?

@orbeckst (Member)

Sounds like a sensible approach. There's also a wiki page on quantifiedcode.

I am not sure I was ever successful in disabling a test (I think I tried it for the "non-pythonic method names" one). The ones I would disable:

  • anything on "non-pythonic names" (although admittedly, we want fairly pythonic names in the Style Guide)... maybe look at it in detail?
  • "consider documenting your *" --- in many cases (e.g. derived classes or test cases) it's perfectly fine not to have explicit docstrings

As you suggest, it would be nice if one could tell QC which tests should contribute to a "QC failure" and which ones should be treated more as guidelines than as rules.

Tangential: Have you been able to have QC/cody submit a PR to us? I haven't.

@mnmelo (Member, Author) commented Dec 14, 2015

I'll summarize here the exchange I've just had with @programmdesign, one of QC's developers:
(let me know if you'd rather continue this in the devel mailing list):

It's not possible to have the checks 'pass' or 'fail' based on some severity threshold. The recommended way to deal with unwanted fails is indeed to disable the checks we don't like. I haven't had the time to do that properly yet, but I let them know that @orbeckst was having trouble disabling some tests. If problems persist there might be a bug in QC there.

Given this, let me know if you'd rather disable the PR checking, at least while we fine-tune the analysis (so that we no longer have unsuccessful checks).

Cody seems to work intermittently. I issued a fix when I was on the phone with QC, and -- of course -- then it happened to work cleanly, and PR #586 was submitted almost immediately. @programmdesign mentioned there might be some queuing issues during busier times (I wouldn't really mind if the PR takes half an hour to show up, but it either shows up immediately or never). Again, I'll keep in touch with them if we see that this inconsistency remains.

On @kain88-de 's problem getting his fork checked, I was told that they're looking into it (maybe PRs from forked repos aren't a very common usage pattern in their user base, but I certainly wouldn't want to discourage @kain88-de from working like that). We might also expect some inconsistency in checks of PRs opened before QC analysis was set up (which might also explain @tylerjereddy 's observation that code checks in #580 failed on code he didn't touch).

@programmdesign

@mnmelo: Just to clarify: you can have a 'severity' threshold by simply turning off, e.g., all "Recommendations". You can do that within a minute in your project's settings. We decided to go this way so that you are in full control of what shows up in your Pull Request status. Let me know if this doesn't work for you; it should be working fine, though.

@mnmelo (Member, Author) commented Dec 14, 2015

@programmdesign: yes, but we might want to keep some less severe checks, only not be so visually alarmed by them.

@programmdesign

Ok. Just to confirm: you are saying you want to see all the issues on QC, but you'd like to disable all checks of a certain severity for PR statuses only?
If so, would you prefer to be able to turn off whole categories entirely or, as now, still be able to choose each check one by one?

@kain88-de (Member)

@mnmelo I think my problem was only that I opened the PR before we had QC controls. It works in my new PRs. I think it also worked with the other old PRs because the push to the MDAnalysis repo triggered the QC check.

So for new PRs everything seems to be fine.

@mnmelo (Member, Author) commented Dec 14, 2015

@programmdesign: yes, you got the idea right.
Passing/failing based on issue-class severity is the simplest approach, and I think it would be a good start for now. Still, we might bump into cases where we feel an issue class should have a different severity (for instance, the untyped exception catcher is a 'minor issue' but we'd consider it of 'potential bug' severity). For those cases it'd be great to be able to switch checks on/off individually for PR status purposes. As far as the UI goes, this could be as simple as replacing the current ON/OFF button for enabling/disabling checks with a three-way button with 'OFF', 'ON-Non-Serious', and 'ON-Serious'.
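For concreteness, an "untyped exception catcher" check flags bare except clauses. A quick illustration (the function names here are made up for the example) of the flagged pattern versus the typed form such checks recommend:

```python
# Illustrative only: the kind of code an "untyped exception catcher"
# check flags (bare `except:`) versus the typed form it recommends.


def parse_flagged(value):
    try:
        return int(value)
    except:             # flagged: catches everything, even KeyboardInterrupt
        return None


def parse_clean(value):
    try:
        return int(value)
    except ValueError:  # only the error we actually expect
        return None


print(parse_flagged("3"), parse_clean("x"))  # 3 None
```

Both behave the same on bad numeric input, which is why we might reasonably rate the bare form as 'potential bug' rather than 'minor issue': it silently swallows unrelated errors too.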

Finally, stating in the status message how many issues were found would be very useful (maybe discriminating by severity, or between 'serious' and 'non-serious' type).

But this is my interpretation. Guys, do give your feedback on what the ideal PR check would look like.

That's good news, @kain88-de, all is well, then.

@orbeckst (Member)

@mnmelo , you summed up my view nicely.

@orbeckst (Member)

@programmdesign , it would actually be nice if quantifiedcode could submit issues for problems that Cody cannot fix (instead of a PR). At the moment, I have to do this manually, see e.g. #564.

For that to work nicely, I would also like to have a link that goes directly to the QC issue (at the moment I have to get a link to one of the flagged files).

EDIT: removed link to #589 example issue, which Cody can fix (I don't have to raise the issue manually); I just wasn't logged into QC and therefore the "fix it" link wasn't there...

@orbeckst (Member) commented Mar 1, 2016

I suggest we keep quantifiedcode but, for right now, disable all the checks that we (mostly) don't care about. For me this would be:

  • anything related to "pythonic" naming
  • anything related to naming in tests

Any other suggestions for the banned list?

Rationale: I want to see a green checkmark if the PR passes Travis....

@richardjgowers (Member)

Yeah, I've already disabled a few that I thought were unnecessary, so I can't really object.

@kain88-de (Member)

I would like to keep the pythonic naming. It is useful for new code and doesn't pop up that much for changes in existing code.
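For context, "pythonic naming" checks essentially enforce PEP 8 naming conventions (snake_case for functions and methods, CapWords for classes). A toy sketch of what such a check tests, with a hypothetical helper name and example identifiers chosen only for illustration:

```python
# Illustrative only: the naming convention a "pythonic naming" check
# typically enforces for functions/methods (PEP 8 snake_case).
import re

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")


def is_pythonic_function_name(name):
    """True if `name` follows PEP 8 snake_case for functions/methods."""
    return bool(SNAKE_CASE.match(name))


print(is_pythonic_function_name("select_atoms"))  # True
print(is_pythonic_function_name("selectAtoms"))   # False: camelCase
```

New code written to pass this trivially never triggers the check, which matches the point above: it mostly only fires on legacy names.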

lohani2280 pushed a commit to lohani2280/mdanalysis that referenced this issue Feb 17, 2017
@orbeckst (Member)

I don't think this is still an issue for us; I haven't really heard any complaints about QC recently.

I am closing and if you disagree please re-open (or ask for it to be re-opened).

5 participants