Testing against sensitive data #87

Open · danielballan opened this issue Sep 9, 2017 · 5 comments · Milestone: v0.1

@danielballan (Contributor)

We will want to run tests against real lists of changes that were flagged for review. Some of the elements of these lists are already public because they are the subject of old EDGI reports. Some are not public -- they were interesting enough to trigger careful review, but ultimately did not generate a report. Finally, there will be an incoming stream of new entries to the list that are not yet sorted into one of those two categories.

After a conversation with @trinberg, I propose the following system:

  • Make public a list of changes that generated old reports so that any interested developer can use them for testing.
  • Maintain a more complete, private list and distribute it only to established contributors.
  • Use a secure token to make the complete list available to CI for testing. Apply "skiptest" markers to these tests so that developers can still run all the other tests locally even if they don't have access to the complete list.

attn @janakrajchadha

@janakrajchadha (Contributor)

I had a couple of questions here.

  • Do we have an existing list of changes which generated old reports?
  • "skiptest" features would also skip the tests in CI and accessing the complete list would not be useful. Am I missing something here?

@danielballan (Contributor, Author)

Apologies for my overdue reply, @janakrajchadha.

Do we have an existing list of changes which generated old reports?

I'm not sure that we have uuids for them, but we have Versionista links which we can convert into uuids.

"skiptest" features would also skip the tests in CI and accessing the complete list would not be useful. Am I missing something here?

I'm imagining a conditional skip (such as with pytest.mark.skipif) that checks for an env variable containing authentication info.
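
For concreteness, here is a minimal sketch of that kind of conditional skip. The environment variable name (`WEB_MONITORING_PRIVATE_LIST_TOKEN`) and the test body are hypothetical placeholders, not settled names:

```python
import os

import pytest

# Hypothetical variable name; CI would set this from a secure token.
PRIVATE_LIST_TOKEN = os.environ.get("WEB_MONITORING_PRIVATE_LIST_TOKEN")

# Tests marked with this are skipped when the token is not available,
# e.g. for developers without access to the complete (private) list.
requires_private_list = pytest.mark.skipif(
    PRIVATE_LIST_TOKEN is None,
    reason="Requires access to the complete (private) list of flagged changes.",
)


@requires_private_list
def test_against_complete_flagged_list():
    # Placeholder: fetch the private list using PRIVATE_LIST_TOKEN and
    # run the real assertions against it.
    assert PRIVATE_LIST_TOKEN
```

All other tests run normally, so the full suite still passes locally without the token.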

danielballan added this to the v0.1 milestone on Oct 9, 2017
stale bot commented Jan 10, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in seven days if no further activity occurs. If it should not be closed, please comment! Thank you for your contributions.

stale bot added the stale label on Jan 10, 2019
@Mr0grog (Member) commented Jan 10, 2019

This is still relevant — outside contributors really need decent data they can grapple with. However, is the long trudge towards a more public staging server the better solution here? (See also edgi-govdata-archiving/web-monitoring-db#34 and edgi-govdata-archiving/web-monitoring-ui#220)

stale bot removed the stale label on Jan 10, 2019
stale bot commented Jul 9, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in seven days if no further activity occurs. If it should not be closed, please comment! Thank you for your contributions.

stale bot added the stale label on Jul 9, 2019
stale bot removed the stale label on Jul 15, 2019