
Round out diagnostic parameters #157

Merged: 1 commit from feat/additional-diagnostics into master on Mar 5, 2021

Conversation

hannahhoward (Collaborator)

Goals

Add some final parameters and diagnostics to the graphsync testplan, and document exactly what all the parameters do

Implementation

  • Add a parameter that turns the HTTP comparison into a libp2p-over-HTTP comparison
  • Add a parameter that configures the maximum number of simultaneous requests the responder will process (this is passed to the graphsync constructor; a sketch of how these parameters might be wired up follows this list)
  • Add block diagnostics: precise logging of the time it takes to get each block (block queued and block sent on the responder, response received and block processed on the requestor)
  • There are a lot of parameters, so add a README that lays out every parameter you can change and explains why you might want to change some of them
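
For concreteness, here is a minimal sketch (in Go, against the testground SDK) of how parameters like these might be read inside the test plan and turned into graphsync constructor options. The parameter names, the helper, and the gsimpl.MaxInProgressRequests option name are assumptions for illustration, not necessarily the exact identifiers this PR uses.

package main

import (
	gsimpl "github.com/ipfs/go-graphsync/impl"
	"github.com/testground/sdk-go/runtime"
)

// readDiagnosticParams is a hypothetical helper that reads the new test-plan
// parameters and converts them into graphsync constructor options.
func readDiagnosticParams(runenv *runtime.RunEnv) (useLibP2pHTTP bool, gsOpts []gsimpl.Option) {
	// Assumed parameter: run the HTTP comparison over libp2p instead of plain HTTP.
	useLibP2pHTTP = runenv.BooleanParam("use_libp2p_http")

	// Assumed parameter: cap on the number of simultaneous requests the
	// responder will process, passed to the graphsync constructor as an
	// option (option name assumed).
	maxInProgress := runenv.IntParam("max_in_progress_requests")
	gsOpts = append(gsOpts, gsimpl.MaxInProgressRequests(uint64(maxInProgress)))

	return useLibP2pHTTP, gsOpts
}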

Add a libp2p-over-http test. Add a configurable max-requests-in-progress parameter. Add block diagnostics. Add a README explaining the parameters.
raulk merged commit 478a6f1 into master on Mar 5, 2021
raulk deleted the feat/additional-diagnostics branch on March 5, 2021 at 14:04
Comment on lines +414 to +420
var resp *http.Response
var err error
if useLibP2p {
	// Fetch over HTTP tunnelled through a libp2p stream, addressing the responder by peer ID.
	resp, err = client.Get(fmt.Sprintf("libp2p://%s/%s", p.peerAddr.ID.String(), c.String()))
} else {
	// Fetch over plain HTTP, addressing the responder by IP address.
	resp, err = client.Get(fmt.Sprintf("http://%s:8080/%s", p.ip.String(), c.String()))
}
raulk (Member)

This will make the HTTP case run in parallel with the Graphsync nodes that haven't finished yet, thus making the results unreliable.

We could synchronize on a barrier, then call runtime.GC() to try to free resources. But I generally advise against running (a) the subject of the test and (b) the control in the same process, as the results won't be reliable. It's best to run them completely isolated from one another.
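
A minimal sketch of that barrier-plus-GC idea, assuming the testground sync client is available; the state name and the helper itself are hypothetical:

package main

import (
	"context"
	goruntime "runtime"

	"github.com/testground/sdk-go/runtime"
	"github.com/testground/sdk-go/sync"
)

// waitForGraphsyncPhase (hypothetical) gates the HTTP control run behind a
// barrier so it only starts once every instance has finished its graphsync
// transfer, then nudges the garbage collector to release memory held by the
// previous phase.
func waitForGraphsyncPhase(ctx context.Context, runenv *runtime.RunEnv, client sync.Client) error {
	// Signal that this instance finished the graphsync phase and wait until
	// every other instance has signalled the same state.
	if _, err := client.SignalAndWait(ctx, sync.State("graphsync-complete"), runenv.TestInstanceCount); err != nil {
		return err
	}
	// Best-effort attempt to free resources before the control transfer starts.
	goruntime.GC()
	return nil
}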

@aarshkshah1992 will be doing a large refactor in this test plan, turning it into a generic data transfer plan that:

  • runs only one transport per run: graphsync, tcp, http, http-over-libp2p, or raw-libp2p.
  • for libp2p transports, allows you to select the muxer and secure channel used to instantiate the host (see the sketch below).
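
As a rough illustration of that second point, host construction might look something like the sketch below, written against a recent go-libp2p release (package paths and the libp2p.New signature have changed across versions, so treat the identifiers as assumptions):

package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/p2p/muxer/yamux"
	"github.com/libp2p/go-libp2p/p2p/security/noise"
	libp2ptls "github.com/libp2p/go-libp2p/p2p/security/tls"
)

// newHost (hypothetical) builds a libp2p host with the secure channel selected
// by a test-plan parameter and yamux as the stream muxer.
func newHost(secureChannel string) (host.Host, error) {
	var security libp2p.Option
	switch secureChannel {
	case "noise":
		security = libp2p.Security(noise.ID, noise.New)
	case "tls":
		security = libp2p.Security(libp2ptls.ID, libp2ptls.New)
	default:
		return nil, fmt.Errorf("unknown secure channel: %s", secureChannel)
	}
	return libp2p.New(
		security,
		libp2p.Muxer(yamux.ID, yamux.DefaultTransport),
	)
}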

hannahhoward (Collaborator, Author) commented on Mar 5, 2021

@raulk a couple things:

  1. There already is a generic transfer plan -- Beyond Bitswap -- I'm not sure it's the right thing for your use case, but you should at least give it a look before writing from scratch, as it actually supports much more complex data transfers (real system directories, etc.).
  2. If you want to write a large refactor that's fine, but I think a generic transfer plan should live outside the graphsync repo.
  3. I don't believe the parallel cases make that much of a difference to the overall conclusion. I've posted in the thread as to why I think this, and I also wrote a quick hack to run the HTTP / HTTP-over-libp2p cases separately, which produced the same end result. I think that conclusively proves the problem is not in graphsync, and I'd ideally like to move this work out of the repo to reflect that.

@aschmahmann mentioned this pull request on May 14, 2021
marten-seemann pushed a commit that referenced this pull request on Mar 2, 2023:

* test: add tests for auto-restarting a push channel

* fix: push channel monitor watch for accept and complete events