Conversation
Just updated dependencies to include the following unreleased changes:
All tests pass locally now. Just waiting to see what happens on CI. 🤞
@backkem all the tests pass if we disable the race detector. I think the sheer number of goroutines we are spinning up is just too much for the race detector to handle, especially on the TravisCI machines, which are not particularly powerful. I vote that we disable the race detector for the stress tests and only use it for the other tests. What do you think? https://github.com/libp2p/go-libp2p-transport/blob/045097a6a962aeccac5e06d672a9b2489cf4b68c/test/stream.go#L158 suggests the race detector only supports ~8k funcs (I think they mean goroutines), and the stress tests themselves spin up 5k. We probably run much more than that with everything going on under the hood.
Ok, it would be nice if we could do this while also using the test suite directly again:
Yes, indeed. The race detector gives up as soon as you spin up more than 8192 concurrent goroutines. It's easy to detect when we're running with the race checker via build tags; we tend to use https://github.com/ipfs/go-detect-race to avoid reinventing the wheel.
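A minimal sketch of that build-tag pattern (file names, constants, and goroutine budgets here are hypothetical illustrations, not go-detect-race's actual API). In a real repo the constant would live in two files guarded by build tags, e.g. a `detect_race.go` with `//go:build race` setting it to true and a `detect_norace.go` with `//go:build !race` setting it to false; here it is inlined so the sketch compiles standalone:

```go
package main

import "fmt"

// raceEnabled would normally be defined per build tag:
// true in the race-tagged file, false otherwise.
const raceEnabled = false

// maxGoroutines picks a stress-test goroutine budget that stays
// well under the race detector's ~8192-goroutine limit when it
// is enabled, and uses the full budget otherwise.
func maxGoroutines() int {
	if raceEnabled {
		return 1000
	}
	return 5000
}

func main() {
	fmt.Println(maxGoroutines())
}
```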
The update to webrtc v2.0.16 and go-libp2p-core seems to cause a deadlock of some sort. I don't have the time to look into this ATM :(
Figured out the problem currently being faced: a pion/datachannel.DataChannel is packet-based, while go-mplex expects a stream-based net.Conn, which has slightly different characteristics (despite sharing the same interface, since both are connection-oriented). Overview of the different types of connections as I see them:
Side-note: properties a connection may or may not have:
It feels like libp2p should model for that.
@albrow Last commit makes the entire test suite green. I opted to use a fixed buffer to overcome the stream- vs. packet-based issue. This point definitely needs further investigation and/or discussion with the libp2p guys. However, I think it's mergeable in the meantime. Do you mind reviewing?
if w.bufEnd == 0 {
	n := 0
	n, err = w.channel.Read(w.buf)
@backkem I must be missing something. We read from w.buf here, but when do we ever write to it?
This line means we read from the datachannel into the packet buffer.
The idea is to allow an upper layer to read the packet in chunks. This is allowed when reading from a stream but it causes you to lose data when reading on a packet based connection. I'm not sure if there can be better solutions without the upper layer taking into account that it's using a packet based connection. This also ties into a discussion of support for packet based transports in libp2p, e.g. UDP.
Ah, I was thrown off by the method name Read. But that line is actually "reading into" w.buf, aka writing to it.
@backkem LGTM. I think we should go ahead and merge this PR since it passes all the tests. We can separately start a broader discussion about how to handle stream- vs. packet-based connections. (I can't leave a review in GitHub because I'm the original author of the PR. If you hit the approve button, I think GitHub will let us merge.)
The latest release might fix the performance problems we were experiencing. Waiting to see what happens in CI in order to confirm.