
feat: make client to wait when no free connections #764

Merged
erikdubbelboer merged 10 commits into valyala:master from chanjarster:feature/blocking-wait-conn on Apr 3, 2020

Conversation

chanjarster (Contributor)

Add a new field, MaxConnWaitTimeout, on Client.

  • If MaxConnWaitTimeout == 0 and all connections are busy, Client returns ErrNoFreeConns immediately (the current behavior).
  • If MaxConnWaitTimeout > 0 and all connections are busy, Client waits up to MaxConnWaitTimeout for a connection to become free; if none does, it returns ErrNoFreeConns (see the sketch below).
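To illustrate the new behavior, a minimal usage sketch (the URL, pool size, and timeout here are placeholder values, not part of the PR):

package main

import (
	"fmt"
	"time"

	"github.com/valyala/fasthttp"
)

func main() {
	c := &fasthttp.Client{
		MaxConnsPerHost:    1,           // deliberately tiny pool (placeholder)
		MaxConnWaitTimeout: time.Second, // wait up to 1s for a free connection
	}

	// Placeholder URL.
	status, _, err := c.Get(nil, "http://127.0.0.1:8080/")
	if err == fasthttp.ErrNoFreeConns {
		// The pool stayed fully busy for the whole MaxConnWaitTimeout.
		fmt.Println("no free connections within MaxConnWaitTimeout")
		return
	}
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("status:", status)
}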

erikdubbelboer (Collaborator) left a comment

First thing I notice is a lot of potential heap allocations, which can slow things down. Did you try running any benchmarks or checking the number of allocations with a test like this?

func TestAllocationClient(t *testing.T) {
	ln, err := net.Listen("tcp4", "127.0.0.1:0")
	if err != nil {
		t.Fatalf("cannot listen: %s", err)
	}
	defer ln.Close()

	s := &Server{
		Handler: func(ctx *RequestCtx) {
		},
	}
	go s.Serve(ln) //nolint:errcheck

	c := &Client{}
	url := "http://test:test@" + ln.Addr().String() + "/foo?bar=baz"

	n := testing.AllocsPerRun(100, func() {
		req := AcquireRequest()
		res := AcquireResponse()
		req.SetRequestURI(url)
		if err := c.Do(req, res); err != nil {
			t.Fatal(err)
		}
		ReleaseRequest(req)
		ReleaseResponse(res)
	})
	if n != 0 {
		t.Fatalf("expected 0 allocations, got %f", n)
	}
}

client.go (resolved review thread)
use AcquireTimer to do timeout instead of using context
chanjarster (Contributor, Author) commented Mar 14, 2020

> First thing I notice is a lot of potential heap allocations, which can slow things down. Did you try running any benchmarks or checking the number of allocations with a test like this? [quoted review comment and test from above]

I've noticed that the current benchmarks and allocation tests assume MaxConnsPerHost is always large enough, so no request ever has to wait.

By default (MaxConnWaitTimeout == 0) this implementation doesn't queue the request and returns ErrNoFreeConns immediately.

I'll provide an allocation test based on a slow server with MaxConnWaitTimeout enabled later, but I suspect that queueing the request will inevitably cause heap allocations.
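For reference, a rough sketch of what such a benchmark could look like (this is an illustration, not the BenchmarkClientGetEndToEndWaitConn* code committed later; the sleep duration and pool size are arbitrary). The handler sleeps so connections stay busy, forcing parallel requests to queue on MaxConnWaitTimeout:

// Assumes this lives inside package fasthttp (imports: net, testing, time).
func BenchmarkClientWaitConnSketch(b *testing.B) {
	ln, err := net.Listen("tcp4", "127.0.0.1:0")
	if err != nil {
		b.Fatalf("cannot listen: %s", err)
	}
	defer ln.Close()

	s := &Server{
		Handler: func(ctx *RequestCtx) {
			time.Sleep(10 * time.Millisecond) // slow server keeps the connection busy
		},
	}
	go s.Serve(ln) //nolint:errcheck

	c := &Client{
		MaxConnsPerHost:    1,               // force contention for the single connection
		MaxConnWaitTimeout: 5 * time.Second, // queue instead of failing fast
	}
	url := "http://" + ln.Addr().String() + "/"

	b.ReportAllocs()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			req := AcquireRequest()
			res := AcquireResponse()
			req.SetRequestURI(url)
			if err := c.Do(req, res); err != nil {
				b.Error(err)
				return
			}
			ReleaseRequest(req)
			ReleaseResponse(res)
		}
	})
}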

Add BenchmarkClientGetEndToEndWaitConn* to test heap allocation
in waiting for free connection situation
Add BenchmarkHTTPClientGetEndToEndWaitConn* to test heap allocation
in waiting for free connection situation
chanjarster (Contributor, Author) commented Mar 14, 2020

Hi @erikdubbelboer, I added some benchmarks to show the heap allocations while waiting for a free connection; below are my results:

GOMAXPROCS=4 go test -bench='EndToEndWaitConn' -benchmem -benchtime=10s
goos: darwin
goarch: amd64
pkg: github.com/valyala/fasthttp
BenchmarkClientGetEndToEndWaitConn1Inmemory-4             	     237	  50512742 ns/op	     904 B/op	       4 allocs/op
BenchmarkClientGetEndToEndWaitConn10Inmemory-4            	     237	  50466510 ns/op	     551 B/op	       3 allocs/op
BenchmarkClientGetEndToEndWaitConn100Inmemory-4           	     566	  17675896 ns/op	     912 B/op	       8 allocs/op
BenchmarkClientGetEndToEndWaitConn1000Inmemory-4          	    7539	   1332690 ns/op	     525 B/op	       5 allocs/op
BenchmarkClientGetEndToEndWaitConn10kInmemory-4           	   41928	    239216 ns/op	    1643 B/op	      14 allocs/op
BenchmarkNetHTTPClientGetEndToEndWaitConn1Inmemory-4      	     237	  50459617 ns/op	    6788 B/op	      77 allocs/op
BenchmarkNetHTTPClientGetEndToEndWaitConn10Inmemory-4     	     237	  50427900 ns/op	    6840 B/op	      78 allocs/op
BenchmarkNetHTTPClientGetEndToEndWaitConn100Inmemory-4    	     570	  17549629 ns/op	    4550 B/op	      54 allocs/op
BenchmarkNetHTTPClientGetEndToEndWaitConn1000Inmemory-4   	    7824	   1283183 ns/op	    3637 B/op	      41 allocs/op
PASS
ok  	github.com/valyala/fasthttp	229.573s

fix bug in BenchmarkHTTPClientGetEndToEndWaitConn*
fix bug in TestHostClientMaxConnWaitTimeoutSuccess make it wait
longer to avoid ErrNoFreeConns on travis-ci
client_timing_test.go (outdated; resolved review thread)
fix do not compile benchmark(NetHTTP?)ClientGetEndToEndWaitConn
if go version < 1.11.x
fix the bug that if deadline is earlier than MaxConnWaitTimeout,
still wait MaxConnWaitTimeout which later than deadline.
fix race condition in TestHostClientMaxConnWaitTimeoutError
fix bug in TestHostClientMaxConnWaitTimeoutWithEarlierDeadline
if q := c.connsWait; q != nil && q.len() > 0 {
	for q.len() > 0 {
		w := q.popFront()
		if w.waiting() {
erikdubbelboer (Collaborator):

popFront can return nil, in which case this will panic.

chanjarster (Contributor, Author):

Since q.len() > 0, popFront cannot return nil here.
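For context, since this PR is based on net/http/transport.go, popFront is presumably the two-slice queue from net/http's wantConnQueue; a sketch of that shape (field names copied from net/http, so treat them as an assumption about this PR's exact code):

type wantConnQueue struct {
	head    []*wantConn // front of the queue, consumed from headPos
	headPos int
	tail    []*wantConn // new waiters are appended here
}

func (q *wantConnQueue) len() int {
	return len(q.head) - q.headPos + len(q.tail)
}

// popFront returns nil only when the queue is empty, which is why the
// q.len() > 0 guard above makes the nil branch unreachable.
func (q *wantConnQueue) popFront() *wantConn {
	if q.headPos >= len(q.head) {
		if len(q.tail) == 0 {
			return nil
		}
		// Promote the tail to be the new head.
		q.head, q.headPos, q.tail = q.tail, 0, q.head[:0]
	}
	w := q.head[q.headPos]
	q.head[q.headPos] = nil // allow the waiter to be garbage collected
	q.headPos++
	return w
}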


w := &wantConn{
	ready: make(chan struct{}, 1),
}
erikdubbelboer (Collaborator):

Would it be possible to reuse wantConn or the channel? Or is that going to make the code so complicated that it's not worth it?

chanjarster (Contributor, Author):

For the first question: if we want to reuse the wantConn, we must also reuse the channel, but AFAIK there's no way to reopen a closed channel.
In terms of heap allocation: yes, this does allocate and slow things down, but when we're waiting for a free connection, the wait dominates the latency, not GC.
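
A sketch of the one-shot signaling involved (modeled on net/http's wantConn; the method bodies below are simplified assumptions, not this PR's exact code). Delivery closes ready, and Go offers no way to reopen a closed channel, which is what rules out pooling the struct:

// Assumes package fasthttp (import: sync).
type wantConn struct {
	ready chan struct{}
	mu    sync.Mutex // protects conn and err
	conn  *clientConn
	err   error
}

// waiting reports whether the waiter is still blocked on ready.
func (w *wantConn) waiting() bool {
	select {
	case <-w.ready:
		return false // already delivered to, or cancelled
	default:
		return true
	}
}

// tryDeliver hands a connection (or error) to the waiter exactly once.
func (w *wantConn) tryDeliver(conn *clientConn, err error) bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.conn != nil || w.err != nil {
		return false // already delivered
	}
	w.conn, w.err = conn, err
	close(w.ready) // one-shot signal; a closed channel cannot be reused
	return true
}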

For the second question: many HTTP clients provide connection pooling with a waiting mechanism (blocking). This waiting mechanism is very useful because it simplifies code on the caller side; otherwise the caller has two options:

  1. Busy waiting: retry immediately, up to some maximum retry count, which burns CPU time.
  2. Sleep waiting: sleep for a while and retry, which can't pick up a freed connection promptly (see the sketch after this list).

Neither is perfect, so I think the waiting mechanism is worth it.
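
As an illustration of option 2, a sketch of the sleep-retry loop a caller would otherwise have to write (the helper name, retry count, and interval are arbitrary placeholders; imports: time and github.com/valyala/fasthttp):

func doWithRetry(c *fasthttp.Client, req *fasthttp.Request, res *fasthttp.Response) error {
	const maxRetries = 10
	for i := 0; i < maxRetries; i++ {
		err := c.Do(req, res)
		if err != fasthttp.ErrNoFreeConns {
			return err // success, or an unrelated error
		}
		// A connection may free up right after we start sleeping,
		// so this always reacts late; that is the drawback.
		time.Sleep(50 * time.Millisecond)
	}
	return fasthttp.ErrNoFreeConns
}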

As for the complexity of this PR: it is actually based on net/http/transport.go. Maybe there is a simpler way; we can have a discussion about that :)

erikdubbelboer (Collaborator):

I have thought about it and I agree. Let's merge this!

erikdubbelboer merged commit 70b1d3b into valyala:master on Apr 3, 2020.
chanjarster deleted the feature/blocking-wait-conn branch on April 6, 2020 12:56.