Address field in listener not working (upstream connect error or disconnect/reset before headers) #815
"upstream connect error or disconnect/reset before headers" means that Envoy cannot connect to the upstream that is being routed to. Your listener config is probably fine. I would use a combination of the /stats and /clusters admin endpoints' output to debug further, and verify that you can connect to your backend services from within the Envoy container.
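To make that concrete, here is a minimal sketch of the debugging loop suggested above. The admin port (9901) and the cluster name `service1` are assumptions taken from this thread, not universal defaults:

```shell
# Live queries against the Envoy admin interface (port is an assumption):
#   curl -s http://127.0.0.1:9901/clusters | grep service1
#   curl -s http://127.0.0.1:9901/stats | grep service1
#
# /clusters emits cluster::host::stat::value lines; the health_flags entry
# is the quickest way to spot an upstream Envoy considers unhealthy.
# Filtering a captured sample of that output:
sample='service1::127.0.0.1:9001::health_flags::healthy
service1::127.0.0.1:9001::cx_connect_fail::0'
printf '%s\n' "$sample" | grep 'health_flags'
# → service1::127.0.0.1:9001::health_flags::healthy
```

If `health_flags` shows anything other than healthy, or `cx_connect_fail` is climbing, the problem is between Envoy and the backend rather than in the listener.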
@mattklein123 Thanks for taking a look! The text below is a bit long owing to the outputs I've pasted - thanks in advance for reading through them! When I look at the /clusters output, I see service1 and service2 there, with a series of entries for 127.0.0.1:9001 (the python backend service), but for it, I see the
With the /stats output, I see a few counters that seem relevant; I'm pasting only those below (with the complete output in a separate excerpt after that) -
The membership_healthy value shows 1, which I infer means that envoy is able to see the backend service1 in the cluster - is that the case? What are the update attempts referring to? They also seem to have gone through successfully 100% of the time (49 attempts). Complete output -
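A hedged reading of those counters: `membership_healthy` is the number of hosts the cluster currently considers healthy, and `update_attempt`/`update_success` count cluster membership refreshes (e.g. DNS re-resolution), not proxied requests. A quick shell check that every refresh succeeded, using the values reported above (49 of 49), might look like this:

```shell
# Sketch: confirm update_attempt == update_success from captured /stats output.
# The sample values are the ones quoted in this thread.
sample='cluster.service1.membership_healthy: 1
cluster.service1.update_attempt: 49
cluster.service1.update_success: 49'
attempts=$(printf '%s\n' "$sample" | awk -F': ' '/update_attempt/ {print $2}')
successes=$(printf '%s\n' "$sample" | awk -F': ' '/update_success/ {print $2}')
[ "$attempts" = "$successes" ] && echo "all $attempts membership updates succeeded"
# → all 49 membership updates succeeded
```

Note that healthy membership only means Envoy's view of the cluster is good; a request can still fail to connect if the route points at a different cluster or the network path differs from the one curl used.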
Thing is, I'm able to curl the backend service (it runs in the same envoy container) from within the envoy container on both VIPs and localhost without any issues - Here's the netstat output to begin with -
Here's ps -
Now, I'm pinging the backend python simpleHTTP server directly on its 9001 port via one of the VIPs (192.45.67.89) -
Next, I'm pinging the backend python simpleHTTP server directly on its 9001 port via the other VIP (192.45.67.90) again from within the envoy container -
But when I try to go via the VIP on port 80 -
Why is envoy getting a 503 as a response when it should be able to reach the backend service? Finally, the admin access log doesn't show any new entry when I issue a curl on the VIP/service/1 path - I'm guessing that is expected. Are there any other logs that I can enable to view envoy connection activity?
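On the logging question, two standard Envoy options for surfacing connection activity (the admin port 9901 here is an assumption; the `-l` flag and the `/logging` admin endpoint are documented Envoy features):

```shell
# Start Envoy with verbose logging:
envoy -c /etc/envoy.yaml -l debug

# Or raise the log level on a running instance via the admin interface:
curl -X POST 'http://127.0.0.1:9901/logging?level=debug'
```

At debug level, connection attempts and resets to upstreams show up on Envoy's stderr, which is usually enough to see why a 503 is being synthesized.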
I skimmed through this quickly and I don't see any call to service1 at all in the stats above, so they are probably going to service2. I can't tell without seeing the full config, the full dump of stats, and the full dump of clusters output. I won't be able to help you further in this issue; if someone else doesn't help, I would try Gitter for more interactive help. This is a configuration or docker setup issue.
Np, thanks @mattklein123! I'll post this on Gitter.
@mattklein123 This issue was being caused because I plumbed subinterfaces but didn't configure any routing on them. Going via the docker network create and connect commands resolved connectivity issues and we were able to bring up multiple listeners. Thanks for your help on this! |
@vijayendrabvs I am running into the same problem. My golang service is accessible from within the service container on port 9096 but not accessible through the envoy front-proxy container, with exactly the same response as you reported. Can you provide any details on the resolution please? |
I'm running into the same issue today. I can access the service from the container using curl, but I'm not able to access it through the envoy container via http://localhost:10000/symphony. My config follows, starting at static_resources:
Same issue here, is there some way to solve this? |
I resolved my issue by removing http2_protocol_options: {} |
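For anyone hunting for where that option lives: it sits on the cluster definition. A minimal sketch follows - the cluster name, address, and port are placeholders, not taken from anyone's actual config in this thread:

```yaml
static_resources:
  clusters:
  - name: service1
    connect_timeout: 1s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    # http2_protocol_options: {}   # keep only if the upstream really speaks HTTP/2 (e.g. gRPC)
    load_assignment:
      cluster_name: service1
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 9001
```

With that line present, Envoy speaks HTTP/2 to the upstream; an HTTP/1.1-only backend resets the connection, which surfaces as exactly this 503.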
Where did you change that option? |
Share your Envoy config file. I will take a look. |
Envoy is used in my Istio container. But I don't know where to find that config file. |
@danesavot , |
@AmerbankDavd Have you resolved this?
In my Istio 0.5.1, there is no http2_protocol_options: {} at all.
kubectl exec -ti istio-pilot-676d495bf8-9c2px -c istio-proxy -n istio-system -- cat /etc/istio/proxy/envoy_pilot.json
I have added all the recommended changes to get the hello world example running in this repo: https://github.com/oinke/gprc-hello The terminal still shows the same error. Running on macOS Mojave 10.14.2 with Docker version 18.09.2, build 6247962.
@oinke seems like your issue is not related to this issue. I have posted a PR (oinke/gprc-hello#1) to your repo. |
@danesavot I also resolved this by commenting out the empty http2 options - huge thanks! (# http2_protocol_options: {} - it was left over from the envoy tutorial.) Outside of that, for everyone else: if you're running containers on the host, check out Docker networking: https://docs.docker.com/network/network-tutorial-standalone/ I created a custom docker bridge network, ran the other containers with --network, then jumped into the envoy container and made sure I could curl them by name.
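The bridge-network setup described above, sketched as commands - the container names envoyct1 and backend1 are placeholders for whatever your containers are called:

```shell
docker network create envoy-net
docker network connect envoy-net envoyct1
docker network connect envoy-net backend1

# User-defined bridges provide DNS by container name, so from inside the
# envoy container the backend should now be reachable by name:
docker exec envoyct1 curl -s http://backend1:9001/
```

The cluster address in the Envoy config can then point at the container name (e.g. backend1) instead of an IP.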
I had this error and was able to connect with ping, curl, grpcurl etc. no problem. The issue turned out to be the line connect_timeout: 0.25s, which is present in pretty much all envoy yaml demos. In my case I was experimenting with envoy configuration locally (in New Zealand) and connecting to a grpc service in eu-west-1, which fundamentally has a higher connection time than a quarter of a second. Upping that timeout fixes the issue. Hope that helps someone else!
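In config terms, that fix is a one-line change on the cluster. The values here are illustrative; pick a timeout that covers your real network round trip:

```yaml
clusters:
- name: remote_grpc_service      # placeholder name
  # The demos ship connect_timeout: 0.25s; a cross-region TCP handshake can
  # easily exceed that, and a timed-out connect surfaces as this same
  # "upstream connect error" 503.
  connect_timeout: 5s
```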
I'm not sure if this is related to #326 - I'm still referencing that issue since it has the same error message, but on the face of it they seem to be different.
I'm trying to get the address field in a listener to work and I'm unable to do that. I've written a simple shell script as a test harness (Linux only) that spins up an envoy container, envoyct1, with a default config, installs curl and python packages in it, and sets up a python service/1 backend. When I docker exec into envoyct1 and fire curl <VIP1>/service/1, I expect to get a 404. But I see this error instead. If I spin up a python server on a different port and curl the IP:port, it works -
So this doesn't look like a network configuration issue (the curl is being issued from inside the envoy container).
Is this an envoy config issue, or something else?
When I tried to debug this using gdb and a debug envoy build, it looked like a worker thread that handles the connection request somewhere in the connection_manager_impl.cc chain sees a socket close event and so spits out this error. I'm not sure why it should see a socket close event.
Am I doing something wrong with the config? Can someone please take a look?
BTW, it doesn't matter if I have one or two listeners in my config file. It's the same result. Also, it doesn't matter whether I plumb the VIPs or not - using a simple 127.0.0.10 loopback IP yields the same result.
I'm attaching the harness as a zip file. Unzip it and simply run ./setup_ifaces.sh, and it'll spin up an envoy alpine container and do the rest of the plumbing. If you fire ./setup_ifaces.sh ubuntu, it will pull the lyft/envoy ubuntu image instead and do the same stuff there. So basically, this happens across ubuntu/alpine and loopback/eth0. Any pointers/help would be much appreciated.
Thanks!
setup_envoy_multiple_listener.zip