Add support for TLS Server Name Indication (SNI) extension #32517

Closed
jaymode opened this issue Jul 31, 2018 · 16 comments
Labels
:Distributed/Network Http and internode communication implementations >enhancement :Security/TLS SSL/TLS, Certificates

Comments

@jaymode
Member

jaymode commented Jul 31, 2018

Java 8+ supports the TLS Server Name Indication (SNI) extension, which we should add support for. SNI will enable remote connections through a proxy to be routed to the correct endpoint. This is especially important for scenarios like cross-cluster search, where connections from one cluster to another need to go through a proxy.
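
For context, the JDK exposes SNI through javax.net.ssl (SNIHostName, SSLParameters). Below is a minimal, self-contained sketch of attaching a server name to a client-mode SSLEngine; the class and method names are illustrative, not Elasticsearch code:

    import java.util.Collections;
    import javax.net.ssl.SNIHostName;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLParameters;

    public final class SniClientEngine {
        // Create a client-mode engine for the given peer and advertise the peer's
        // host name via the SNI extension in the ClientHello.
        public static SSLEngine create(SSLContext context, String host, int port) {
            SSLEngine engine = context.createSSLEngine(host, port);
            engine.setUseClientMode(true);
            SSLParameters params = engine.getSSLParameters();
            params.setServerNames(Collections.singletonList(new SNIHostName(host)));
            engine.setSSLParameters(params);
            return engine;
        }
    }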

@jaymode jaymode added >enhancement :Security/TLS SSL/TLS, Certificates labels Jul 31, 2018
@elasticmachine
Collaborator

Pinging @elastic/es-security

@robgil

robgil commented Jul 31, 2018

subscribe

@s1monw
Contributor

s1monw commented Aug 13, 2018

@jaymode @tbrooks8 my plan for this was to add SNI support to ConnectionProfiles and then pass it down to TcpTransport#initiateChannel, which would allow us to hand over the SNI name to the right places. Does this sound reasonable, or am I missing something?

@jaymode
Member Author

jaymode commented Aug 13, 2018

That sounds reasonable to me

@Tim-Brooks
Contributor

This is only relevant for security transports, right? Is the plaintext transport supposed to throw exceptions if a connection profile that requests SNI features is passed down?

Also, I think this is actually (maybe) going to be a little tricky. Is the SNI name that is used always the hostname, or can it be something else? In both NIO and Netty, here is how security channels are created:

        @Override
        public void connect(ChannelHandlerContext ctx, SocketAddress remoteAddress,
                            SocketAddress localAddress, ChannelPromise promise) throws Exception {
            final SSLEngine sslEngine;
            if (hostnameVerificationEnabled) {
                InetSocketAddress inetSocketAddress = (InetSocketAddress) remoteAddress;
                // we create the socket based on the name given. don't reverse DNS
                sslEngine = sslService.createSSLEngine(sslConfiguration, inetSocketAddress.getHostString(),
                        inetSocketAddress.getPort());
            } else {
                sslEngine = sslService.createSSLEngine(sslConfiguration, null, -1);
            }

            sslEngine.setUseClientMode(true);
            ctx.pipeline().replace(this, "ssl", new SslHandler(sslEngine));
            super.connect(ctx, remoteAddress, localAddress, promise);
        }

        @Override
        public NioTcpChannel createChannel(NioSelector selector, SocketChannel channel) throws IOException {
            NioTcpChannel nioChannel = new NioTcpChannel(profileName, channel);
            Supplier<InboundChannelBuffer.Page> pageSupplier = () -> {
                Recycler.V<byte[]> bytes = pageCacheRecycler.bytePage(false);
                return new InboundChannelBuffer.Page(ByteBuffer.wrap(bytes.v()), bytes::close);
            };
            TcpReadWriteHandler readWriteHandler = new TcpReadWriteHandler(nioChannel, SecurityNioTransport.this);
            InboundChannelBuffer buffer = new InboundChannelBuffer(pageSupplier);
            Consumer<Exception> exceptionHandler = (e) -> onException(nioChannel, e);

            SocketChannelContext context;
            if (sslEnabled) {
                SSLEngine sslEngine;
                SSLConfiguration defaultConfig = profileConfiguration.get(TcpTransport.DEFAULT_PROFILE);
                SSLConfiguration sslConfig = profileConfiguration.getOrDefault(profileName, defaultConfig);
                boolean hostnameVerificationEnabled = sslConfig.verificationMode().isHostnameVerificationEnabled();
                if (hostnameVerificationEnabled) {
                    InetSocketAddress inetSocketAddress = (InetSocketAddress) channel.getRemoteAddress();
                    // we create the socket based on the name given. don't reverse DNS
                    sslEngine = sslService.createSSLEngine(sslConfig, inetSocketAddress.getHostString(), inetSocketAddress.getPort());
                } else {
                    sslEngine = sslService.createSSLEngine(sslConfig, null, -1);
                }
                SSLDriver sslDriver = new SSLDriver(sslEngine, isClient);
                context = new SSLChannelContext(nioChannel, selector, exceptionHandler, sslDriver, readWriteHandler, buffer, ipFilter);
            } else {
                context = new BytesChannelContext(nioChannel, selector, exceptionHandler, readWriteHandler, buffer, ipFilter);
            }
            nioChannel.setContext(context);

            return nioChannel;
        }

It is somewhere around there that we will need to set the SNI parameters on the SSLEngine; we just need to have the hostname available at that point. We don't have the connection profile at that step. Maybe we will set the SNI parameters in the SSLService?
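
One possible hook, sketched under the assumption that the name is available by then: right after each sslService.createSSLEngine(...) call above, the engine's SSLParameters could be updated before it is wrapped in the SslHandler/SSLDriver. The helper below (applySni, and where it is called from) is hypothetical, not the eventual implementation:

    import java.util.Collections;
    import javax.net.ssl.SNIHostName;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLParameters;

    final class SniSupport {
        // Hypothetical helper: attach an SNI host name to an engine returned by
        // sslService.createSSLEngine(...); a null name means "send no SNI".
        static void applySni(SSLEngine sslEngine, String sniName) {
            if (sniName == null) {
                return;
            }
            SSLParameters parameters = sslEngine.getSSLParameters();
            parameters.setServerNames(Collections.singletonList(new SNIHostName(sniName)));
            sslEngine.setSSLParameters(parameters);
        }
    }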

@s1monw
Contributor

s1monw commented Aug 14, 2018

@tbrooks8 that's a lot of questions. The most relevant one, I guess, is this:

Is the SNI name that is used always the hostname? Or can it be something else?

@jaymode @alexbrasetvik is the hostname enough for iteration 1, or do we need to do anything else here? I also wonder if we need to set this parameter conditionally, i.e. only for CCR / CCS connections but not for within-cluster connections?

@alexbrasetvik
Contributor

I think a hostname should be enough for iteration 1. That would not let us route to specific nodes, but that is fine for now.

It could be useful to have an optional hostname to override what's presented as the SNI hostname, as otherwise we'll have to have the publicly visible (and actually resolving) hostname be a SAN in the target certificate.

cc @nkvoll @AlexP-Elastic @franekrichardson to keep me honest

@s1monw
Contributor

s1monw commented Aug 16, 2018

I had a chat with @alexbrasetvik and subsequently with @tbrooks8, and it seems doable to pass down a per-connection SNI name to the transport. My idea for the API end is this:

  • we extend TcpTransport#initiateChannel to take the DiscoveryNode as an argument instead of only the address (or maybe we can just pass down the DiscoveryNode).
  • if SSL is configured to pass on SNI values, we read them from the DiscoveryNode and by default only use the hostname.
  • we let users configure SNI names for a DiscoveryNode via node.attr.server_names or some other attribute, since these are available in the discovery node and we can access them inside the security code.

This would also work for transport clients if they use sniffing through a proxy, which is nice. I hope this makes sense.
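
A minimal sketch of the name-resolution side of this plan, assuming a hypothetical helper; the exact attribute key (later discussed as "server_name" / "sni_server_name" in this thread) and the method name are assumptions:

    import java.util.Map;

    final class SniNameResolver {
        // Hypothetical helper: prefer an explicit SNI attribute configured on the
        // node and fall back to the node's hostname otherwise.
        static String resolveSniName(Map<String, String> nodeAttributes, String hostName) {
            String configured = nodeAttributes.get("server_name");
            return configured != null ? configured : hostName;
        }
    }

In Elasticsearch terms, the attributes map would come from DiscoveryNode#getAttributes(), which is what makes the node.attr.* approach in the plan above attractive.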

Tim-Brooks added a commit to Tim-Brooks/elasticsearch that referenced this issue Aug 17, 2018
This is related to elastic#32517. This commit passes the DiscoveryNode to the
initiateChannel method for the different Transport implementations. This
will allow additional attributes (besides just the socket address) to be
used when opening channels.
Tim-Brooks added a commit that referenced this issue Aug 20, 2018
This is related to #32517. This commit passes the DiscoveryNode to the
initiateChannel method for the different Transport implementations. This
will allow additional attributes (besides just the socket address) to be
used when opening channels.
@Tim-Brooks
Contributor

Tim-Brooks commented Aug 21, 2018

  • DiscoveryNode as an argument instead of only the address
  • SSL is configured to pass on SNI values we read them from DiscoveryNode and by default only use the hostname
  • SNI names for a DiscoveryNode via node.attr.server_names or some other attr since these are available in the discovery node

@Tim-Brooks
Contributor

Hi @s1monw, next I am going to add an optional SNI server_name to the discovery node and, if it is present, add it to the SSLEngine in the security netty transport. Do we want a setting that must be enabled to use the server_name if it is present?

@elasticmachine
Collaborator

Pinging @elastic/es-core-infra

@jasontedor jasontedor added the :Distributed/Network Http and internode communication implementations label Aug 21, 2018
@Tim-Brooks
Contributor

Tim-Brooks commented Aug 21, 2018

Oh - I see that there is an attribute map already. So you want the SNI server name to go in the map with a key sni_server_name?

@s1monw
Contributor

s1monw commented Aug 22, 2018

Oh - I see that there is an attribute map already. So you want the SNI server name to go in the map with a key sni_server_name?

Yeah, I'd make this attribute optional, i.e. by default we would just use the hostname, and if the attr is present we use it as well.

s1monw added a commit to s1monw/elasticsearch that referenced this issue Aug 22, 2018
This adds support for connecting to a remote cluster through
a TCP proxy. A remote cluster can be configured with an additional
`search.remote.$clustername.proxy` setting. This proxy will be used
to connect to remote nodes for every node connection established.
We still try to sniff the remote cluster and connect to nodes directly
through the proxy, which has to support some kind of routing to these nodes.
However, this routing mechanism requires the handshake request to include some
information about where to route to, which is not yet implemented. The effort
to use the hostname and an optional node attribute for routing is tracked
in elastic#32517

Closes elastic#31840
s1monw added a commit that referenced this issue Aug 25, 2018
This adds support for connecting to a remote cluster through
a TCP proxy. A remote cluster can be configured with an additional
`search.remote.$clustername.proxy` setting. This proxy will be used
to connect to remote nodes for every node connection established.
We still try to sniff the remote cluster and connect to nodes directly
through the proxy, which has to support some kind of routing to these nodes.
However, this routing mechanism requires the handshake request to include some
information about where to route to, which is not yet implemented. The effort
to use the hostname and an optional node attribute for routing is tracked
in #32517

Closes #31840
s1monw added a commit that referenced this issue Aug 25, 2018
This adds support for connecting to a remote cluster through
a TCP proxy. A remote cluster can be configured with an additional
`search.remote.$clustername.proxy` setting. This proxy will be used
to connect to remote nodes for every node connection established.
We still try to sniff the remote cluster and connect to nodes directly
through the proxy, which has to support some kind of routing to these nodes.
However, this routing mechanism requires the handshake request to include some
information about where to route to, which is not yet implemented. The effort
to use the hostname and an optional node attribute for routing is tracked
in #32517

Closes #31840
Tim-Brooks added a commit that referenced this issue Sep 5, 2018
This commit is related to #32517. It allows a "server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. This functionality is only implemented for
the netty security transport.
Tim-Brooks added a commit to Tim-Brooks/elasticsearch that referenced this issue Sep 6, 2018
This is related to elastic#32517. This commit passes the DiscoveryNode to the
initiateChannel method for the different Transport implementations. This
will allow additional attributes (besides just the socket address) to be
used when opening channels.
Tim-Brooks added a commit that referenced this issue Sep 7, 2018
This is related to #32517. This commit passes the DiscoveryNode to the
initiateChannel method for the different Transport implementations. This
will allow additional attributes (besides just the socket address) to be
used when opening channels.
Tim-Brooks added a commit to Tim-Brooks/elasticsearch that referenced this issue Sep 7, 2018
This commit is related to elastic#32517. It allows a "server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. This functionality is only implemented for
the netty security transport.
@s1monw
Contributor

s1monw commented Sep 12, 2018

@tbrooks8 can we close this, or do we wait until SNI is implemented on the other secure transport? Do we have an issue for the latter?

@Tim-Brooks
Contributor

We can close it after the backport to 6.x (which is just waiting on the build to pass).

Tim-Brooks added a commit that referenced this issue Sep 13, 2018
This commit is related to #32517. It allows a "server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. This functionality is only implemented for
the netty security transport.
@Tim-Brooks
Contributor

Closing as the work has been merged

Tim-Brooks added a commit to Tim-Brooks/elasticsearch that referenced this issue Nov 26, 2018
This commit is related to elastic#32517. It allows an "sni_server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. Prior to this commit, this functionality
was only supported for the netty transport. This commit adds this
functionality to the security nio transport.
Tim-Brooks added a commit that referenced this issue Nov 27, 2018
This commit is related to #32517. It allows an "sni_server_name"
attribute on a DiscoveryNode to be propagated to the server using
the TLS SNI extension. Prior to this commit, this functionality
was only supported for the netty transport. This commit adds this
functionality to the security nio transport.
DaveCTurner added a commit to DaveCTurner/elasticsearch that referenced this issue Mar 20, 2019
In elastic#33062 we introduced the `cluster.remote.*.proxy` setting for proxied
connections to remote clusters, but left it deliberately undocumented since it
needed followup work so that it could work with SNI. However, since elastic#32517 is
now closed we can add this documentation and remove the comment about its lack
of documentation.
DaveCTurner added a commit to DaveCTurner/elasticsearch that referenced this issue Mar 28, 2019
In elastic#33062 we introduced the `cluster.remote.*.proxy` setting for proxied
connections to remote clusters, but left it deliberately undocumented since it
needed followup work so that it could work with SNI. However, since elastic#32517 is
now closed we can add this documentation and remove the comment about its lack
of documentation.
DaveCTurner added a commit that referenced this issue Mar 28, 2019
In #33062 we introduced the `cluster.remote.*.proxy` setting for proxied
connections to remote clusters, but left it deliberately undocumented since it
needed followup work so that it could work with SNI. However, since #32517 is
now closed we can add this documentation and remove the comment about its lack
of documentation.
DaveCTurner added a commit that referenced this issue Mar 28, 2019
In #33062 we introduced the `cluster.remote.*.proxy` setting for proxied
connections to remote clusters, but left it deliberately undocumented since it
needed followup work so that it could work with SNI. However, since #32517 is
now closed we can add this documentation and remove the comment about its lack
of documentation.