
Add new elasticsearch client #69905

Merged · 55 commits · Jul 8, 2020
Changes from 3 commits
10f0846
add "@elastic/elasticsearch" to dependencies
pgayvallet Jun 25, 2020
e923ce5
first POC of new client
pgayvallet Jun 25, 2020
175e0cb
add logging
pgayvallet Jun 25, 2020
f56a506
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jun 29, 2020
24ac36b
add generation script for client facade API and implementation
pgayvallet Jun 29, 2020
024f3df
add back keepAlive
pgayvallet Jun 30, 2020
cfa547a
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jun 30, 2020
eecd780
add exports from client
pgayvallet Jun 30, 2020
8c6c9fc
add new client mocks
pgayvallet Jun 30, 2020
c7a2cf1
add some doc
pgayvallet Jun 30, 2020
193fecc
fix API usages
pgayvallet Jun 30, 2020
b8935d7
rename legacy client to legacy in service
pgayvallet Jun 30, 2020
f268e63
rename currently unused config/client observable
pgayvallet Jun 30, 2020
73f68b8
wire new client to service & update mocks
pgayvallet Jun 30, 2020
2c5a489
fix mock type
pgayvallet Jun 30, 2020
c7ae6aa
export client types
pgayvallet Jun 30, 2020
93d05ee
add transport.request
pgayvallet Jun 30, 2020
4156411
more doc
pgayvallet Jun 30, 2020
5a3fd9c
migrate version_check to new client
pgayvallet Jun 30, 2020
71a27bb
fix default port logic
pgayvallet Jun 30, 2020
e48f3ad
rename legacy client mocks
pgayvallet Jun 30, 2020
bdacda3
move legacy client mocks to legacy folder
pgayvallet Jun 30, 2020
152b6dc
start adding tests
pgayvallet Jun 30, 2020
b265759
add configure_client tests
pgayvallet Jul 1, 2020
9e356e8
add get_client_facade tests
pgayvallet Jul 1, 2020
353253f
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 1, 2020
7e49f65
bump client to 7.8
pgayvallet Jul 1, 2020
bda204e
add cluster_client tests
pgayvallet Jul 1, 2020
28b8cfc
expose new client on internal contract only
pgayvallet Jul 1, 2020
8cd1ed2
revert using the new client for es version check
pgayvallet Jul 1, 2020
205c100
add service level test for new client
pgayvallet Jul 1, 2020
7819166
update generated API
pgayvallet Jul 1, 2020
d1cdb0a
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 1, 2020
80bfc97
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 1, 2020
bf18e61
Revert "rename legacy client mocks"
pgayvallet Jul 1, 2020
6c4ff93
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 1, 2020
8a397d0
address some review comments
pgayvallet Jul 1, 2020
533212b
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 2, 2020
0a09aee
revert ts-expect-error from unowned files
pgayvallet Jul 2, 2020
1a68ea1
move response mocks to mocks.ts
pgayvallet Jul 3, 2020
f956fb4
Remove generated facade, use ES Client directly
pgayvallet Jul 3, 2020
56db6c4
log queries even in case of error
pgayvallet Jul 3, 2020
d6f1c9d
nits
pgayvallet Jul 3, 2020
83b7246
use direct properties instead of accessors
pgayvallet Jul 3, 2020
a764b10
handle async closing of client
pgayvallet Jul 3, 2020
9d272ba
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 3, 2020
3f1d9bd
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 6, 2020
88ae350
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 6, 2020
cb242e7
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 7, 2020
413a97d
review nits
pgayvallet Jul 7, 2020
c0d7f3a
ElasticSearchClient -> ElasticsearchClient
pgayvallet Jul 7, 2020
022e4a2
add test for encoded querystring
pgayvallet Jul 7, 2020
0dc69b4
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 8, 2020
5acfe96
adapt test file
pgayvallet Jul 8, 2020
f059964
Merge remote-tracking branch 'upstream/master' into kbn-35508-add-new…
pgayvallet Jul 8, 2020
2 changes: 1 addition & 1 deletion package.json
@@ -125,6 +125,7 @@
"@elastic/apm-rum": "^5.2.0",
"@elastic/charts": "19.5.2",
"@elastic/datemath": "5.0.3",
"@elastic/elasticsearch": "^7.7.1",
Member:
The client follows the stack versioning, meaning that using the client 7.x in kibana master will cause issues. You should use the client master branch.
Currently, we are not publishing any 8.x version on npm, but we could do it if it does help you.
Here you can find the compatibility table of the client.
If you want to install the master branch of the client:

npm install elastic/elasticsearch-js#master
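For reference, the same git dependency can also be pinned in `package.json` (a hypothetical snippet, not part of this diff; `github:` is standard npm shorthand for a GitHub branch dependency):

```json
{
  "dependencies": {
    "@elastic/elasticsearch": "github:elastic/elasticsearch-js#master"
  }
}
```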

Contributor (@mshustov, Jun 30, 2020):
Is the client released separately for every Stack release? Should it be another place to sync across the Stack when bumping a Kibana version?

Member:
Yes, we do a release for every major.minor of the Stack; patches are released as soon as needed.

Contributor Author (@pgayvallet, Jun 30, 2020):
Hum, this may be quite problematic AFAIK: kibana master is targeting 8.0, but the current branch (7.9 atm, for example) is targeting 7.x.

That means we would need different versions (with potential differences in APIs) between our kibana master branch and our next-release branch?

This feels like it could become a backport nightmare, doesn't it?

Contributor:
@delvedor Can you elaborate on the typical changes between versions? If newer versions only change to support new or remove deprecated ES functionality, then this shouldn't cause any problems for us that aren't already caused by ES.

But if elasticsearch-js plans to make breaking changes to its API signatures, this adds an additional maintenance burden and we will have to migrate all code within the release timeframe.

Member:
The client follows semantic versioning, so there will never be a breaking change between minor or patch releases, but there might be between majors.
Minor releases are always additive: in a typical minor release you will find new ES endpoints and some additional client features; for example, in the last 2/3 minors, client helpers have been added.

If the client needs to make a breaking change, such as dropping support for a specific version of Node, removing/changing a configuration option, or dropping an API, that will happen in a major release.

The only parts of the client that could have a breaking change between minors are the helpers and the type definitions, which are still experimental (even if they are stable and not expected to change unless there is a very good reason).

Member:
The client does prereleases as soon as there is a feature freeze; if you take a look at the published versions on npm you will see a few RCs.

"@elastic/ems-client": "7.9.3",
"@elastic/eui": "24.1.0",
"@elastic/filesaver": "1.1.2",
@@ -293,7 +294,6 @@
"devDependencies": {
"@babel/parser": "^7.10.2",
"@babel/types": "^7.10.2",
"@elastic/elasticsearch": "^7.4.0",
"@elastic/eslint-config-kibana": "0.15.0",
"@elastic/eslint-plugin-eui": "0.0.2",
"@elastic/github-checks-reporter": "0.0.20b3",
145 changes: 145 additions & 0 deletions src/core/server/elasticsearch/client/client_config.ts
@@ -0,0 +1,145 @@
/*
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

import { ConnectionOptions as TlsConnectionOptions } from 'tls';
import { URL } from 'url';
import { Duration } from 'moment';
import { ClientOptions, NodeOptions } from '@elastic/elasticsearch';
import { ElasticsearchConfig } from '../elasticsearch_config';

/**
* @privateRemarks Config that consumers can pass to the Elasticsearch JS client is complex and includes
* not only entries from standard `elasticsearch.*` yaml config, but also some Elasticsearch JS
* client specific options like `keepAlive` or `plugins` (that eventually will be deprecated).
*
* @public
*/
export type ElasticsearchClientConfig = Pick<
ElasticsearchConfig,
| 'customHeaders'
| 'logQueries'
| 'sniffOnStart'
| 'sniffOnConnectionFault'
| 'requestHeadersWhitelist'
| 'sniffInterval'
| 'hosts'
| 'username'
| 'password'
> & {
pingTimeout?: ElasticsearchConfig['pingTimeout'] | ClientOptions['pingTimeout'];
requestTimeout?: ElasticsearchConfig['requestTimeout'] | ClientOptions['requestTimeout'];
ssl?: Partial<ElasticsearchConfig['ssl']>;
};

export function parseClientOptions(
config: ElasticsearchClientConfig,
scoped: boolean
): ClientOptions {
const clientOptions: ClientOptions = {
sniffOnStart: config.sniffOnStart,
sniffOnConnectionFault: config.sniffOnConnectionFault,
headers: config.customHeaders,
};

if (config.pingTimeout != null) {
clientOptions.pingTimeout = getDurationAsMs(config.pingTimeout);
}
if (config.requestTimeout != null) {
clientOptions.requestTimeout = getDurationAsMs(config.requestTimeout);
}
if (config.sniffInterval) {
clientOptions.sniffInterval = getDurationAsMs(config.sniffInterval);
}

// TODO: this can either be done globally here or by host in convertHost.
// Not sure which option is the best.
if (config.username && config.password) {
clientOptions.auth = {
username: config.username,
password: config.password,
};
}

clientOptions.nodes = config.hosts.map((host) => convertHost(host, !scoped, config));

if (config.ssl) {
clientOptions.ssl = generateSslConfig(
config.ssl,
scoped && !config.ssl.alwaysPresentCertificate
);
}

return clientOptions;
}

const generateSslConfig = (
sslConfig: Required<ElasticsearchClientConfig>['ssl'],
ignoreCertAndKey: boolean
): TlsConnectionOptions => {
const ssl: TlsConnectionOptions = {
ca: sslConfig.certificateAuthorities,
};

const verificationMode = sslConfig.verificationMode;
switch (verificationMode) {
case 'none':
ssl.rejectUnauthorized = false;
break;
case 'certificate':
ssl.rejectUnauthorized = true;
// by default, Node.js checks the server identity
ssl.checkServerIdentity = () => undefined;
break;
case 'full':
ssl.rejectUnauthorized = true;
break;
default:
throw new Error(`Unknown ssl verificationMode: ${verificationMode}`);
}

// Add client certificate and key if required by elasticsearch
if (!ignoreCertAndKey && sslConfig.certificate && sslConfig.key) {
ssl.cert = sslConfig.certificate;
ssl.key = sslConfig.key;
ssl.passphrase = sslConfig.keyPassphrase;
}

return ssl;
};

const convertHost = (
host: string,
needAuth: boolean,
{ username, password }: ElasticsearchClientConfig
): NodeOptions => {
const url = new URL(host);
const isHTTPS = url.protocol === 'https:';
// `url.port` is an empty string (never null) when absent, so `??` would not fall back
url.port = url.port || (isHTTPS ? '443' : '80');
if (needAuth && username && password) {
url.username = username;
url.password = password;
}

return {
url,
};
};

const getDurationAsMs = (duration: number | Duration) =>
typeof duration === 'number' ? duration : duration.asMilliseconds();
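The port fallback in `convertHost` is easy to get wrong: the WHATWG `URL` class reports a missing port as an empty string, never `null` or `undefined`, so `??` silently keeps `''`. A self-contained sketch of the correct behavior (the helper name `defaultPort` is ours, not this PR's):

```typescript
import { URL } from 'url';

// Returns the explicit port from a host string if present,
// else the default port for the protocol.
function defaultPort(host: string): string {
  const url = new URL(host);
  const isHTTPS = url.protocol === 'https:';
  // `url.port` is '' (not null/undefined) when omitted, so `||` is required;
  // `url.port ?? (isHTTPS ? '443' : '80')` would always yield ''.
  return url.port || (isHTTPS ? '443' : '80');
}

console.log(defaultPort('https://localhost')); // prints "443"
console.log(defaultPort('http://elastic:9200')); // prints "9200"
```

Note that `URL` also normalizes an explicitly written default port (e.g. `https://host:443`) back to `''`, which the `||` fallback handles correctly.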
57 changes: 57 additions & 0 deletions src/core/server/elasticsearch/client/client_facade.ts
@@ -0,0 +1,57 @@
/*
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

import { ApiResponse } from '@elastic/elasticsearch';
import {
RequestBody,
RequestNDBody,
TransportRequestOptions,
TransportRequestPromise,
} from '@elastic/elasticsearch/lib/Transport';
import * as RequestParams from '@elastic/elasticsearch/api/requestParams';

export interface ClientFacade {
bulk<
TResponse = Record<string, any>,
TRequestBody extends RequestNDBody = Array<Record<string, any>>,
TContext = unknown
>(
params?: RequestParams.Bulk<TRequestBody>,
options?: TransportRequestOptions
): TransportRequestPromise<ApiResponse<TResponse, TContext>>;
Contributor Author (@pgayvallet):

This is the successor of APICaller (I only added 3 methods here for the POC; I'm going to have fun later copying/adapting the 2000 lines of signatures from node_modules/@elastic/elasticsearch/index.d.ts).

The most important question is: do we want to expose the `options?: TransportRequestOptions` parameter to our consumers, should we only expose a subset of the possible transport options, or should our facade simply not expose this second parameter at all?

As a reminder, options is only used to override transport-related options:

export interface TransportRequestOptions {
  ignore?: number[];
  requestTimeout?: number | string;
  maxRetries?: number;
  asStream?: boolean;
  headers?: Record<string, any>;
  querystring?: Record<string, any>;
  compression?: 'gzip';
  id?: any;
  context?: any;
  warnings?: string[];
  opaqueId?: string;
}

I don't think I have enough knowledge of our usages of the ES client to decide on this one. @rudolf maybe?
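For context on how one of these transport options is typically used: a status code listed in `ignore` is passed through as a success rather than surfaced as an error. The sketch below is illustrative only, not the client's actual implementation:

```typescript
interface TransportRequestOptions {
  ignore?: number[];
}

// Illustrative: mimics how a transport layer could decide whether a
// response status should be raised as an error or passed through.
function shouldThrow(
  statusCode: number,
  options: TransportRequestOptions = {}
): boolean {
  const ignored = options.ignore ?? [];
  return statusCode >= 400 && !ignored.includes(statusCode);
}

shouldThrow(404); // → true (4xx not ignored)
shouldThrow(404, { ignore: [404] }); // → false (treated as success)
```

This is why `ignore` matters to consumers like the saved-objects code, where a 404 is often an expected outcome rather than a failure.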

Contributor (@mshustov, Jun 25, 2020):

only added 3 methods here for the POC. I gonna have fun later copying/adapting the 2000 lines of signatures from

IIRC all the types are auto-generated. It's error-prone to update them manually every time we bump the library version. Can we just re-use the same typings?

Do we want to expose the options?: TransportRequestOptions

I don't see why we shouldn't. We already provide maxRetries & requestTimeout in the legacy client. asStream is not possible to implement at all without low-level support.

Contributor Author (@pgayvallet, Jun 25, 2020):

IIRC ll the types are auto-generated. It's error-prone to update them manually every time we bump the library version. Can we just re-use the same typings?

The generated types are a mess (take a look at https://github.com/elastic/elasticsearch-js/blob/master/index.d.ts)

I would love to avoid replicating what we did with APICaller by maintaining an exhaustive list on our side; unfortunately (at least imho) we can't (please prove me wrong here). The strongest argument is that they define multiple signatures for every method, and we only want one (and AFAIK you can't Pick a single signature of a multi-signature method with TS). Otherwise the ClientWrapper implementation is going to be a nightmare. If we go in that direction, we should probably just expose a (concrete) preconfigured client instead (but there are some things we definitely don't want to open to consumers, I think, such as close, transport and things like that).

E.g. these are the signatures for asyncSearch.delete. We only want the first one here:

delete<TResponse = Record<string, any>, TContext = unknown>(params?: RequestParams.AsyncSearchDelete, options?: TransportRequestOptions): TransportRequestPromise<ApiResponse<TResponse, TContext>>
delete<TResponse = Record<string, any>, TContext = unknown>(callback: callbackFn<TResponse, TContext>): TransportRequestCallback
delete<TResponse = Record<string, any>, TContext = unknown>(params: RequestParams.AsyncSearchDelete, callback: callbackFn<TResponse, TContext>): TransportRequestCallback
delete<TResponse = Record<string, any>, TContext = unknown>(params: RequestParams.AsyncSearchDelete, options: TransportRequestOptions, callback: callbackFn<TResponse, TContext>): TransportRequestCallback

Second point: if we use the client's signatures instead of replicating them, we would never be able to introduce higher-level options consumed by our wrapper (as was done with CallAPIOptions in the legacy client). I don't have any example of why we would want that, but using the lib's types directly would close this door.

Other (minor) point: all the APIs are available both in camelCase and snake_case. It would be great to avoid such pollution, and that would also avoid having to grep for two things when searching for usages (this one could be resolved with a Pick-based type):

delete_autoscaling_policy<TResponse = Record<string, any>, TContext = unknown>(params?: RequestParams.AutoscalingDeleteAutoscalingPolicy, options?: TransportRequestOptions): TransportRequestPromise<ApiResponse<TResponse, TContext>>
delete_autoscaling_policy<TResponse = Record<string, any>, TContext = unknown>(callback: callbackFn<TResponse, TContext>): TransportRequestCallback
delete_autoscaling_policy<TResponse = Record<string, any>, TContext = unknown>(params: RequestParams.AutoscalingDeleteAutoscalingPolicy, callback: callbackFn<TResponse, TContext>): TransportRequestCallback
delete_autoscaling_policy<TResponse = Record<string, any>, TContext = unknown>(params: RequestParams.AutoscalingDeleteAutoscalingPolicy, options: TransportRequestOptions, callback: callbackFn<TResponse, TContext>): TransportRequestCallback
deleteAutoscalingPolicy<TResponse = Record<string, any>, TContext = unknown>(params?: RequestParams.AutoscalingDeleteAutoscalingPolicy, options?: TransportRequestOptions): TransportRequestPromise<ApiResponse<TResponse, TContext>>
deleteAutoscalingPolicy<TResponse = Record<string, any>, TContext = unknown>(callback: callbackFn<TResponse, TContext>): TransportRequestCallback
deleteAutoscalingPolicy<TResponse = Record<string, any>, TContext = unknown>(params: RequestParams.AutoscalingDeleteAutoscalingPolicy, callback: callbackFn<TResponse, TContext>): TransportRequestCallback
deleteAutoscalingPolicy<TResponse = Record<string, any>, TContext = unknown>(params: RequestParams.AutoscalingDeleteAutoscalingPolicy, options: TransportRequestOptions, callback: callbackFn<TResponse, TContext>): TransportRequestCallback

Third point, in my opinion again: in terms of dev experience, a Pick-based type is way worse than a 'plain' explicit interface when searching for a specific thing.

Contributor:

The strongest argument would be that they define multiple signature for every methods, and we only want one (and AFAIK you can't Pick a single signature of a multi-sign method with TS).
Other (minor) point, all the apis are available both in camel and snake case. It would be great to avoid such pollution, and that would also avoid having to grep for two things when searching for usages (this one could be resolved with a Pick based type

That's true, the client supports all possible use-cases, which we don't want to. I'm still skeptical about the manual work required on every update... That's not ideal, but we could adjust the type generator script in elasticsearch-js to run it for our use-case.

Contributor Author:

That's not ideal, but we can adjust type generator script in elasticsearch-js to run it for our use-case.

Automated generation could definitely be an option if we are afraid of manual maintenance when we bump the library.

So we would use (and maintain) an edited version of their script to generate our (currently named) ClientFacade type, keeping only the camelCase, promise-based version of the APIs? And we would regenerate the type using our script every time we bump the library?

I can give this a try if we want to.

Contributor Author:

That's not ideal, but we can adjust type generator script in elasticsearch-js to run it for our use-case.

Just saw that the script folder of @elastic/elasticsearch is not shipped in the npm module (neither are the sources), which means we can't use the script without checking out the whole module manually.

Maybe AST parsing of node_modules/@elastic/elasticsearch/index.d.ts is a better option then? It would at least allow generating our type directly from the kibana checkout/repo.

Contributor Author:

So, after 3 5 hours trying both approaches, using the TS AST (with ts and ts-morph) and hacking the library's generation scripts, I kinda gave up.

  • hacking their scripts does not seem a viable option. That requires a local @elastic/elasticsearch-js checkout, which itself also has to perform a checkout of @elastic/elastic to build their generated API and documentation. I don't really see how we plug that easily into our repo.
  • using the TS AST is a pain, but the most problematic thing here is the overloaded signatures the Client API defines. I did not find any way to properly extract a specific overload from the definition list. Also, converting the concrete class definition to an interface is quite tedious, even using ts-morph.

So, instead, I moved on to a (way less sexy but effective for our needs) plain regexp-based parsing of their .d.ts file in 24ac36b. The script generates both the ClientFacade API and its wrapper implementation.

I feel like this could do the trick, wdyt?

Contributor (@mshustov, Jun 30, 2020):

So, after 3 5 hours trying

😅 On the whole I'm okay even with a manual process.

So, instead, I moved on using a (way less sexy but yet effective for our needs) plain regexp-based parsing of their .d.ts file

Ok, as long as it works. I didn't review the whole file. I thought we could extend the script right in the elasticsearch-js repo to generate a separate file for Kibana.

Contributor:

Would it be simpler to change the script in the @elastic/elasticsearch repo to generate a separate type that only includes the Promise-based, camelCase API? Seems like it would be useful to other consumers of this npm package, not just us. @delvedor wdyt?

Contributor Author:

@joshdover sorry, you were not included in the slack discussion between delvedor, restry and myself. A brief summary:

Delvedor did that for us (--kibana flag - elastic/elasticsearch-js#1239). However:

  • to avoid polluting the distributable, the script must be launched manually (the kibana version is not in the distributable), meaning we need a local checkout of the library. While not a blocker when developing locally, it could be one for [discuss] new elasticsearch client version management #70431, depending on the chosen solution.
  • it's still a type, not an interface, meaning that if we want to use methods such as close ONLY from within core, we still need an interface/facade OR expose a proxy of the Client to block access to the 'private' fields/methods.
  • we still need to generate the mocked version of the client for our mocks. A https://github.com/elastic/elasticsearch-js-mock lib does exist, but it's more an integration-test mock (it allows mocking responses for specific endpoints) than a jest-based mock. The divergence from our other testing mocks made me go the generation way (prefer one way to do things).

Overall, imho, these generation scripts 'just work (tm)' for our needs, at least for now. As it's just an implementation detail (it shouldn't impact core's public API), I'd say we could probably go with it in the initial implementation and eventually change the approach later.


asyncSearch: {
delete<TResponse = Record<string, any>, TContext = unknown>(
params?: RequestParams.AsyncSearchDelete,
options?: TransportRequestOptions
): TransportRequestPromise<ApiResponse<TResponse, TContext>>;
get<TResponse = Record<string, any>, TContext = unknown>(
params?: RequestParams.AsyncSearchGet,
options?: TransportRequestOptions
): TransportRequestPromise<ApiResponse<TResponse, TContext>>;
submit<
Contributor Author (@pgayvallet, Jun 25, 2020):

Another 'detail' regarding this typed/structured replacement of APICaller is that it may be a little difficult to migrate the retryCall methods used by the SO client:

export function retryCallCluster(apiCaller: APICaller) {
  return (endpoint: string, clientParams: Record<string, any> = {}, options?: CallAPIOptions) => {
    return defer(() => apiCaller(endpoint, clientParams, options))
      .pipe(
        retryWhen((errors) =>
          errors.pipe(
            concatMap((error, i) =>
              iif(
                () => error instanceof legacyElasticsearch.errors.NoConnections,
                timer(1000),
                throwError(error)
              )
            )
          )
        )
      )
      .toPromise();
  };
}

Previously APICaller was just a function, so wrapping it to retry was rather trivial. With this new typed interface, I'm unsure what the correct way to achieve the same thing would be.

Maybe someone have an idea?

Contributor (@mshustov, Jun 26, 2020):

we can change the signature to accept a function (note: `defer` takes the factory itself, not its result):

export function retryCallCluster<T>(fn: () => Promise<T>): Promise<T> {
  return defer(fn)
    .pipe(
      retryWhen((errors) =>
        errors.pipe(
          concatMap((error, i) =>
            iif(
              () => error instanceof legacyElasticsearch.errors.NoConnections,
              timer(1000),
              throwError(error)
            )
          )
        )
      )
    )
    .toPromise();
}

Contributor Author:

Yea, that's the 'easier' solution I see. However, multiple endpoint/API calls are used in the SO client; wrapping the whole APICaller ensured every call was wrapped with the retry logic. If we wrap each individual method, we would need to adapt all calls in the SO repository that relied on this retry logic.

Not really seeing another option atm though.
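The per-call retry idea discussed above can be sketched without RxJS using plain promises. This is a dependency-free illustration, not this PR's code; the names `retryWhenNoConnections` and `NoConnectionsError` are hypothetical stand-ins:

```typescript
// Hypothetical stand-in for the client's connection error class.
class NoConnectionsError extends Error {}

// Retry `fn` as long as it fails with NoConnectionsError,
// waiting `delayMs` between attempts; rethrow any other error.
async function retryWhenNoConnections<T>(
  fn: () => Promise<T>,
  delayMs = 1000
): Promise<T> {
  while (true) {
    try {
      return await fn();
    } catch (error) {
      if (!(error instanceof NoConnectionsError)) {
        throw error;
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

As in the RxJS version, the loop only swallows connection errors; everything else propagates to the caller, so consumers wrap each call site individually rather than the whole client.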

TResponse = Record<string, any>,
TRequestBody extends RequestBody = Record<string, any>,
TContext = unknown
>(
params?: RequestParams.AsyncSearchSubmit<TRequestBody>,
options?: TransportRequestOptions
): TransportRequestPromise<ApiResponse<TResponse, TContext>>;
};
}
51 changes: 51 additions & 0 deletions src/core/server/elasticsearch/client/client_wrapper.ts
@@ -0,0 +1,51 @@
/*
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

import { Client } from '@elastic/elasticsearch';
import { TransportRequestOptions } from '@elastic/elasticsearch/lib/Transport';
import { Headers } from '../../http/router';
import { ClientFacade } from './client_facade';

export const getClientFacade = (client: Client, headers: Headers = {}): ClientFacade => {
const addHeaders = (options?: TransportRequestOptions): TransportRequestOptions => {
if (!options) {
return {
headers,
};
}
// TODO: do we need to throw in case of duplicates as it was done
// in legacy? - src/core/server/elasticsearch/scoped_cluster_client.ts:L88
return {
...options,
headers: {
...options.headers,
...headers,
},
};
};

return {
bulk: (params, options) => client.bulk(params, addHeaders(options)),
asyncSearch: {
delete: (params, options) => client.asyncSearch.delete(params, addHeaders(options)),
get: (params, options) => client.asyncSearch.get(params, addHeaders(options)),
submit: (params, options) => client.asyncSearch.submit(params, addHeaders(options)),
},
};
};
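The merge order in `addHeaders` above is what makes the scoped headers authoritative: they are spread last, so a per-request `options.headers` entry with the same name is overridden. A self-contained sketch of that semantics (the local `Headers` alias and helper name are assumptions, not this PR's exports):

```typescript
type Headers = Record<string, string>;

interface RequestOptions {
  headers?: Headers;
}

// Mirrors the facade's addHeaders: scoped headers are spread last and win.
const withScopedHeaders = (scoped: Headers) => (
  options: RequestOptions = {}
): RequestOptions => ({
  ...options,
  headers: {
    ...options.headers,
    ...scoped,
  },
});

const addHeaders = withScopedHeaders({ authorization: 'Bearer scoped-token' });
const merged = addHeaders({
  headers: { authorization: 'Bearer own-token', 'x-trace': '1' },
});
// merged.headers → { authorization: 'Bearer scoped-token', 'x-trace': '1' }
```

Note this silently drops the caller's conflicting header, which is exactly the behavior the TODO in the diff questions (legacy threw on duplicates instead).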
71 changes: 71 additions & 0 deletions src/core/server/elasticsearch/client/cluster_client.ts
@@ -0,0 +1,71 @@
/*
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

import { Client } from '@elastic/elasticsearch';
import { getClientFacade } from './client_wrapper';
import { ClientFacade } from './client_facade';
import { configureClient } from './configure_client';
import { Logger } from '../../logging';
import { GetAuthHeaders, isRealRequest } from '../../http';
import { Headers } from '../../http/router';
import { ElasticsearchClientConfig } from './client_config';
import { ScopedClusterClient, IScopedClusterClient } from './scoped_cluster_client';
import { ScopeableRequest } from './types';
import { ensureRawRequest, filterHeaders } from '../../http/router';

const noop = () => undefined;

interface IClusterClient {
asInternalUser: () => ClientFacade;
asScoped: (request: ScopeableRequest) => IScopedClusterClient;
}

export class ClusterClient implements IClusterClient {
private readonly internalWrapper: ClientFacade;
private readonly scopedClient: Client;

constructor(
private readonly config: ElasticsearchClientConfig,
logger: Logger,
private readonly getAuthHeaders: GetAuthHeaders = noop
) {
this.internalWrapper = getClientFacade(configureClient(config, { logger }));
this.scopedClient = configureClient(config, { logger, scoped: true });
}

asInternalUser() {
return this.internalWrapper;
}

asScoped(request: ScopeableRequest) {
const headers = this.getScopedHeaders(request);
const scopedWrapper = getClientFacade(this.scopedClient, headers);
return new ScopedClusterClient(this.internalWrapper, scopedWrapper);
}

private getScopedHeaders(request: ScopeableRequest): Headers {
if (!isRealRequest(request)) {
return request?.headers ?? {};
}
const authHeaders = this.getAuthHeaders(request);
const headers = ensureRawRequest(request).headers;

return filterHeaders({ ...headers, ...authHeaders }, this.config.requestHeadersWhitelist);
}
}
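The subtle part of `getScopedHeaders` above is the final filtering step: request headers and auth headers are merged, then only the headers named in `requestHeadersWhitelist` are forwarded to Elasticsearch. A self-contained sketch of that filtering idea (the standalone `filterHeaders` helper here is a simplified stand-in for the one imported from `../../http/router`, not its actual implementation):

```typescript
type Headers = Record<string, string | undefined>;

// Keep only whitelisted headers, matching case-insensitively on the key.
const filterHeaders = (headers: Headers, allowed: string[]): Headers =>
  Object.fromEntries(
    Object.entries(headers).filter(([key]) => allowed.includes(key.toLowerCase()))
  );

const scoped = filterHeaders(
  { authorization: 'Bearer abc', cookie: 'session=1', 'x-custom': 'yes' },
  ['authorization']
);

// Only the authorization header is forwarded to the scoped client.
console.log(scoped);
```

This is why a scoped client does not leak arbitrary incoming request headers (cookies, custom headers) to the cluster unless they are explicitly whitelisted in the config.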
46 changes: 46 additions & 0 deletions src/core/server/elasticsearch/client/configure_client.ts
@@ -0,0 +1,46 @@
/*
* Licensed to Elasticsearch B.V. under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch B.V. licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

import { Client } from '@elastic/elasticsearch';
import { Logger } from '../../logging';
import { parseClientOptions, ElasticsearchClientConfig } from './client_config';

export const configureClient = (
config: ElasticsearchClientConfig,
{ logger, scoped = false }: { logger: Logger; scoped?: boolean }
): Client => {
const clientOptions = parseClientOptions(config, scoped);
const client = new Client(clientOptions);

client.on('response', (err, event) => {
if (err) {
logger.error(`${err.name}: ${err.message}`);
Contributor: are we okay not to log warnings anymore?

Contributor (Author): I took #69905 (comment) as a 'yes', as warnings are per-request and can now be handled by the consumers. But we can decide to log them on our own; I don't really have an opinion on that one.

Member: I'm wondering if the client should log warnings more aggressively, maybe via `process.emitWarning`. This is because I think users will never proactively go and read the warnings key. What do you think?

Contributor (Author): `process.emitWarning` seems way too aggressive imho. Being able to get them using the event emitter API is fine.

} else if (config.logQueries) {
const params = event.meta.request.params;
logger.debug(
`${event.statusCode}\n${params.method} ${params.path}\n${params.querystring?.trim() ?? ''}`,
{
tags: ['query'],
}
);
}
});

return client;
};
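Per the review discussion above, deprecation warnings are no longer logged centrally; a consumer that cares about them can observe them through the client's `'response'` event, where the `@elastic/elasticsearch` client exposes a `warnings` array parsed from the `Warning` response header (`null` when absent). A minimal sketch of that consumption pattern, using a plain Node `EventEmitter` in place of a real client so it stays self-contained (the `ResponseEvent` shape here is a simplified stand-in):

```typescript
import { EventEmitter } from 'events';

// Simplified stand-in for the client's 'response' event payload.
interface ResponseEvent {
  statusCode: number;
  warnings: string[] | null;
}

const client = new EventEmitter();
const seen: string[] = [];

// A consumer subscribes once and collects per-request deprecation warnings.
client.on('response', (err: Error | null, event: ResponseEvent) => {
  if (!err && event.warnings) {
    seen.push(...event.warnings);
  }
});

// Simulate a successful response carrying a deprecation warning.
client.emit('response', null, {
  statusCode: 200,
  warnings: ['299 Elasticsearch-7.8.0 "[deprecated] field"'],
});
```

This keeps the decision with each consumer: warnings can be surfaced, logged, or ignored per request, instead of being pushed globally via something like `process.emitWarning`.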