
[SIEM][Security Solution][Endpoint] Endpoint Artifact Manifest Management + Artifact Download and Distribution #67707

Merged
merged 133 commits into elastic:master Jul 2, 2020

Conversation

@madirey (Contributor) commented May 28, 2020

Summary

This PR manages the entire lifecycle of the exception list artifacts that are delivered to the Elastic Endpoint. Major features include (a sketch of the task flow follows the list):

  • A periodic task runner that

    • reads the current state of endpoint exception lists
    • builds artifacts for each supported OS, compresses them (compression TBD in next PR), and computes hashes
    • compares the hashes to the ones contained in the last-dispatched manifest
    • writes the artifacts to a Saved Object in ES if new
    • builds a new manifest, removing outdated artifacts and adding new ones
    • dispatches that manifest to the endpoints by writing it to the appropriate Datasource config
    • and finally, commits the manifest to a Saved Object in ES if the dispatch was successful
  • A new Kibana endpoint for downloading the artifacts

    • with authorization via Fleet API token
    • in-memory caching for quicker responses
    • quick ES lookup based on a stable, reproducible doc id if cache lookup fails
    • the URL to each artifact is included in the manifest that is dispatched to the endpoints
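
For context, here is a minimal sketch of one run of the periodic task; all names and types below are illustrative, not the actual implementation in this PR:

```ts
// Hypothetical sketch of one run of the periodic packager task.
interface Artifact {
  identifier: string;
  os: string;
  sha256: string; // fingerprint of the artifact body
  body: string;
}

interface Manifest {
  artifacts: Artifact[];
}

interface PackagerDeps {
  buildArtifacts(): Promise<Artifact[]>; // read exception lists, build per-OS artifacts
  getLastDispatchedManifest(): Promise<Manifest | null>;
  saveArtifact(a: Artifact): Promise<void>; // write the artifact saved object
  dispatchManifest(m: Manifest): Promise<void>; // write into the Datasource config
  commitManifest(m: Manifest): Promise<void>; // commit the manifest "ack" saved object
}

async function runPackagerTask(deps: PackagerDeps): Promise<void> {
  const artifacts = await deps.buildArtifacts();
  const oldManifest = await deps.getLastDispatchedManifest();
  const oldHashes = new Set(oldManifest?.artifacts.map((a) => a.sha256) ?? []);

  // Persist only artifacts whose hashes are not in the last-dispatched manifest.
  for (const artifact of artifacts) {
    if (!oldHashes.has(artifact.sha256)) {
      await deps.saveArtifact(artifact);
    }
  }

  // Dispatch first, then commit: a crash in between may cause a duplicate
  // dispatch on the next run, but a manifest change is never lost.
  const newManifest: Manifest = { artifacts };
  await deps.dispatchManifest(newManifest);
  await deps.commitManifest(newManifest);
}
```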

Properties

Eventual Consistency from lists to ingest_manager
The feature has been designed to run seamlessly in a multi-Kibana environment. The manifest SO that is committed is intended as a kind of "ack", indicating that a manifest has been dispatched (written to the ingest manager datasource). In the event of an unexpected server crash, a manifest may be dispatched more than once; however, it should never be possible for a manifest change to be lost.

There is one potential edge case: in the callback for datasource creation, we're unable to verify that the change was actually committed at the ingest manager layer. A crash after we return could result in a manifest update being lost. Revisit this?

Consistency of the Artifact Manifest
The TaskManager plugin is leveraged for maintenance of the artifacts and the manifest commit records. This should ensure that only one task is running at a time; however, we also have the ingestManager callback that runs when a datasource is created. To prevent race conditions, and therefore consistency issues on the manifest, we utilize the version returned by the SavedObjectsClient APIs. If two clients attempt simultaneous updates against the same base version, one encounters a conflict (409), and the manifest will be updated on the next task run (within 60 seconds) if necessary.
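
As a rough illustration of that pattern (the saved-object type and attribute names here are assumptions, not the actual ones):

```ts
import {
  SavedObjectsClientContract,
  SavedObjectsErrorHelpers,
} from 'src/core/server';

interface ManifestAttributes {
  ids: string[]; // artifact document IDs referenced by this manifest
}

// Attempt to commit the manifest against a known base version.
// Returns false if another Kibana instance won the race.
async function tryCommitManifest(
  soClient: SavedObjectsClientContract,
  attributes: ManifestAttributes,
  baseVersion: string // version returned by the earlier get/create
): Promise<boolean> {
  try {
    // Passing the base version makes the update fail with a 409
    // if the document was modified since it was read.
    await soClient.update('endpoint-manifest', 'manifest-v1', attributes, {
      version: baseVersion,
    });
    return true;
  } catch (err) {
    if (SavedObjectsErrorHelpers.isConflictError(err)) {
      // Lost the race; the next task run (within 60s) will retry if needed.
      return false;
    }
    throw err;
  }
}
```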

Scale Considerations
The artifact manifest is now sent to the endpoint via the config/policy mechanism. The manifest can be updated at most once every 60 seconds, except when a new datasource is created, which can trigger an out-of-band manifest update.

Artifacts are saved using a pre-calculated document ID, which encapsulates the schema version, operating system, artifact identifier, and fingerprint (a sha256 hash). Though downloads are currently managed through a Kibana API, we make the lookup as fast as possible by using a direct get by ID and an in-memory FIFO cache per client.
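
Illustratively (the exact ID format and cache implementation below are assumptions):

```ts
// Stable, reproducible artifact document ID: any client that knows the
// schema version, OS, identifier, and sha256 can address the document
// directly, with no search round-trip.
function getArtifactId(
  schemaVersion: string,
  os: string,
  identifier: string,
  sha256: string
): string {
  return `${identifier}-${os}-${schemaVersion}-${sha256}`;
}

// Bounded FIFO cache: a Map preserves insertion order, so evicting the
// first key evicts the oldest entry.
class FifoCache<V> {
  private readonly entries = new Map<string, V>();
  constructor(private readonly maxSize: number) {}

  get(key: string): V | undefined {
    return this.entries.get(key);
  }

  set(key: string, value: V): void {
    if (!this.entries.has(key) && this.entries.size >= this.maxSize) {
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) {
        this.entries.delete(oldest);
      }
    }
    this.entries.set(key, value);
  }
}
```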

Testing

  1. Create an Endpoint config
  2. Create the endpoint exception list:
     cd x-pack/plugins/lists/server/scripts
     ./post_exception_list.sh exception_lists/new/exception_list_agnostic.json
  3. Create an endpoint exception list item:
     cd x-pack/plugins/lists/server/scripts
     ./post_exception_list_item.sh ./exception_lists/new/exception_list_item_agnostic.json
  4. Navigate to the policy page and verify that artifact_manifest is now contained in the Endpoint datasource.
     [screenshot]

To Do

  • create task to run artifact packager periodically
  • add manifest endpoint
  • add manifest generation
  • inject manifest into endpoint config (dispatch)
  • add download endpoint
  • in-memory cache of available downloads
  • etag to cache manifest download
  • digitally sign manifest using RSA keypair
  • authentication/authorization to APIs using api key
  • add io-ts schema validation
  • add unit tests
  • add api integration tests

To be addressed in follow-up PR


@madirey added the Feature:Endpoint label May 28, 2020
@elasticmachine (Contributor):

Pinging @elastic/endpoint-app-team (Feature:Endpoint)

@madirey added the Team:Endpoint Response label May 28, 2020
@elasticmachine (Contributor):

Pinging @elastic/endpoint-response (Team:Endpoint Response)

@madirey added the release_note:skip and v7.9.0 labels Jun 30, 2020
} from '../../schemas';
import { ArtifactConstants } from './common';

export async function buildArtifact(
Contributor:

I don't think this needs to be async

Contributor Author:

Yeah, we forgot to remove async when we removed the lzma compression this morning.


do {
const response = await eClient.findExceptionListItem({
listId: 'endpoint_list',
Contributor:

We should move this out into a common place because we'll need to use it in the UI too, I think

schemaVersion: string,
entry: Entry | EntryNested
): TranslatedEntry | undefined {
let translatedEntry;
Contributor:

I think you could give the variable a type of TranslatedEntry here and then you won't need to do as TranslatedEntry below.
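
i.e. something like the following sketch:

```ts
// Sketch of the suggestion: annotate the declaration so later
// assignments type-check and the `as TranslatedEntry` cast goes away.
let translatedEntry: TranslatedEntry | undefined;
```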

return buildAndValidateResponse(req.params.identifier, cacheResp);
} else {
logger.debug(`Cache MISS artifact ${id}`);
return scopedSOClient
Contributor:

we could use awaits here too, I think
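
i.e. roughly (a sketch based on the quoted snippet; the saved-object type constant is assumed):

```ts
// Sketch: same fallback path with await instead of a promise chain.
logger.debug(`Cache MISS artifact ${id}`);
const artifact = await scopedSOClient.get<{ body: string }>(
  ArtifactConstants.SAVED_OBJECT_TYPE, // assumed constant name
  id
);
return buildAndValidateResponse(req.params.identifier, artifact.attributes.body);
```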

@peluja1012 peluja1012 (Contributor) left a comment:

Great job @madirey and @alexk307! I know there is a follow up PR coming but this is great stuff, some good test coverage too.

@madirey madirey merged commit 0f7afd4 into elastic:master Jul 2, 2020
gmmorris added a commit to gmmorris/kibana that referenced this pull request Jul 2, 2020
* master: (46 commits)
  [Visualize] Add missing advanced settings and custom label for pipeline aggs (elastic#69688)
  Use dynamic: false for config saved object mappings (elastic#70436)
  [Ingest Pipelines] Error messages (elastic#70167)
  [APM] Show transaction rate per minute on Observability Overview page (elastic#70336)
  Filter out error when calculating a label (elastic#69934)
  [Visualizations] Each visType returns its supported triggers (elastic#70177)
  [Telemetry] Report data shippers (elastic#64935)
  Reduce SavedObjects mappings for Application Usage (elastic#70475)
  [Lens] fix dimension label performance issues (elastic#69978)
  Skip failing endgame tests (elastic#70548)
  [SIEM] Reenabling Cypress tests (elastic#70397)
  [SIEM][Security Solution][Endpoint] Endpoint Artifact Manifest Management + Artifact Download and Distribution (elastic#67707)
  [Security] Adds field mapping support to rule creation (elastic#70288)
  SECURITY-ENDPOINT: add fields for events to metadata document (elastic#70491)
  Fixed assertion in hybrid index pattern test to iterate through indices (elastic#70130)
  [SIEM][Exceptions] - Exception builder component (elastic#67013)
  [Ingest Manager] Rename data sources to package configs (elastic#70259)
  skip suites blocking es snapshot promotion (elastic#70532)
  [Metrics UI] Fix asynchronicity and error handling in Snapshot API (elastic#70503)
  fix export response (elastic#70473)
  ...
@kibanamachine (Contributor):

💔 Build Failed

Failed CI Steps


Test Failures

Kibana Pipeline / kibana-xpack-agent / X-Pack API Integration Tests.x-pack/test/api_integration/apis/fleet/setup·ts.apis Fleet Endpoints fleet_setup should create a fleet_enroll user and role

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 2 times on tracked branches: https://github.com/elastic/kibana/issues/68568

[00:00:00]       │
[00:00:00]         │ proc [kibana]   log   [18:12:16.799] [warning][plugins][reporting] Enabling the Chromium sandbox provides an additional layer of protection.
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:06:48]           └-: Fleet Endpoints
[00:06:48]             └-> "before all" hook
[00:06:48]             └-: fleet_setup
[00:06:48]               └-> "before all" hook
[00:06:48]               └-> should create a fleet_enroll user and role
[00:06:48]                 └-> "before each" hook: global before each
[00:06:48]                 └-> "before each" hook
[00:06:48]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] [events-index_pattern_placeholder] creating index, cause [api], templates [], shards [1]/[1]
[00:06:48]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] [metrics-index_pattern_placeholder] creating index, cause [api], templates [], shards [1]/[1]
[00:06:48]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] [logs-index_pattern_placeholder] creating index, cause [api], templates [], shards [1]/[1]
[00:06:48]                 │ proc [kibana]  error  [18:19:05.738] [warning][process] UnhandledPromiseRejectionWarning: Error: package not found
[00:06:48]                 │ proc [kibana]     at Object.fetchFindLatestPackage (/dev/shm/workspace/install/kibana-7/x-pack/plugins/ingest_manager/server/services/epm/registry/index.js:70:11)
[00:06:48]                 │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:68:7)
[00:06:48]                 │ proc [kibana]     at emitWarning (internal/process/promises.js:97:15)
[00:06:48]                 │ proc [kibana]     at emitPromiseRejectionWarnings (internal/process/promises.js:143:7)
[00:06:48]                 │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:69:34)
[00:06:48]                 │ proc [kibana]  error  [18:19:05.739] [warning][process] Error: package not found
[00:06:48]                 │ proc [kibana]     at Object.fetchFindLatestPackage (/dev/shm/workspace/install/kibana-7/x-pack/plugins/ingest_manager/server/services/epm/registry/index.js:70:11)
[00:06:48]                 │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:68:7)
[00:06:48]                 │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] added role [fleet_enroll]
[00:06:48]                 │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] added user [fleet_enroll]
[00:06:49]                 └- ✖ fail: "apis Fleet Endpoints fleet_setup should create a fleet_enroll user and role"
[00:06:49]                 │

Stack Trace

Error: expected 200 "OK", got 500 "Internal Server Error"
    at Test._assertStatus (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:268:12)
    at Test._assertFunction (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:283:11)
    at Test.assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:173:18)
    at assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:131:12)
    at /dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:128:5
    at Test.Request.callback (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:718:3)
    at parser (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:906:18)
    at IncomingMessage.res.on (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/parsers/json.js:19:7)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)

Kibana Pipeline / kibana-xpack-agent / X-Pack API Integration Tests.x-pack/test/api_integration/apis/fleet/setup·ts.apis Fleet Endpoints fleet_setup should create a fleet_enroll user and role

(A second run of the same test failed with the same error and an identical stack trace; the duplicate log is omitted here.)

Kibana Pipeline / kibana-xpack-agent / X-Pack Endpoint Functional Tests.x-pack/test/security_solution_endpoint/apps/endpoint.endpoint "before all" hook in "endpoint"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:00]         └-: endpoint
[00:00:00]           └-> "before all" hook
[00:00:00]           └-> "before all" hook
[00:00:00]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] [logs-index_pattern_placeholder] creating index, cause [api], templates [], shards [1]/[1]
[00:00:00]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] [events-index_pattern_placeholder] creating index, cause [api], templates [], shards [1]/[1]
[00:00:00]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] [metrics-index_pattern_placeholder] creating index, cause [api], templates [], shards [1]/[1]
[00:00:00]             │ proc [kibana]  error  [18:14:21.236] [warning][process] UnhandledPromiseRejectionWarning: Error: package not found
[00:00:00]             │ proc [kibana]     at Object.fetchFindLatestPackage (/dev/shm/workspace/install/kibana-8/x-pack/plugins/ingest_manager/server/services/epm/registry/index.js:70:11)
[00:00:00]             │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:68:7)
[00:00:00]             │ proc [kibana]     at emitWarning (internal/process/promises.js:97:15)
[00:00:00]             │ proc [kibana]     at emitPromiseRejectionWarnings (internal/process/promises.js:143:7)
[00:00:00]             │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:69:34)
[00:00:00]             │ proc [kibana]  error  [18:14:21.238] [warning][process] Error: package not found
[00:00:00]             │ proc [kibana]     at Object.fetchFindLatestPackage (/dev/shm/workspace/install/kibana-8/x-pack/plugins/ingest_manager/server/services/epm/registry/index.js:70:11)
[00:00:00]             │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:68:7)
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] added role [fleet_enroll]
[00:00:00]             │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-debian-tests-xl-1593881987734876178] added user [fleet_enroll]
[00:00:00]             │ info Taking screenshot "/dev/shm/workspace/kibana/x-pack/test/functional/screenshots/failure/endpoint _before all_ hook.png"
[00:00:00]             │ proc [kibana]  error  [18:14:21.031]  Error: Internal Server Error
[00:00:00]             │ proc [kibana]     at HapiResponseAdapter.toError (/dev/shm/workspace/install/kibana-8/src/core/server/http/router/response_adapter.js:132:19)
[00:00:00]             │ proc [kibana]     at HapiResponseAdapter.toHapiResponse (/dev/shm/workspace/install/kibana-8/src/core/server/http/router/response_adapter.js:86:19)
[00:00:00]             │ proc [kibana]     at HapiResponseAdapter.handle (/dev/shm/workspace/install/kibana-8/src/core/server/http/router/response_adapter.js:81:17)
[00:00:00]             │ proc [kibana]     at Router.handle (/dev/shm/workspace/install/kibana-8/src/core/server/http/router/router.js:160:34)
[00:00:00]             │ proc [kibana]     at process._tickCallback (internal/process/next_tick.js:68:7)
[00:00:00]             │ info Current URL is: data:/,
[00:00:00]             │ info Saving page source to: /dev/shm/workspace/kibana/x-pack/test/functional/failure_debug/html/endpoint _before all_ hook.html
[00:00:00]             └- ✖ fail: "endpoint "before all" hook in "endpoint""
[00:00:00]             │

Stack Trace

Error: expected 200 "OK", got 500 "Internal Server Error"
    at Test._assertStatus (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:268:12)
    at Test._assertFunction (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:283:11)
    at Test.assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:173:18)
    at assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:131:12)
    at /dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:128:5
    at Test.Request.callback (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:718:3)
    at parser (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:906:18)
    at IncomingMessage.res.on (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/parsers/json.js:19:7)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)

Build metrics

✅ unchanged


To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@kibanamachine (Contributor):

Looks like this PR has a backport PR but it still hasn't been merged. Please merge it ASAP to keep the branches relatively in sync.

@kibanamachine removed the backport missing label Jul 6, 2020
madirey added a commit that referenced this pull request Jul 6, 2020
…ment + Artifact Download and Distribution (#67707) (#70758)

* stub out task for the exceptions list packager

* Hits list code and pages

* refactor

* Begin adding saved object and type definitions

* Transforms to endpoint exceptions

* Get internal SO client

* update messaging

* cleanup

* Integrating with task manager

* Integrated with task manager properly

* Begin adding schemas

* Add multiple OS and schema version support

* filter by OS

* Fixing sort

* Move to security_solutions

* siem -> securitySolution

* Progress on downloads, cleanup

* Add config, update artifact creation, add TODOs

* Fixing buffer serialization problem

* Adding cleanup to task

* Handle HEAD req

* proper header

* More robust task management

* single -> agnostic

* Fix OS filtering

* Scaffolding digital signatures / tests

* Adds route for creating endpoint user

* Cleanup

* persisting user

* Adding route to fetch created user

* Addings tests for translating exceptions

* Adding test for download API

* Download tweaks + artifact generation fixes

* reorganize

* fix imports

* Fixing test

* Changes id of SO

* integration tests setup

* Add first integration tests

* Cache layer

* more schema validation

* Set up for manifest update

* minor change

* remove setup code

* add manifest schema

* refactoring

* manifest rewrite (partial)

* finish scaffolding new manifest logic

* syntax errors

* more refactoring

* Move to endpoint directory

* minor cleanup

* clean up old artifacts

* Use diff appropriately

* Fix download

* schedule task on interval

* Split up into client/manager

* more mocks

* config interval

* Fixing download tests and adding cache tests

* lint

* mo money, mo progress

* Converting to io-ts

* More tests and mocks

* even more tests and mocks

* Merging both refactors

* Adding more tests for the conversion layer

* fix conflicts

* Adding lzma types

* Bug fixes

* lint

* resolve some type errors

* Adding back in cache

* Fixing download test

* Changing cache to be sized

* Fix manifest manager initialization

* Hook up datasource service

* Fix download tests

* Incremental progress

* Adds integration with ingest manager for auth

* Update test fixture

* Add manifest dispatch

* Refactoring to use the same SO Client from ingest

* bug fixes

* build renovate config

* Fix endpoint_app_context_services tests

* Only index the fields that are necessary for searching

* Integ test progress

* mock and test city

* Add task tests

* Tests for artifact_client and manifest_client

* Add manifest_manager tests

* minor refactor

* Finish manifest_manager tests

* Type errors

* Update integ test

* Type errors, final cleanup

* Fix integration test and add test for invalid api key

* minor fixup

* Remove compression

* Update task interval

* Removing .text suffix from translated list

* Fixes hashes for unit tests

* clean up yarn.lock

* Remove lzma-native from package.json

* missed updating one of the tests

Co-authored-by: Alex Kahan <alexander.kahan@elastic.co>

Co-authored-by: Alex Kahan <alexander.kahan@elastic.co>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>