
[discuss] Removal of kibana.index configuration setting #60053

Closed
tylersmalley opened this issue Mar 12, 2020 · 30 comments
Labels: discuss, Team:Operations

Comments

@tylersmalley
Contributor

tylersmalley commented Mar 12, 2020

A lot has changed since kibana.index was introduced to support multi-tenancy, and there is a lot of confusion around it. Over time, additional configuration properties have been added that also need to be modified by anyone running multiple Kibana instances against the same Elasticsearch cluster (xpack.reporting.index, xpack.task_manager.index). From what I can tell, we have never fully documented this functionality outside of the Kibana settings doc.
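For illustration, here is a minimal kibana.yml sketch of the legacy multi-tenant setup these settings enable; the setting names are the real ones mentioned above, while the index values are placeholders:

```yaml
# kibana.yml for "tenant A" (index names are illustrative)
kibana.index: ".kibana-tenant-a"
xpack.reporting.index: ".reporting-tenant-a"
xpack.task_manager.index: ".kibana_task_manager-tenant-a"

# A second Kibana instance ("tenant B") pointing at the same Elasticsearch cluster
# would repeat these settings with different values, e.g. ".kibana-tenant-b".
```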

The built-in system_user role is based on the defaults for these indices. Users who modify them need to create new users/roles to access the renamed indices, which adds to the upgrade burden: those custom roles have to be updated whenever we change the default privileges.

We now have Spaces, which provide isolation of Kibana saved objects, and we continue to improve the feature (for example, sharing objects to other spaces). For those who require a separate Kibana instance, I would be interested to understand why. To keep supporting that, a user could run a small Elasticsearch cluster per Kibana instance and use Cross-Cluster Search to access the shared data. There is obviously more overhead, both in resources and in management, than before.

This change would greatly reduce the complexity of migrating to System Indices.

@alexfrancoeur @arisonl @ppf2 can any of you provide insights to this being used currently and issues with removing this in 8.0?

@tylersmalley added the discuss and Team:Operations labels on Mar 12, 2020
@elasticmachine
Contributor

Pinging @elastic/kibana-operations (Team:Operations)

@kobelb
Contributor

kobelb commented Mar 12, 2020

/cc @peterschretlen

@tylersmalley changed the title from "[discuss] Removal of kibana.index" to "[discuss] Removal of kibana.index configuration setting" on Mar 12, 2020
@alexfrancoeur

alexfrancoeur commented Mar 13, 2020

Historically, a separate .kibana index was used to support multiple Kibana instances for a multi-tenant experience. Now that we have Spaces, this type of environment is less common. While there are many initiatives in flight that would mitigate the need for a separate Kibana instance (alerting, sub-feature controls + ML, etc.), I think there are still cases where we need to support multiple Kibana instances and, inherently, multiple Kibana indices. While not the majority of users, there are still a fair number of clusters running more than one Kibana instance in 7.x.

The two use cases that stand out the most to me are localization and scaling out Kibana. While our localization story still needs to mature, the translations are done at the Kibana instance level. So in order to have Kibana in two languages, you would need two Kibana instances pointing at the same Elasticsearch cluster. 

Scaling out Kibana, whether for reporting or general task management, is another reason you might want multiple Kibana instances pointing at a cluster.

Generally, I feel like Spaces doesn't quite cover all multi-tenant needs and we'll need a way to support multiple Kibana instances and configurations pointing to the same Elasticsearch cluster. I'll send over some additional data shortly.

cc: @skearns64 @VijayDoshi for additional thoughts

@jbudz
Member

jbudz commented Mar 13, 2020

I think it's a good eventual goal. Also +1 to Alex. Maybe we can come up with a list of options that would support this eventuality, e.g. a configuration for supporting multiple Kibana instances in the same cluster by appending server.name to the index or something.

@peterschretlen
Contributor

Aside from multi-tenancy, are there reasons someone might want to rename .kibana to something else? Perhaps it's moot in a transition to a system index, but I'm wondering whether multi-tenancy is the only use for this setting.

Scaling out Kibana, whether for reporting or general task management, is another reason you might want multiple Kibana instances pointing at a cluster.

In the scaling case, won't the instances share the same Kibana index (multi-instance but single-tenant)? Or perhaps there are exceptions to this case?

@kobelb
Contributor

kobelb commented Jun 1, 2020

I'm not aware of anyone changing the kibana.index setting except for supporting multi-tenancy.

If users are trying to scale out Kibana, I'd anticipate them leaving these settings alone. The same goes for localization support: they can run multiple instances with different languages without changing these settings.

@kobelb
Contributor

kobelb commented Aug 10, 2020

While working through how Kibana will transition to system indices this conversation came up again. Allowing the user to manually specify the index names which are used for system-indices goes against the premise of system-indices, where these are treated as an implementation detail of the product.

I'd like to propose the following two paths forward:

  1. We remove kibana.index et al. and require the use of Spaces
  2. We remove kibana.index et al. and add a kibana.tenant setting which is used to derive the index names

Solutions

Require the use of Spaces

This option would require the least amount of effort and would lead to the simplest implementation of system indices. The use of Spaces still allows the user to provision Kibana in a highly-available manner and have different instances supporting different localization settings.

Add a kibana.tenant setting which is used to derive the index names

This option requires more effort and leads to a more complicated implementation of system indices. However, this provides the user with an easy way of achieving the same level of isolation that they get from configuring the index names directly, without manually specifying the index names.
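For the sake of discussion, a rough sketch of how this could look; note that kibana.tenant is only a proposal at this point, and the derived index names below are hypothetical:

```yaml
# Hypothetical: kibana.tenant does not exist today, it is only proposed above
kibana.tenant: "marketing"

# Kibana would then derive the system index names internally, along the lines of:
#   .kibana_marketing
#   .kibana_task_manager_marketing
#   .reporting_marketing
```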

Making a decision

There are some complexities to transitioning the user to either solution; however, I don't think that should be the primary motivator. In my opinion, we should be focusing on what the ideal future state is, and figuring out how to work towards it. It's entirely possible that Spaces doesn't provide the level of isolation that users are looking for; however, I'm not aware of a requirement that couldn't be solved using Spaces. @alexfrancoeur, I'm interested in hearing your opinion on this matter.

@kobelb
Contributor

kobelb commented Sep 8, 2020

When migrating from legacy multi-tenancy via kibana.index to Spaces, we originally recommended that end-users utilize saved-object export/import to migrate all of their saved-objects from a tenant into a Space. There are some limitations to this approach as certain saved-objects can't be exported/imported. When Spaces was initially introduced, this limitation only applied to Graph and Timelion, but it's grown over time.

For Alerts and Actions, complexities are introduced by their reliance on "encrypted saved-object attributes": the attributes are encrypted before being stored in the .kibana index and are never returned by any SavedObjectsClient operations. For the other saved-object types, however, it's not clear whether there's an underlying reason why they aren't importable/exportable.
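For reference, a minimal sketch of that export/import path using the saved-objects HTTP APIs; the hosts, space id, and type list below are placeholders:

```sh
# Export from the legacy tenant (the Kibana instance configured with a custom kibana.index)
curl -X POST "http://tenant-a-kibana:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": ["index-pattern", "search", "visualization", "dashboard"], "includeReferencesDeep": true}' \
  > tenant-a.ndjson

# Import into a Space on the consolidated Kibana instance
curl -X POST "http://kibana:5601/s/tenant-a/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@tenant-a.ndjson
```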

@alexfrancoeur

alexfrancoeur commented Sep 9, 2020

Thanks for the additional detail on migrations @kobelb. I think it'd be worth digging in a bit further to see what the scope of work would be to support the different types of saved objects listed above.

At a high level, here are my findings after some internal follow ups, research and analyzing usage.

  • With the launch of spaces in Nov 2018, we continue to see a rise in the usage of spaces and a decrease in clusters with multiple kibana indices. Most clusters using multiple .kibana indices are in 6.x, and are a small overall percentage. For many reasons, this is only a sample of our user base and does not represent the full community - but interesting observations nonetheless.
  • The kibana.index (and other internal indices) configuration is a common point of confusion and frustration in support cases.
  • There are users and customers who still prefer "strong" isolation between separate Kibana instances.

In order to move forward with the removal of kibana.index configuration in favor of system indices, I believe we'll need to do the following:

  • As this could potentially affect some of our larger customers, get additional buy in from stack product leads @VijayDoshi and @skearns64
  • Better understand impact on other internal teams, ML jobs is one example here (cc: @tylersmalley)
  • Proactively engage with some of our larger customer environments in which this multi-tenancy approach is preferred to learn more (we're working with @ppf2 on getting more details)
  • Ensure we provide ample notification that we are removing support for custom .kibana indices
  • As much as possible, provide detailed steps to migrate from custom .kibana indices to either:
    1. CCS + multiple dedicated Kibana clusters and/or ECE for "strong" isolation, or
    2. Spaces

As long as we have a plan and guidelines for migration (a script does not seem possible) and effectively communicate that plan to our community and customer base, I'm +1 on migrating to system indices in Kibana assuming there is no strong opposition from stack leads and other teams at Elastic.

@tylersmalley
Contributor Author

Better understand impact on other internal teams, ML jobs is one example here (cc: @tylersmalley)

To touch on ML specifically, @droberts195 doesn't think this will be an issue as the lack of isolation of ML with legacy multi-tenancy is considered a bug. There is an issue here to track integration with Spaces currently targeting 7.11.

@kobelb
Contributor

kobelb commented Sep 9, 2020

Thanks for the additional detail on migrations @kobelb. I think it'd be worth digging in a bit further to see what the scope of work would be to support the different types of saved objects listed above.

Agreed. I chatted briefly with @XavierM about the Security team's cases and timeline saved-objects. I've attempted to summarize our discussion below.

For the rest of the saved-objects which aren't currently importable/exportable, I think it'd be worthwhile for us to answer the following questions:

  1. Should these saved-objects be exportable/importable to allow users to migrate them between different Kibana instances?
  2. Are these saved-objects exportable/importable in some manner besides saved-object management?
  3. Are these saved-objects using "encrypted saved-object attributes"?
  4. Are these saved-objects related to each other?
    4a) If yes, are these saved-objects using references?
    4b) If yes, does the end-user think of them as discrete entities, or do they think of them as a single entity?

@spong do you mind answering these questions for the detection engine?
@kevinlog do you mind answering these questions for endpoint?
@roncohen do you mind answering these questions for Observability?

Timelines

The security team built their own import/export UI for timeline saved-objects. This is good because we at least have a way to migrate timelines from a legacy tenant to a Space, but there are complications with integrating timelines into the saved-object management import/export.

A timeline itself is modeled using a siem-ui-timeline saved-object, which is related to both siem-ui-timeline-note and siem-ui-timeline-pinned-event saved-objects; however, this relationship isn't modeled using saved-object references. As such, even if we made all of the timeline saved-objects importable and exportable, exporting a timeline wouldn't export the associated notes and pinned events.
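To make the distinction concrete, this is roughly what a reference-based export line could look like; the object below is hypothetical, since timelines do not use the references array today:

```json
{
  "type": "siem-ui-timeline",
  "id": "example-timeline-id",
  "attributes": { "title": "Example timeline" },
  "references": [
    { "name": "note-0", "type": "siem-ui-timeline-note", "id": "example-note-id" },
    { "name": "pinnedEvent-0", "type": "siem-ui-timeline-pinned-event", "id": "example-pinned-event-id" }
  ]
}
```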

Making the timeline use saved-object references wouldn't solve all of our problems either, since we don't want the notes and pinned events to be listed on the saved-object management screens; we just want them to be included automatically when the timeline is exported. Saved-object references also have repercussions for authorization once sharing saved-objects to multiple spaces is implemented, which I don't think we want for timelines, since end-users won't think of a timeline and its associated notes and events as distinct entities. So we unfortunately have more than one reason to figure out a solution to this problem.

When confronted with a similar problem while discussing how to reduce the time to visualize, we decided that the Dashboard saved-object itself should embed the Visualizations that are only used in the context of that specific Dashboard. At a minimum, this approach would require that our saved-object migrations allow us to combine all of the timeline saved-object types into just a siem-ui-timeline saved-object.

Cases

There isn't a way to import or export cases at the moment. Cases are using saved-object references to associate the cases saved-object with cases-comments, cases-configure, and cases-user-actions. However, we still have the same issue of only wanting the cases themselves to be exportable/importable while automatically including the related saved-objects. There's also the potential short-term solution of creating a custom import/export UI for cases.

@kevinlog
Contributor

kevinlog commented Sep 18, 2020

@kobelb apologies that I missed this. I'm going to pull in a couple other people to help out.

@peluja1012 @spong @FrankHassanabad @paul-tavares - regarding the questions below:

For the rest of the saved-objects which aren't currently importable/exportable, I think it'd be worthwhile for us to answer the following questions:

  1. Should these saved-objects be exportable/importable to allow users to migrate them between different Kibana instances?
  2. Are these saved-objects exportable/importable in some manner besides saved-object management?
  3. Are these saved-objects using "encrypted saved-object attributes"?
  4. Are these saved-objects related to each other?
    4a) If yes, are these saved-objects using references?
    4b) If yes, does the end-user think of them as discrete entities, or do they think of them as a single entity?

In the Endpoint Management case, right now we're only using saved objects via the lists plugin for Trusted Apps (an unreleased 7.10 feature); it's the same lists plugin as Exceptions. I'm looking for some help to answer the above questions, as I don't want to mislead anyone.

@paul-tavares which additional SOs does Trusted Apps introduce? I imagine it'd be very similar to Exceptions.

@spong
Member

spong commented Sep 19, 2020

So on the Detections side we've got a few different types of SO's we're working with spread across two separate plugins:

Security Solution Plugin

Detection Rules (Backed by Alerting/Actions SO)
Detection Actions/Notifications (Backed by Alerting/Actions SO)
Detection Actions/Notifications (Shadow object of above SO for actions scheduled outside of rule execution time)
Detection Rule Status (To provide interim monitoring/status until available via Alerting/Actions directly)

Lists Plugin

Exception Lists (both agnostic and non-agnostic; they use the same mapping and can reference non-SO Value Lists backed by the .lists/.items data indexes)


To answer the above questions:

  1. Should these saved-objects be exportable/importable to allow users to migrate them between different Kibana instances?

Detection Rules, Detection Actions, and Exceptions should all be exportable/importable for our users. We currently expose methods of exporting/importing Detection Rules (and index-backed Value Lists) from within the Security Solution UI directly, but they do not include any referenced child objects (Actions/Exceptions/Timelines/ML Jobs/Saved Queries).

  2. Are these saved-objects exportable/importable in some manner besides saved-object management?

None of these SO's are currently exportable/importable via saved-object management. This was (is?) not exposed via the Alerting/Actions framework, and so couldn't be implemented for Detection Rules or Detection Actions, and didn't end up getting implemented for the remaining SO's.

  3. Are these saved-objects using "encrypted saved-object attributes"?

All Alerting/Actions backed SO's do, but Exceptions, Detection Rule Status, and Detection Actions (Shadow Object) do not.

  4. Are these saved-objects related to each other?

Generally speaking the relationship is as follows:

A Detection Rule references a single Detection Action (and potentially one shadow SO if configured), and at most 2 Exceptions (one agnostic, one not). It may also reference a Timeline, a Saved Query (this will be removed in 7.10 #76592), and/or an ML Job as well.

And as mentioned above, an Exception can reference non-SO Value Lists backed by the .lists/.items data indexes.

4a) If yes, are these saved-objects using references?

We are not currently leveraging the SO reference array as this was not exposed from the Alerting/Actions framework. All references are maintained via custom fields.

4b) If yes, does the end-user think of them as discrete entities, or do they think of them as a single entity?

A little bit of everything here. It would be useful for users to be able to export everything (all of their Rules, Actions, associated Timeline Templates, Exceptions, and linked Value Lists) to back up their entire cluster, a single Rule plus all of its referenced objects, or just individual Rules, Actions, Exceptions, etc.

Hopefully this helps, and is the right amount of information you're looking for. There's obviously quite a bit more here, so happy to dive deeper in certain areas if it'd be helpful! 🙂

@kobelb
Contributor

kobelb commented Sep 22, 2020

@mikecote this discussion likely interests you because of the current inability to export alerts/actions. Also, have you all investigated whether or not it's feasible to have Alert/Actions utilize saved-object references?

@mikecote
Contributor

@kobelb Thanks for the ping. The alerts are currently using SO references to reference their connectors (actions) so we're all good on that front.

I can provide some extra steps / challenges we face when it comes to import / export of alerts and connectors:

  • When moving alert SOs to another cluster, the API keys will not exist within Elasticsearch
  • When importing alerts, task manager tasks will have to be created so these alerts run after importing
  • It is possible that the user wants to provide different credentials in their connectors when moving to another cluster
  • The RBAC work we've done doesn't give the user direct access to the SOs
  • As mentioned, usage of encrypted attributes (we may revisit this down the line for allowing those to be exposed now that we have RBAC)

@kobelb
Contributor

kobelb commented Oct 13, 2020

Thank you everyone who has helped out with this discussion thus far. We've been able to identify quite a few common issues that prevent saved-object export/import from being used to migrate from legacy multi-tenancy to Spaces:

  1. Saved-object references are currently being used to model a "composition" relationship, where the referenced entity should be included automatically in exports and not treated as a separate entity.
  2. When encrypted saved-object attributes are used, the attribute values will not be included in exports. This can cause issues on import.
  3. When a custom client is created and the saved-objects are "hidden", they can't be accessed using the standard saved-object APIs, and thus import/export does not work (see the registration sketch after this list).
    3a. Alerting needs tasks to be created when an alert is created
    3b. Alerting also needs the ability to create a new API Key associated with an Alert
    3c. Validation should be performed prior to create/update
  4. Kibana-specific data is being stored outside of saved-objects. For example, the Security Solution's value lists stored in the .lists/.items indices.
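As a sketch of point 3, a plugin registers a hidden saved-object type during setup roughly like this; the type name and mappings are made up, and the exact import path varies by plugin location:

```typescript
import { CoreSetup, Plugin } from 'src/core/server';

export class ExamplePlugin implements Plugin {
  public setup(core: CoreSetup) {
    core.savedObjects.registerType({
      name: 'example-hidden-type', // hypothetical type name
      hidden: true, // hidden types are excluded from the standard saved-object APIs, including import/export
      namespaceType: 'single',
      mappings: {
        properties: {
          title: { type: 'text' },
        },
      },
    });
  }

  public start() {}
}
```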

However, there are a few outstanding questions that would help further flesh out these limitations. In an effort to minimize the cognitive burden, there's quite a bit of redundancy below. If your name doesn't appear in the heading of a section, please feel free to ignore it!

ML - @droberts195

Once ML jobs are migrated to be space-specific, is my understanding correct that all ML jobs will show up in all "legacy multi-tenancy" deployments of Kibana in all Spaces? If that's the case, then we shouldn't have to do any export/import of ML jobs.

Endpoint - @kevinlog

I don't think that we have an answer to the following questions for the endpoint:user-artifacts and endpoint:user-artifact-manifest saved-objects yet:

  1. Should these saved-objects be exportable/importable to allow users to migrate them between different Kibana instances?
  2. Are these saved-objects exportable/importable in some manner besides saved-object management?
  3. Are these saved-objects using "encrypted saved-object attributes"?
  4. Are these saved-objects related to each other?
    4a. If yes, are these saved-objects using references?
    4b. If yes, does the end-user think of them as discrete entities, or do they think of them as a single entity?

Ingest Manager - @ruflin

For the Ingest Manager specific saved objects, would you mind answering the following questions? At the time this issue was originally authored, those saved objects were: ingest_manager_settings, fleet-agents, fleet-agent-actions, fleet-agent-events, ingest-agent-policies, fleet-enrollment-api-keys, ingest-outputs, ingest-package-policies, and epm-packages.

  1. Should these saved-objects be exportable/importable to allow users to migrate them between different Kibana instances?
  2. Are these saved-objects exportable/importable in some manner besides saved-object management?
  3. Are these saved-objects using "encrypted saved-object attributes"?
  4. Are these saved-objects related to each other?
    4a. If yes, are these saved-objects using references?
    4b. If yes, does the end-user think of them as discrete entities, or do they think of them as a single entity?

APM - @sqren

  1. Should the apm-indices saved-object be exportable/importable to allow users to migrate them between different Kibana instances?
  2. Is the apm-indices saved-object exportable/importable in some manner besides saved-object management?
  3. Is the apm-indices saved-object using "encrypted saved-object attributes"?
  4. Is there some barrier to us allowing apm-indices to be exported/imported right now?

Uptime - @andrewvc

  1. Should the uptime-dynamic-settings saved-object be exportable/importable to allow users to migrate them between different Kibana instances?
  2. Is the uptime-dynamic-settings saved-object exportable/importable in some manner besides saved-object management?
  3. Is the uptime-dynamic-settings saved-object using "encrypted saved-object attributes"?
  4. Is there some barrier to us allowing uptime-dynamic-settings to be exported/imported right now?

@ruflin
Member

ruflin commented Oct 14, 2020

@jen-huang @nchaulet Could one of you follow up on the questions above for Ingest Manager?

@droberts195
Contributor

Once ML jobs are migrated to be space-specific, is my understanding correct that all ML jobs will show up in all "legacy multi-tenancy" deployments of Kibana in all Spaces? If that's the case, then we shouldn't have to do any export/import of ML jobs.

Yes, this is true immediately after upgrading to the version of Kibana where the "ML in Spaces" project is completed. So if we suppose that release ends up being 7.11 then:

  • In 7.10 and earlier all ML jobs show up in all spaces in all "legacy multi-tenancy" deployments of Kibana.
  • Immediately after upgrading to 7.11, all ML jobs that existed at the time of the upgrade show up in all spaces in all "legacy multi-tenancy" deployments of Kibana.

Things get more complicated after that though:

  • New jobs created in a "legacy multi-tenancy" deployment of Kibana will belong, in that deployment, to the space they were created in. But in the other deployments they'll show up in all spaces.
  • Administrators will be able to move ML jobs into whichever spaces they like via the management pages. These spaces could be different in each deployment.

So if a consolidation of "legacy multi-tenancy" deployments into a single deployment is done after 7.11 then deciding what to do with the saved objects that store the space-awareness for each ML job would need some special handling. I guess the migration would have to choose one deployment as the favoured one, keep the ML job saved objects from it, and discard the ML job saved objects from other deployments. Then an administrator could rearrange things manually after that initial migration.

To be honest though, ML doesn't work brilliantly with "legacy multi-tenancy" today. For example, if you create a data frame analytics job then we create an index pattern to make it easy to look at what's in the destination index. That index pattern will only exist in the deployment that the job was created in though. So if you try to navigate to the destination index in another deployment then the expected index pattern won't exist. So I imagine that most users who use the "legacy multi-tenancy" architecture either don't use ML or have found other workarounds, for example disabling the ML Kibana app in all but one of the Kibana deployments.

@andrewvc
Contributor

  • Should the uptime-dynamic-settings saved-object be exportable/importable to allow users to migrate them between different Kibana instances? Yes
  • Is the uptime-dynamic-settings saved-object exportable/importable in some manner besides saved-object management?: No
  • Is the uptime-dynamic-settings saved-object using "encrypted saved-object attributes"?: No
  • Is there some barrier to us allowing uptime-dynamic-settings to be exported/imported right now?: Not that I'm aware of, though this is untested. We only store a few simple values.

@nchaulet
Member

Ingest Manager
For the Ingest Manager specific saved objects, would you mind answering the following questions? At the time this issue was originally authored, those saved objects were: ingest_manager_settings, fleet-agents, fleet-agent-actions, fleet-agent-events, ingest-agent-policies, fleet-enrollment-api-keys, ingest-outputs, ingest-package-policies, and epm-packages.
Should these saved-objects be exportable/importable to allow users to migrate them between different Kibana instances?

I do not think we should be able to import/export our saved objects:

  • agents rely on API keys being present in ES
  • packages rely on assets loaded in ES

Are these saved-objects exportable/importable in some manner besides saved-object management?

No

Are these saved-objects using "encrypted saved-object attributes"?

fleet-enrollment-api-keys, ingest_manager_settings, fleet-agents, fleet-agent-actions are using encrypted saved object attributes

Are these saved-objects related to each other?
4a. If yes, are these saved-objects using references?
4b. If yes, does the end-user think of them as discrete entities, or do they think of them as a single entity?

Yes, we have saved objects related to each other, but they are not using references (enrollment API keys are linked to an agent policy, and package policies are linked to an agent policy).

@kobelb
Contributor

kobelb commented Oct 15, 2020

@droberts195 thanks for the detailed explanation, it's much appreciated. My primary goal is that when we remove legacy multi-tenancy, end-users don't lose all of their data in the non-default tenant and have to recreate it manually. If my understanding is correct, in the worst-case scenario the end-user creates ML jobs in a non-default tenant after ML jobs become space-aware, and when they are forced to use the default tenant going forward, they lose the spaces that these ML jobs have been assigned to. Does this seem like a tolerable experience to you, or should we invest the effort to ensure that we can migrate this information from the non-default tenant to the default tenant?

@nchaulet am I being naive in thinking that ideally, agent policies would be able to be imported/exported between different instances of Kibana? Is your concern that they're so tied to integrations, which include ES assets, that it's infeasible to add this ability?

@nchaulet
Member

@nchaulet am I being naive in thinking that ideally, agent policies would be able to be imported/exported between different instances of Kibana? Is your concern that they're so tied to integrations, which include ES assets, that it's infeasible to add this ability?

Agent policies are tightly coupled to integrations, so it probably would not make sense to export/import them without having the integrations properly installed.

@droberts195
Contributor

If my understanding is correct, in the worst-case scenario the end-user creates ML jobs in a non-default tenant after ML jobs become space-aware, and when they are forced to use the default tenant going forward, they lose the spaces that these ML jobs have been assigned to. Does this seem like a tolerable experience to you, or should we invest the effort to ensure that we can migrate this information from the non-default tenant to the default tenant?

Yes, this is correct. The worst that will happen when combining all tenants into one is that the spaces that the jobs are visible in end up being lost or wrong in the combined tenant. The jobs themselves cannot get lost because we consider what Elasticsearch reports as the source of truth for which jobs exist. So in the worst case after combining the tenants an administrator will have to go to the job management list and make sure all the jobs are in the appropriate spaces in the combined tenant. Given that ML was never designed with multi-tenanted Kibana in mind and doesn't work well in it today I don't think it's that bad at all really.

@sorenlouv
Member

Should the apm-indices saved-object be exportable/importable to allow users to migrate them between different Kibana instances?

Yes

Is the apm-indices saved-object exportable/importable in some manner besides saved-object management?

No

Is the apm-indices saved-object using "encrypted saved-object attributes"?

No

Is there some barrier to us allowing apm-indices to be exported/imported right now?

Not that I'm aware of.

@sorenlouv
Member

sorenlouv commented Oct 19, 2020

For those who require a separate Kibana instance, I would be interested to understand why.
@tylersmalley

On Observability we need a lot of test data to be streamed continuously, which requires significant resources (CPU, RAM, and disk space). Instead of having every dev do this, we have a single cluster with test data that every dev can connect to. So multiple local Kibana instances connect to a single central Elasticsearch instance.

and Cross-Cluster Search would be used to access the shared data

Last time I tried using CCS it wasn't supported between a local cluster and Elastic Cloud. So it might be difficult to replicate what we have today. Will investigate.

@jasonrhodes
Member

One reason we switched to using our own individual kibana.index settings in Observability was that when we all connected to a single remote cluster from local Kibana instances using the same .kibana index, it would sometimes lead to nasty migration race conditions. In those situations we'd somewhat regularly see "please delete .kibana1 index" etc. Is that a known issue? If we remove this kibana.index setting, would the advice to all devs (inside and outside of Elastic) be to never connect to a single ES cluster from multiple different Kibana instances?

@sorenlouv
Member

sorenlouv commented Oct 19, 2020

it would sometimes lead to nasty migration race conditions. In those situations we'd somewhat regularly see "please delete .kibana1 index" etc. Is that a known issue?

@rudolf and I discussed the migration issue here.
The workaround might be to disable migrations with migrations.skip: true, but last time I tried it, it either caused problems on its own or didn't prevent migrations from happening (I can't remember which). But it's worth trying again if that's the recommended workaround.

@kobelb mentioned this issue Oct 28, 2020
@kobelb
Contributor

kobelb commented Oct 28, 2020

It's been decided: we will be removing the kibana.index setting and legacy multi-tenancy in 8.0. Thank you everyone for all of your participation in this discussion, it's been hugely helpful. I'll be creating a new issue to put this plan into action and will link it here when it's done. Until then, I'm closing out this issue as the discussion has concluded.

@kobelb kobelb closed this as completed Oct 28, 2020
@jasonrhodes
Member

It seems like the Observability shared-cluster use case is still a bit up in the air. We'd appreciate help and guidance on how best to move forward with that, since cross-cluster search limitations and migration race conditions have prevented us from working against a shared cluster without the kibana.index setting in the past.

@kobelb
Contributor

kobelb commented Oct 28, 2020

@jasonrhodes absolutely, I'll include accommodating this use case in the new issue outlining the approach.
