From 628f599d0c95594d424f1c617400bd8a641915c6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Istv=C3=A1n=20Zolt=C3=A1n=20Szab=C3=B3?=
Date: Tue, 23 Jul 2024 15:14:30 +0200
Subject: [PATCH] [DOCS] Adds a note that the Intel and Linux optimized
 versions of ELSER and E5 are recommended.

---
 docs/en/stack/ml/nlp/ml-nlp-e5.asciidoc    |  3 +++
 docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc | 18 +++++++++++-------
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/docs/en/stack/ml/nlp/ml-nlp-e5.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-e5.asciidoc
index 20e3d0962..f1550f93a 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-e5.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-e5.asciidoc
@@ -46,6 +46,9 @@ You can download and deploy the E5 model either from
 **{ml-app}** > **Trained Models**, from **Search** > **Indices**, or by using
 the Dev Console.
 
+NOTE: For most cases, the **Intel and Linux optimized** model is the preferred
+version. It is recommended to download and deploy that version.
+
 
 [discrete]
 [[trained-model-e5]]
diff --git a/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc
index 6dee65f37..cf5c3022b 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc
@@ -108,13 +108,17 @@ that walks through upgrading an index to ELSER V2.
 You can download and deploy ELSER either from **{ml-app}** > **Trained
 Models**, from **Search** > **Indices**, or by using the Dev Console.
 
-IMPORTANT: You can deploy the model multiple times by assigning a unique
-deployment ID when starting the deployment. It enables you to have dedicated
-deployments for different purposes, such as search and ingest. By doing so, you
-ensure that the search speed remains unaffected by ingest workloads, and vice
-versa. Having separate deployments for search and ingest mitigates performance
-issues resulting from interactions between the two, which can be hard to
-diagnose.
+[NOTE]
+====
+* For most cases, the **Intel and Linux optimized** model is the preferred
+version. It is recommended to download and deploy that version.
+* You can deploy the model multiple times by assigning a unique deployment ID
+when starting the deployment. This enables you to have dedicated deployments
+for different purposes, such as search and ingest. By doing so, you ensure that
+the search speed remains unaffected by ingest workloads, and vice versa. Having
+separate deployments for search and ingest mitigates performance issues
+resulting from interactions between the two, which can be hard to diagnose.
+====
 
 [discrete]
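
For context on the multiple-deployments bullet added above: distinct deployments of the same model are created at start time via the trained models API, using the `deployment_id` parameter of the `_start` endpoint. A minimal Dev Console sketch; the deployment IDs `elser_ingest` and `elser_search` are illustrative, and `.elser_model_2_linux-x86_64` is the ID of the Intel and Linux optimized ELSER variant the note recommends:

```
POST _ml/trained_models/.elser_model_2_linux-x86_64/deployment/_start?deployment_id=elser_ingest

POST _ml/trained_models/.elser_model_2_linux-x86_64/deployment/_start?deployment_id=elser_search
```

Each request starts an independent deployment of the same downloaded model, so ingest and search traffic can be pointed at separate deployment IDs and sized independently.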