[8.14] [DOCS] Adds a note that the intel and linux optimized versions of ELSER and E5 are recommended (backport #2750) #2752

Merged 1 commit on Jul 23, 2024
3 changes: 3 additions & 0 deletions docs/en/stack/ml/nlp/ml-nlp-e5.asciidoc
@@ -46,6 +46,9 @@ You can download and deploy the E5 model either from
**{ml-app}** > **Trained Models**, from **Search** > **Indices**, or by using
the Dev Console.

NOTE: For most cases, the preferred version is the **Intel and Linux optimized**
model; downloading and deploying that version is recommended.


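For reference, the Linux optimized E5 model can also be downloaded and deployed through the Dev Console. The snippet below is a sketch, assuming the `.multilingual-e5-small_linux-x86_64` model ID and default deployment settings:

[source,console]
----
# Download the Intel and Linux optimized E5 model
PUT _ml/trained_models/.multilingual-e5-small_linux-x86_64
{
  "input": {
    "field_names": ["text_field"]
  }
}

# Start a deployment once the download has completed
POST _ml/trained_models/.multilingual-e5-small_linux-x86_64/deployment/_start
----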
[discrete]
[[trained-model-e5]]
18 changes: 11 additions & 7 deletions docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc
@@ -108,13 +108,17 @@ that walks through upgrading an index to ELSER V2.
You can download and deploy ELSER either from **{ml-app}** > **Trained Models**,
from **Search** > **Indices**, or by using the Dev Console.

IMPORTANT: You can deploy the model multiple times by assigning a unique
deployment ID when starting the deployment. It enables you to have dedicated
deployments for different purposes, such as search and ingest. By doing so, you
ensure that the search speed remains unaffected by ingest workloads, and vice
versa. Having separate deployments for search and ingest mitigates performance
issues resulting from interactions between the two, which can be hard to
diagnose.
[NOTE]
====
* For most cases, the preferred version is the **Intel and Linux optimized**
model; downloading and deploying that version is recommended.
* You can deploy the model multiple times by assigning a unique deployment ID
when starting the deployment. This enables you to have dedicated deployments for
different purposes, such as search and ingest, ensuring that search speed
remains unaffected by ingest workloads, and vice versa. Having separate
deployments for search and ingest mitigates performance issues resulting from
interactions between the two, which can be hard to diagnose.
====
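The dedicated-deployment pattern described above can be sketched in the Dev Console. The `deployment_id` values below are illustrative, assuming the Linux optimized ELSER v2 model:

[source,console]
----
# One deployment reserved for ingest pipelines
POST _ml/trained_models/.elser_model_2_linux-x86_64/deployment/_start?deployment_id=for_ingest

# A separate deployment reserved for search, isolated from ingest load
POST _ml/trained_models/.elser_model_2_linux-x86_64/deployment/_start?deployment_id=for_search
----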


[discrete]