[DOCS] Improves documentation about deploying DFA trained models (#2524) (#2526)

(cherry picked from commit a417a70)
szabosteve authored Aug 21, 2023
1 parent b59aa76 commit ffd2d28
Showing 7 changed files with 66 additions and 14 deletions.
2 changes: 2 additions & 0 deletions docs/en/stack/ml/df-analytics/ml-dfa-classification.asciidoc
@@ -258,6 +258,8 @@
The model that you created is stored as {es} documents in internal indices. In
other words, the characteristics of your trained model are saved and ready to be
deployed and used as functions.

include::ml-dfa-shared.asciidoc[tag=dfa-deploy-model]


[discrete]
[[ml-inference-class]]
2 changes: 2 additions & 0 deletions docs/en/stack/ml/df-analytics/ml-dfa-regression.asciidoc
@@ -194,6 +194,8 @@
deployed and used as functions. The <<ml-inference-reg,{infer}>> feature enables
you to use your model in a preprocessor of an ingest pipeline or in a pipeline
aggregation of a search query to make predictions about your data.

include::ml-dfa-shared.asciidoc[tag=dfa-deploy-model]


[discrete]
[[ml-inference-reg]]
47 changes: 41 additions & 6 deletions docs/en/stack/ml/df-analytics/ml-dfa-shared.asciidoc
@@ -1,3 +1,40 @@
tag::dfa-deploy-model[]
. To deploy a {dfanalytics} model in a pipeline, navigate to **Machine Learning** >
**Model Management** > **Trained models** in {kib}.

. Find the model you want to deploy in the list and click **Deploy model** in
the **Actions** menu.
+
--
[role="screenshot"]
image::images/ml-dfa-trained-models-ui.png["The trained models UI in {kib}"]
--

. Create an {infer} pipeline so that you can use the model against new data
through the pipeline. Add a name and a description or use the default values.
+
--
[role="screenshot"]
image::images/ml-dfa-inference-pipeline.png["Creating an inference pipeline"]
--

. Configure the pipeline processors or use the default settings.
+
--
[role="screenshot"]
image::images/ml-dfa-inference-processor.png["Configuring an inference processor"]
--

. Configure how to handle ingest failures or use the default settings.

. (Optional) Test your pipeline by running a simulation of the pipeline to
confirm it produces the anticipated results.

. Review the settings and click **Create pipeline**.

The model is deployed and ready to use through the {infer} pipeline.
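
As a rough API-level sketch of the UI steps above, a single create pipeline
request can define the {infer} processor and the failure handling; the pipeline
name, model ID, and failure field below are hypothetical:

[source,console]
----
PUT _ingest/pipeline/my-inference-pipeline
{
  "description": "Enrich documents with predictions from my-model",
  "processors": [
    {
      "inference": {
        "model_id": "my-model",
        "target_field": "ml.inference"
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "ingest_failure_message",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}
----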
end::dfa-deploy-model[]


tag::dfa-evaluation-intro[]
Using the {dfanalytics} features to gain insights from a data set is an
iterative process. After you have defined the problem you want to solve, and chosen
@@ -18,10 +55,8 @@
the ground truth. The {evaluatedf-api} evaluates the performance of the
end::dfa-evaluation-intro[]

tag::dfa-inference[]
{infer-cap} enables you to use <<ml-trained-models,trained {ml} models>> against
incoming data in a continuous fashion.

For instance, suppose you have an online service and you would like to predict
whether a customer is likely to churn. You have an index with historical data –
@@ -43,7 +78,7 @@
are indexed into the destination index.

Check the {ref}/inference-processor.html[{infer} processor] and
{ref}/ml-df-analytics-apis.html[the {ml} {dfanalytics} API documentation] to
learn more.
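
For illustration, a pipeline that contains an {infer} processor can be
exercised without indexing any documents by using the simulate pipeline API;
the pipeline ID and the document fields below are hypothetical:

[source,console]
----
POST _ingest/pipeline/my-inference-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "monthly_charges": 103.7,
        "customer_service_calls": 4
      }
    }
  ]
}
----

The simulated response shows the prediction fields the processor would add,
which makes it easier to verify the configuration before ingesting real data.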
end::dfa-inference-processor[]

tag::dfa-inference-aggregation[]
@@ -58,7 +93,7 @@
to set up a processor in the ingest pipeline.
Check the
{ref}/search-aggregations-pipeline-inference-bucket-aggregation.html[{infer} bucket aggregation]
and {ref}/ml-df-analytics-apis.html[the {ml} {dfanalytics} API documentation] to
learn more.
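
As a minimal sketch, an {infer} bucket aggregation sits as a sibling of the
metric aggregations that build the features, and `buckets_path` maps the
model's expected field names to those aggregations; the index, field, and
model names below are hypothetical:

[source,console]
----
GET customers/_search
{
  "size": 0,
  "aggs": {
    "per_customer": {
      "terms": { "field": "customer_id" },
      "aggs": {
        "avg_monthly_charges": {
          "avg": { "field": "monthly_charges" }
        },
        "churn_prediction": {
          "inference": {
            "model_id": "my-model",
            "buckets_path": {
              "avg_monthly_charges": "avg_monthly_charges"
            }
          }
        }
      }
    }
  }
}
----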

NOTE: If you use trained model aliases to reference your trained model in an
{infer} processor or {infer} aggregation, you can replace your trained model
29 changes: 21 additions & 8 deletions docs/en/stack/ml/df-analytics/ml-trained-models.asciidoc
@@ -12,14 +12,6 @@
information about this process, see <<ml-supervised-workflow>> and
<<ml-inference-class,{infer} for {classification}>> and
<<ml-inference-reg,{regression}>>.

You can also supply trained models that are not created by {dfanalytics-job} but
adhere to the appropriate
https://github.com/elastic/ml-json-schemas[JSON schema]. Likewise, you can use
third-party models to perform natural language processing (NLP) tasks. If you
want to use these trained models in the {stack}, you must store them in {es}
documents by using the {ref}/put-trained-models.html[create trained models API].
For more information about NLP models, refer to <<ml-nlp-deploy-models>>.

In {kib}, you can view and manage your trained models in
*{stack-manage-app}* > *Alerts and Insights* > *{ml-app}* and
*{ml-app}* > *Model Management*.
@@ -28,6 +20,27 @@
Alternatively, you can use APIs like
{ref}/get-trained-models.html[get trained models] and
{ref}/delete-trained-models.html[delete trained models].
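
For example, the API equivalents of listing and removing a model might look
like the following sketch (the model ID is hypothetical):

[source,console]
----
GET _ml/trained_models/my-model

DELETE _ml/trained_models/my-model
----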

[discrete]
[[deploy-dfa-trained-models]]
== Deploying trained models

[discrete]
=== Models trained by {dfanalytics}

include::ml-dfa-shared.asciidoc[tag=dfa-deploy-model]


[discrete]
=== Models trained by other methods

You can also supply trained models that are not created by {dfanalytics-job} but
adhere to the appropriate
https://github.com/elastic/ml-json-schemas[JSON schema]. Likewise, you can use
third-party models to perform natural language processing (NLP) tasks. If you
want to use these trained models in the {stack}, you must store them in {es}
documents by using the {ref}/put-trained-models.html[create trained models API].
For more information about NLP models, refer to <<ml-nlp-deploy-models>>.
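
As a minimal sketch of what such a stored model can look like, the following
request creates a tiny single-tree {regression} model with the create trained
models API; the model ID, feature name, threshold, and leaf values are
hypothetical:

[source,console]
----
PUT _ml/trained_models/my-handmade-model
{
  "input": { "field_names": ["monthly_charges"] },
  "inference_config": { "regression": {} },
  "definition": {
    "trained_model": {
      "tree": {
        "feature_names": ["monthly_charges"],
        "target_type": "regression",
        "tree_structure": [
          {
            "node_index": 0,
            "split_feature": 0,
            "threshold": 100.0,
            "left_child": 1,
            "right_child": 2
          },
          { "node_index": 1, "leaf_value": 0.1 },
          { "node_index": 2, "leaf_value": 0.9 }
        ]
      }
    }
  }
}
----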


[discrete]
[[export-import]]
