diff --git a/docs/en/stack/ml/nlp/ml-nlp-text-emb-vector-search-example.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-text-emb-vector-search-example.asciidoc
index af8d8d0a1..5518f0c59 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-text-emb-vector-search-example.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-text-emb-vector-search-example.asciidoc
@@ -122,7 +122,7 @@ The result is the predicted dense vector transformed from the example text.
 In this step, you load the data that you later use in an ingest pipeline to
 get the embeddings.
 
-The data set `msmarco-passagetest2019-top1000` is a subset of the MS MACRO
+The data set `msmarco-passagetest2019-top1000` is a subset of the MS MARCO
 Passage Ranking data set used in the testing stage of the 2019 TREC Deep
 Learning Track. It contains 200 queries and for each query a list of relevant
 text passages extracted by a simple information retrieval (IR) system. From that