Commit

updated documentation (#98)
Signed-off-by: Dhrubo Saha <dhrubo@amazon.com>
(cherry picked from commit fcd53fc)
dhrubo-os authored and github-actions[bot] committed Mar 7, 2023
1 parent 4295ba5 commit 53c2616
Showing 5 changed files with 27 additions and 9 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -18,7 +18,7 @@ OpenSearch-py-ml
## Welcome!

**opensearch-py-ml** is a Python client that provides a suite of data analytics and machine learning tools for OpenSearch.
**Opensearch-py-ml is an experimental project**

It is [a community-driven, open source fork](https://aws.amazon.com/blogs/opensource/introducing-opensearch/) of [eland](https://github.com/elastic/eland), which provides data analysis and machine learning,
licensed under the [Apache v2.0 License](https://github.com/opensearch-project/opensearch-py/blob/main/LICENSE.txt).

@@ -35,9 +35,9 @@ For more information, see [opensearch.org](https://opensearch.org/docs/latest/cl

Opensearch-py-ml can be installed from [PyPI](https://pypi.org/project/opensearch-py-ml) via pip:

- ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$ python -m pip install opensearch-py-ml
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## Code of Conduct

@@ -432,10 +432,30 @@
" output_model_name = 'test2_model.pt',\n",
" zip_file_name= 'test2_model.zip',\n",
" overwrite = True,\n",
- " num_epochs = 1,\n",
+ " num_epochs = 10,\n",
" verbose = False)"
]
},
{
"cell_type": "markdown",
"id": "7af60a71",
"metadata": {},
"source": [
"Following are some important points about the training cell executed above:\n",
"\n",
"1. The input to the training script consists of (query, passage) pairs. The model is trained to maximize the dot product between queries and relevant passages while minimizing the dot product between queries and irrelevant passages. This is known as contrastive learning; here it is implemented with in-batch negatives and a symmetric loss, as described below. \n",
"\n",
"2. To utilize the power of GPUs, we collect training samples into a batch before sending them to the model. Each batch contains B randomly selected training pairs (q, p). Thus, within a batch, each query has one relevant passage and B-1 irrelevant passages; likewise, each passage has one relevant query and B-1 irrelevant queries. \n",
"\n",
"3. For a batch of size B, the loss is defined as loss = C(q, p) + C(p, q), where $C(q, p) = - \\sum_{i=1}^{B} \\log \\left( \\frac{\\exp(q_i \\cdot p_i)}{\\sum_{j=1}^{B} \\exp(q_i \\cdot p_j)}\\right)$ \n",
"\n",
"4. The model truncates documents beyond 512 tokens. If every document in the corpus is shorter than 512 tokens, the model's max length can be reduced accordingly; shorter sequences take less memory and therefore allow for bigger batch sizes. The max length can be adjusted with the \"percentile\" argument. \n",
"\n",
"5. We use a batch size of 32 per device. Larger batch sizes provide more in-batch negative samples and therefore better performance, but they can also cause out-of-memory errors. Since shorter sequences use less memory, feel free to experiment with larger batch sizes if the document corpus is short.\n",
"\n",
"6. The model is trained with the AdamW optimizer for 10 epochs, a learning rate of 2e-5, and a linear learning-rate schedule with 10,000 warmup steps."
]
},
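The symmetric in-batch loss from points 1–3 and the optimizer setup from point 6 can be sketched in PyTorch roughly as follows. This is an illustrative sketch, not the notebook's actual training code: `encoder` is a stand-in for the real sentence-transformer model, and `total_steps` is a hypothetical total number of update steps.

```python
import torch
import torch.nn.functional as F

def symmetric_in_batch_loss(q_emb: torch.Tensor, p_emb: torch.Tensor) -> torch.Tensor:
    """loss = C(q, p) + C(p, q) over a batch of B (query, passage) embeddings.

    scores[i, j] = q_i . p_j, so the diagonal holds the relevant pairs and
    every off-diagonal entry serves as an in-batch negative. F.cross_entropy
    computes the -log-softmax term of C; it averages over the batch rather
    than summing, which only rescales the loss by a constant factor.
    """
    scores = q_emb @ p_emb.T                              # (B, B) dot products
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels) + F.cross_entropy(scores.T, labels)

# AdamW with a linear warmup schedule, as in point 6. `encoder` stands in for
# the real model; `total_steps` is an assumed total number of update steps.
encoder = torch.nn.Linear(768, 768)
optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)
warmup_steps, total_steps = 10_000, 100_000

def linear_warmup(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)                # ramp up during warmup
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_warmup)
```

In a training loop, `scheduler.step()` would be called after each `optimizer.step()` to advance the learning rate along the warmup/decay ramp.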
{
"cell_type": "markdown",
"id": "c9bd0405",
@@ -549,4 +569,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
- }
+ }
2 changes: 0 additions & 2 deletions docs/source/index.rst
@@ -15,8 +15,6 @@ Opensearch-py-ml: DataFrames and Machine Learning backed by Opensearch
Opensearch-py-ml is a Python Opensearch client for exploring and analyzing data
in Opensearch with a familiar Pandas-compatible API.

- **Opensearch-py-ml is an experimental project**
-
Where possible the package uses existing Python APIs and data structures to make it easy to switch between numpy,
pandas, scikit-learn to their Opensearch powered equivalents. In general, the data resides in Opensearch and
not in memory, which allows Opensearch-py-ml to access large datasets stored in Opensearch.
2 changes: 1 addition & 1 deletion requirements-dev.txt
@@ -4,7 +4,7 @@
pandas>=1.5.2,<2
matplotlib>=3.6.2,<4
numpy>=1.24.0,<2
- opensearch-py==2.1.1
+ opensearch-py>=2.2.0
torch==1.13.1
accelerate
sentence_transformers
2 changes: 1 addition & 1 deletion requirements.txt
@@ -4,7 +4,7 @@
pandas>=1.5.2,<2
matplotlib>=3.6.2,<4
numpy>=1.24.0
- opensearch-py==2.1.1
+ opensearch-py>=2.2.0
torch==1.13.1
accelerate
sentence_transformers
