Adjust pydoc markdown config so methods shown with classes #2511

Merged · 8 commits merged on May 6, 2022
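The gist of this PR is a pydoc-markdown renderer option that prefixes each method heading with its class name, so that `#### crawl` is rendered as `#### Crawler.crawl`. The sketch below shows roughly what such a config could look like; `add_method_class_prefix` appears among pydoc-markdown's `MarkdownRenderer` options, but the loader path, processors, and output filename here are assumptions, not the exact file changed in this PR.

```yaml
# Hypothetical pydoc-markdown config sketch (not the actual file touched by this PR).
loaders:
  - type: python
    search_path: [../../../../haystack/nodes/connector]   # assumed module path
processors:
  - type: filter
  - type: smart
  - type: crossref
renderer:
  type: markdown
  filename: crawler.md            # assumed output file
  # Render method headings as "Crawler.crawl" instead of just "crawl".
  add_method_class_prefix: true
```

The regenerated docs below reflect exactly that change: every method heading gains its class name as a prefix.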
1 change: 1 addition & 0 deletions README.md
@@ -326,3 +326,4 @@ Here's a list of organizations who use Haystack. Don't hesitate to send a PR to
- [Etalab](https://www.etalab.gouv.fr/)
- [Infineon](https://www.infineon.com/)
- [Sooth.ai](https://sooth.ai/)

6 changes: 3 additions & 3 deletions docs/_src/api/api/crawler.md
@@ -24,7 +24,7 @@ Crawl texts from a website so that we can use them later in Haystack as a corpus

<a id="crawler.Crawler.__init__"></a>

#### \_\_init\_\_
#### Crawler.\_\_init\_\_

```python
def __init__(output_dir: str, urls: Optional[List[str]] = None, crawler_depth: int = 1, filter_urls: Optional[List] = None, overwrite_existing_files=True, id_hash_keys: Optional[List[str]] = None)
@@ -49,7 +49,7 @@ In this case the id will be generated by using the content and the defined metad

<a id="crawler.Crawler.crawl"></a>

#### crawl
#### Crawler.crawl

```python
def crawl(output_dir: Union[str, Path, None] = None, urls: Optional[List[str]] = None, crawler_depth: Optional[int] = None, filter_urls: Optional[List] = None, overwrite_existing_files: Optional[bool] = None, id_hash_keys: Optional[List[str]] = None) -> List[Path]
@@ -83,7 +83,7 @@ List of paths where the crawled webpages got stored

<a id="crawler.Crawler.run"></a>

#### run
#### Crawler.run

```python
def run(output_dir: Union[str, Path, None] = None, urls: Optional[List[str]] = None, crawler_depth: Optional[int] = None, filter_urls: Optional[List] = None, overwrite_existing_files: Optional[bool] = None, return_documents: Optional[bool] = False, id_hash_keys: Optional[List[str]] = None) -> Tuple[Dict, str]
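As an aside, the `Crawler` signatures documented above can be exercised roughly as follows. This is a usage sketch, not part of the diff; the `haystack.nodes` import path and the example URL are assumptions.

```python
# Minimal usage sketch for the documented Crawler API (import path and URL assumed).
from haystack.nodes import Crawler

# output_dir is required; crawler_depth=1 follows links one level deep.
crawler = Crawler(output_dir="crawled_files", crawler_depth=1)

# Crawler.crawl stores each page as a file and returns the paths (List[Path]).
paths = crawler.crawl(urls=["https://haystack.deepset.ai/overview/get-started"])
print(paths)
```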
6 changes: 3 additions & 3 deletions docs/_src/api/api/document_classifier.md
@@ -12,7 +12,7 @@ class BaseDocumentClassifier(BaseComponent)

<a id="base.BaseDocumentClassifier.timing"></a>

#### timing
#### BaseDocumentClassifier.timing

```python
def timing(fn, attr_name)
@@ -81,7 +81,7 @@ With this document_classifier, you can directly get predictions via predict()

<a id="transformers.TransformersDocumentClassifier.__init__"></a>

#### \_\_init\_\_
#### TransformersDocumentClassifier.\_\_init\_\_

```python
def __init__(model_name_or_path: str = "bhadresh-savani/distilbert-base-uncased-emotion", model_version: Optional[str] = None, tokenizer: Optional[str] = None, use_gpu: bool = True, return_all_scores: bool = False, task: str = "text-classification", labels: Optional[List[str]] = None, batch_size: int = -1, classification_field: str = None)
@@ -119,7 +119,7 @@ or an entailment.

<a id="transformers.TransformersDocumentClassifier.predict"></a>

#### predict
#### TransformersDocumentClassifier.predict

```python
def predict(documents: List[Document]) -> List[Document]
302 changes: 151 additions & 151 deletions docs/_src/api/api/document_store.md

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions docs/_src/api/api/evaluation.md
@@ -22,7 +22,7 @@ Please use pipeline.eval() instead.

<a id="evaluator.EvalDocuments.__init__"></a>

#### \_\_init\_\_
#### EvalDocuments.\_\_init\_\_

```python
def __init__(debug: bool = False, open_domain: bool = True, top_k: int = 10)
@@ -37,7 +37,7 @@ When False, correct retrieval is evaluated based on document_id.

<a id="evaluator.EvalDocuments.run"></a>

#### run
#### EvalDocuments.run

```python
def run(documents: List[Document], labels: List[Label], top_k: Optional[int] = None)
@@ -47,7 +47,7 @@ Run this node on one sample and its labels

<a id="evaluator.EvalDocuments.print"></a>

#### print
#### EvalDocuments.print

```python
def print()
@@ -75,7 +75,7 @@ Please use pipeline.eval() instead.

<a id="evaluator.EvalAnswers.__init__"></a>

#### \_\_init\_\_
#### EvalAnswers.\_\_init\_\_

```python
def __init__(skip_incorrect_retrieval: bool = True, open_domain: bool = True, sas_model: str = None, debug: bool = False)
@@ -100,7 +100,7 @@ Models:

<a id="evaluator.EvalAnswers.run"></a>

#### run
#### EvalAnswers.run

```python
def run(labels: List[Label], answers: List[Answer], correct_retrieval: bool)
@@ -110,7 +110,7 @@ Run this node on one sample and its labels

<a id="evaluator.EvalAnswers.print"></a>

#### print
#### EvalAnswers.print

```python
def print(mode)
4 changes: 2 additions & 2 deletions docs/_src/api/api/extractor.md
@@ -19,7 +19,7 @@ The entities extracted by this Node will populate Document.entities

<a id="entity.EntityExtractor.run"></a>

#### run
#### EntityExtractor.run

```python
def run(documents: Optional[Union[List[Document], List[dict]]] = None) -> Tuple[Dict, str]
@@ -29,7 +29,7 @@ This is the method called when this node is used in a pipeline

<a id="entity.EntityExtractor.extract"></a>

#### extract
#### EntityExtractor.extract

```python
def extract(text)
4 changes: 2 additions & 2 deletions docs/_src/api/api/file_classifier.md
@@ -14,7 +14,7 @@ Route files in an Indexing Pipeline to corresponding file converters.

<a id="file_type.FileTypeClassifier.__init__"></a>

#### \_\_init\_\_
#### FileTypeClassifier.\_\_init\_\_

```python
def __init__(supported_types: List[str] = DEFAULT_TYPES)
@@ -33,7 +33,7 @@ also be rejected.

<a id="file_type.FileTypeClassifier.run"></a>

#### run
#### FileTypeClassifier.run

```python
def run(file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]])
30 changes: 15 additions & 15 deletions docs/_src/api/api/file_converter.md
@@ -14,7 +14,7 @@ Base class for implementing file converts to transform input documents to text f

<a id="base.BaseConverter.__init__"></a>

#### \_\_init\_\_
#### BaseConverter.\_\_init\_\_

```python
def __init__(remove_numeric_tables: bool = False, valid_languages: Optional[List[str]] = None, id_hash_keys: Optional[List[str]] = None)
@@ -39,7 +39,7 @@ In this case the id will be generated by using the content and the defined metad

<a id="base.BaseConverter.convert"></a>

#### convert
#### BaseConverter.convert

```python
@abstractmethod
@@ -73,7 +73,7 @@ In this case the id will be generated by using the content and the defined metad

<a id="base.BaseConverter.validate_language"></a>

#### validate\_language
#### BaseConverter.validate\_language

```python
def validate_language(text: str, valid_languages: Optional[List[str]] = None) -> bool
@@ -129,7 +129,7 @@ class DocxToTextConverter(BaseConverter)

<a id="docx.DocxToTextConverter.convert"></a>

#### convert
#### DocxToTextConverter.convert

```python
def convert(file_path: Path, meta: Optional[Dict[str, str]] = None, remove_numeric_tables: Optional[bool] = None, valid_languages: Optional[List[str]] = None, encoding: Optional[str] = None, id_hash_keys: Optional[List[str]] = None) -> List[Document]
@@ -174,7 +174,7 @@ class ImageToTextConverter(BaseConverter)

<a id="image.ImageToTextConverter.__init__"></a>

#### \_\_init\_\_
#### ImageToTextConverter.\_\_init\_\_

```python
def __init__(remove_numeric_tables: bool = False, valid_languages: Optional[List[str]] = ["eng"], id_hash_keys: Optional[List[str]] = None)
@@ -201,7 +201,7 @@ In this case the id will be generated by using the content and the defined metad

<a id="image.ImageToTextConverter.convert"></a>

#### convert
#### ImageToTextConverter.convert

```python
def convert(file_path: Union[Path, str], meta: Optional[Dict[str, str]] = None, remove_numeric_tables: Optional[bool] = None, valid_languages: Optional[List[str]] = None, encoding: Optional[str] = "utf-8", id_hash_keys: Optional[List[str]] = None) -> List[Document]
@@ -243,7 +243,7 @@ class MarkdownConverter(BaseConverter)

<a id="markdown.MarkdownConverter.convert"></a>

#### convert
#### MarkdownConverter.convert

```python
def convert(file_path: Path, meta: Optional[Dict[str, str]] = None, remove_numeric_tables: Optional[bool] = None, valid_languages: Optional[List[str]] = None, encoding: Optional[str] = "utf-8", id_hash_keys: Optional[List[str]] = None) -> List[Document]
@@ -265,7 +265,7 @@ In this case the id will be generated by using the content and the defined metad

<a id="markdown.MarkdownConverter.markdown_to_text"></a>

#### markdown\_to\_text
#### MarkdownConverter.markdown\_to\_text

```python
@staticmethod
@@ -292,7 +292,7 @@ class PDFToTextConverter(BaseConverter)

<a id="pdf.PDFToTextConverter.__init__"></a>

#### \_\_init\_\_
#### PDFToTextConverter.\_\_init\_\_

```python
def __init__(remove_numeric_tables: bool = False, valid_languages: Optional[List[str]] = None, id_hash_keys: Optional[List[str]] = None, encoding: Optional[str] = "UTF-8")
@@ -320,7 +320,7 @@ Defaults to "UTF-8" in order to support special characters (e.g. German Umlauts,

<a id="pdf.PDFToTextConverter.convert"></a>

#### convert
#### PDFToTextConverter.convert

```python
def convert(file_path: Path, meta: Optional[Dict[str, str]] = None, remove_numeric_tables: Optional[bool] = None, valid_languages: Optional[List[str]] = None, encoding: Optional[str] = None, id_hash_keys: Optional[List[str]] = None) -> List[Document]
@@ -360,7 +360,7 @@ class PDFToTextOCRConverter(BaseConverter)

<a id="pdf.PDFToTextOCRConverter.__init__"></a>

#### \_\_init\_\_
#### PDFToTextOCRConverter.\_\_init\_\_

```python
def __init__(remove_numeric_tables: bool = False, valid_languages: Optional[List[str]] = ["eng"], id_hash_keys: Optional[List[str]] = None)
@@ -387,7 +387,7 @@ In this case the id will be generated by using the content and the defined metad

<a id="pdf.PDFToTextOCRConverter.convert"></a>

#### convert
#### PDFToTextOCRConverter.convert

```python
def convert(file_path: Path, meta: Optional[Dict[str, str]] = None, remove_numeric_tables: Optional[bool] = None, valid_languages: Optional[List[str]] = None, encoding: Optional[str] = "UTF-8", id_hash_keys: Optional[List[str]] = None) -> List[Document]
@@ -432,7 +432,7 @@ class TikaConverter(BaseConverter)

<a id="tika.TikaConverter.__init__"></a>

#### \_\_init\_\_
#### TikaConverter.\_\_init\_\_

```python
def __init__(tika_url: str = "http://localhost:9998/tika", remove_numeric_tables: bool = False, valid_languages: Optional[List[str]] = None, id_hash_keys: Optional[List[str]] = None)
@@ -458,7 +458,7 @@ In this case the id will be generated by using the content and the defined metad

<a id="tika.TikaConverter.convert"></a>

#### convert
#### TikaConverter.convert

```python
def convert(file_path: Path, meta: Optional[Dict[str, str]] = None, remove_numeric_tables: Optional[bool] = None, valid_languages: Optional[List[str]] = None, encoding: Optional[str] = None, id_hash_keys: Optional[List[str]] = None) -> List[Document]
@@ -502,7 +502,7 @@ class TextConverter(BaseConverter)

<a id="txt.TextConverter.convert"></a>

#### convert
#### TextConverter.convert

```python
def convert(file_path: Path, meta: Optional[Dict[str, str]] = None, remove_numeric_tables: Optional[bool] = None, valid_languages: Optional[List[str]] = None, encoding: Optional[str] = "utf-8", id_hash_keys: Optional[List[str]] = None) -> List[Document]
10 changes: 5 additions & 5 deletions docs/_src/api/api/generator.md
@@ -14,7 +14,7 @@ Abstract class for Generators

<a id="base.BaseGenerator.predict"></a>

#### predict
#### BaseGenerator.predict

```python
@abstractmethod
@@ -87,7 +87,7 @@ i.e. the model can easily adjust to domain documents even after training has fin

<a id="transformers.RAGenerator.__init__"></a>

#### \_\_init\_\_
#### RAGenerator.\_\_init\_\_

```python
def __init__(model_name_or_path: str = "facebook/rag-token-nq", model_version: Optional[str] = None, retriever: Optional[DensePassageRetriever] = None, generator_type: str = "token", top_k: int = 2, max_length: int = 200, min_length: int = 2, num_beams: int = 2, embed_title: bool = True, prefix: Optional[str] = None, use_gpu: bool = True)
@@ -115,7 +115,7 @@ See https://huggingface.co/models for full list of available models.

<a id="transformers.RAGenerator.predict"></a>

#### predict
#### RAGenerator.predict

```python
def predict(query: str, documents: List[Document], top_k: Optional[int] = None) -> Dict
@@ -207,7 +207,7 @@ For a list of all text-generation models see https://huggingface.co/models?pipel

<a id="transformers.Seq2SeqGenerator.__init__"></a>

#### \_\_init\_\_
#### Seq2SeqGenerator.\_\_init\_\_

```python
def __init__(model_name_or_path: str, input_converter: Optional[Callable] = None, top_k: int = 1, max_length: int = 200, min_length: int = 2, num_beams: int = 8, use_gpu: bool = True)
@@ -229,7 +229,7 @@ top_k: Optional[int] = None) -> BatchEncoding:

<a id="transformers.Seq2SeqGenerator.predict"></a>

#### predict
#### Seq2SeqGenerator.predict

```python
def predict(query: str, documents: List[Document], top_k: Optional[int] = None) -> Dict
6 changes: 3 additions & 3 deletions docs/_src/api/api/other.md
@@ -36,7 +36,7 @@ The node allows multiple join modes:

<a id="join_docs.JoinDocuments.__init__"></a>

#### \_\_init\_\_
#### JoinDocuments.\_\_init\_\_

```python
def __init__(join_mode: str = "concatenate", weights: Optional[List[float]] = None, top_k_join: Optional[int] = None)
@@ -67,7 +67,7 @@ A node to join `Answer`s produced by multiple `Reader` nodes.

<a id="join_answers.JoinAnswers.__init__"></a>

#### \_\_init\_\_
#### JoinAnswers.\_\_init\_\_

```python
def __init__(join_mode: str = "concatenate", weights: Optional[List[float]] = None, top_k_join: Optional[int] = None)
@@ -99,7 +99,7 @@ different nodes.

<a id="route_documents.RouteDocuments.__init__"></a>

#### \_\_init\_\_
#### RouteDocuments.\_\_init\_\_

```python
def __init__(split_by: str = "content_type", metadata_values: Optional[List[str]] = None)