Commit 1bc560e
Run pre-commit to the examples
alonsosilvaallende committed Aug 13, 2024
1 parent f9141ec commit 1bc560e
Showing 4 changed files with 20 additions and 20 deletions.
10 changes: 5 additions & 5 deletions docs/cookbook/chain_of_thought.md
@@ -16,7 +16,7 @@ We download the model weights by passing the name of the repository on the Huggi
import llama_cpp
from outlines import generate, models

-model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
+model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
"Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-8B"
@@ -30,18 +30,18 @@ model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
??? note "(Optional) Store the model weights in a custom folder"

By default the model weights are downloaded to the hub cache, but if we want to store the weights in a custom folder, we pull a quantized GGUF model [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF) by [NousResearch](https://nousresearch.com/) from [HuggingFace](https://huggingface.co/):

```bash
wget https://hf.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
```

We initialize the model:

```python
import llama_cpp
from llama_cpp import Llama
from outlines import generate, models

llm = Llama(
"/path/to/model/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
10 changes: 5 additions & 5 deletions docs/cookbook/knowledge_graph_extraction.md
@@ -13,7 +13,7 @@ We download the model weights by passing the name of the repository on the Huggi
import llama_cpp
from outlines import generate, models

-model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
+model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
"Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-8B"
@@ -27,18 +27,18 @@ model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
??? note "(Optional) Store the model weights in a custom folder"

By default the model weights are downloaded to the hub cache, but if we want to store the weights in a custom folder, we pull a quantized GGUF model [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF) by [NousResearch](https://nousresearch.com/) from [HuggingFace](https://huggingface.co/):

```bash
wget https://hf.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
```

We initialize the model:

```python
import llama_cpp
from llama_cpp import Llama
from outlines import generate, models

llm = Llama(
"/path/to/model/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
10 changes: 5 additions & 5 deletions docs/cookbook/qa-with-citations.md
@@ -13,7 +13,7 @@ We download the model weights by passing the name of the repository on the Huggi
import llama_cpp
from outlines import generate, models

-model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
+model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
"Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-8B"
@@ -27,18 +27,18 @@ model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
??? note "(Optional) Store the model weights in a custom folder"

By default the model weights are downloaded to the hub cache, but if we want to store the weights in a custom folder, we pull a quantized GGUF model [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF) by [NousResearch](https://nousresearch.com/) from [HuggingFace](https://huggingface.co/):

```bash
wget https://hf.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
```

We initialize the model:

```python
import llama_cpp
from llama_cpp import Llama
from outlines import generate, models

llm = Llama(
"/path/to/model/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
10 changes: 5 additions & 5 deletions docs/cookbook/react_agent.md
@@ -17,7 +17,7 @@ We download the model weights by passing the name of the repository on the Huggi
import llama_cpp
from outlines import generate, models

-model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
+model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
"Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
"NousResearch/Hermes-2-Pro-Llama-3-8B"
@@ -31,18 +31,18 @@ model = models.llamacpp("NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
??? note "(Optional) Store the model weights in a custom folder"

By default the model weights are downloaded to the hub cache, but if we want to store the weights in a custom folder, we pull a quantized GGUF model [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF) by [NousResearch](https://nousresearch.com/) from [HuggingFace](https://huggingface.co/):

```bash
wget https://hf.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/resolve/main/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
```

We initialize the model:

```python
import llama_cpp
from llama_cpp import Llama
from outlines import generate, models

llm = Llama(
"/path/to/model/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
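For context on the recipes this commit touches: each snippet above downloads the Hermes-2-Pro GGUF weights and initializes a model that the cookbook then hands to outlines' generation API. A minimal end-to-end sketch of that pattern, assuming the outlines 0.x `models.llamacpp` and `generate.text` interfaces shown in the diffs (the prompt string is illustrative, not taken from the docs):

```python
import llama_cpp
from outlines import generate, models

# Download the GGUF weights from the Hugging Face Hub and wrap them,
# mirroring the initialization shown in the changed files above.
model = models.llamacpp(
    "NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF",
    "Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf",
    tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
        "NousResearch/Hermes-2-Pro-Llama-3-8B"
    ),
)

# Build an unconstrained text generator and run a single prompt.
# (Illustrative prompt; the cookbook recipes go on to use structured generation.)
generator = generate.text(model)
answer = generator("What is the capital of France?", max_tokens=64)
print(answer)
```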
