Re-organize and add links to papers for missing techniques
rlouf committed Aug 15, 2024
1 parent 01badbb commit f729953
Showing 6 changed files with 122 additions and 63 deletions.
3 changes: 1 addition & 2 deletions docs/cookbook/prompting-techniques/active-prompting.md
@@ -5,9 +5,8 @@ title: Active Prompting
# Active Prompting


Active Prompting is an iterative technique that involves dynamically refining prompts based on the model's responses. This method aims to improve the quality and relevance of the model's outputs by continuously adjusting the input. The process begins with an initial prompt, followed by an evaluation of the model's response. Based on this evaluation, the prompt is modified to address any shortcomings or to further guide the model towards the desired output. This cycle of prompting, evaluating, and refining continues until the desired quality or specificity of response is achieved.
[Active Prompting](https://arxiv.org/abs/2302.12246) is an iterative technique that involves dynamically refining prompts based on the model's responses. This method aims to improve the quality and relevance of the model's outputs by continuously adjusting the input. The process begins with an initial prompt, followed by an evaluation of the model's response. Based on this evaluation, the prompt is modified to address any shortcomings or to further guide the model towards the desired output. This cycle of prompting, evaluating, and refining continues until the desired quality or specificity of response is achieved.

Read more about this prompting technique in [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608).
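The prompt–evaluate–refine loop can be sketched in a few lines of Python. Here `ask_model` and `score_response` are hypothetical stand-ins for a real LLM call and a real evaluation step (e.g. an uncertainty estimate over several samples):

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call.
    return f"Answer to: {prompt}"

def score_response(response: str) -> float:
    # Hypothetical evaluation; a real one might estimate answer uncertainty.
    return 0.9 if "step by step" in response else 0.5

def active_prompting(question: str, threshold: float = 0.8, max_rounds: int = 3) -> str:
    prompt = question
    response = ask_model(prompt)
    for _ in range(max_rounds):
        if score_response(response) >= threshold:
            break
        # Refine the prompt to address shortcomings, e.g. request explicit reasoning.
        prompt = f"{question}\nLet's think step by step."
        response = ask_model(prompt)
    return response
```

The loop stops as soon as the evaluation deems the response good enough, or after a fixed number of rounds.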

## Step by Step Example

4 changes: 1 addition & 3 deletions docs/cookbook/prompting-techniques/analogical-prompting.md
@@ -5,9 +5,7 @@ title: Analogical Prompting
# Analogical Prompting


Analogical Prompting is an advanced prompting technique that automatically generates exemplars including Chain-of-Thought (CoT) reasoning. It works by creating an analogous problem to the target problem, demonstrating the step-by-step reasoning process for solving that analogous problem, and then presenting the target problem. This allows the language model to apply similar reasoning to solve the new problem.

Read more about this prompting technique in [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608).
[Analogical Prompting](https://arxiv.org/abs/2310.01714) is an advanced prompting technique that automatically generates exemplars including Chain-of-Thought (CoT) reasoning. It works by creating an analogous problem to the target problem, demonstrating the step-by-step reasoning process for solving that analogous problem, and then presenting the target problem. This allows the language model to apply similar reasoning to solve the new problem.
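As a rough sketch (the function name and prompt wording below are illustrative, not a fixed API), the prompt simply instructs the model to generate and solve its own analogous problem before tackling the target:

```python
def build_analogical_prompt(problem: str) -> str:
    # Ask the model to self-generate an analogous exemplar with step-by-step
    # reasoning, then solve the target problem in the same style.
    return (
        f"Problem: {problem}\n\n"
        "Instructions:\n"
        "1. Recall a relevant and distinct problem.\n"
        "2. Solve it, explaining each reasoning step.\n"
        "3. Then solve the initial problem the same way.\n"
    )

prompt = build_analogical_prompt(
    "What is the area of a triangle with base 6 and height 4?"
)
```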

## A worked example

@@ -5,9 +5,7 @@ title: Automatic Chain-of-Thought (Auto-CoT) Prompting
# Automatic Chain-of-Thought (Auto-CoT) Prompting


Auto-CoT is a technique that automates the process of creating Chain-of-Thought (CoT) examples for prompting. It works by first using a Zero-Shot CoT prompt on a set of questions to generate chains of thought automatically. The best-generated chains are then selected and used to construct a Few-Shot CoT prompt for the target task. This method reduces the need for manual creation of CoT examples and can potentially generate more diverse and task-specific reasoning chains.

Read more about this prompting technique in [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608).
[Auto-CoT](https://arxiv.org/abs/2210.03493) is a technique that automates the process of creating Chain-of-Thought (CoT) examples for prompting. It works by first using a Zero-Shot CoT prompt on a set of questions to generate chains of thought automatically. The best-generated chains are then selected and used to construct a Few-Shot CoT prompt for the target task. This method reduces the need for manual creation of CoT examples and can potentially generate more diverse and task-specific reasoning chains.
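A minimal sketch of the pipeline follows. The `zero_shot_cot` stub and the take-the-first-k selection are simplifying assumptions: a real implementation calls an LLM with the "Let's think step by step." trigger and selects demonstrations via diversity-based clustering and simple heuristics.

```python
def zero_shot_cot(question: str) -> str:
    # Hypothetical stand-in: a real implementation would call an LLM with
    # the Zero-Shot CoT trigger "Let's think step by step."
    return f"Q: {question}\nA: Let's think step by step. [generated reasoning]"

def build_auto_cot_prompt(demo_questions: list[str], target: str, k: int = 2) -> str:
    # 1. Automatically generate reasoning chains for candidate questions.
    chains = [zero_shot_cot(q) for q in demo_questions]
    # 2. Select k demonstrations (simplified here; Auto-CoT clusters questions
    #    by diversity and filters chains with heuristics such as length).
    selected = chains[:k]
    # 3. Assemble a Few-Shot CoT prompt ending with the target question.
    return "\n\n".join(selected + [f"Q: {target}\nA: Let's think step by step."])
```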

## Step by Step Example

4 changes: 1 addition & 3 deletions docs/cookbook/prompting-techniques/chain-of-thought.md
@@ -5,11 +5,9 @@ title: Chain-of-Thought (CoT) Prompting
# Chain-of-Thought (CoT) Prompting


Chain-of-Thought (CoT) Prompting is a technique that encourages large language models to express their reasoning process before providing a final answer. It typically uses few-shot prompting, where example questions with their corresponding thought processes and answers are provided. This approach has been shown to significantly improve performance on tasks requiring complex reasoning, such as mathematics problems.
[Chain-of-Thought (CoT) Prompting](https://arxiv.org/abs/2201.11903) is a technique that encourages large language models to express their reasoning process before providing a final answer. It typically uses few-shot prompting, where example questions with their corresponding thought processes and answers are provided. This approach has been shown to significantly improve performance on tasks requiring complex reasoning, such as mathematics problems.

The key idea is to guide the model to break down its thinking into smaller, logical steps, mimicking human problem-solving. By doing so, the model can tackle more complex problems and provide more accurate answers, as it's essentially "showing its work."
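A few-shot CoT prompt can be assembled from a single worked exemplar. The exemplar below is adapted from the CoT paper; the helper function itself is an illustrative sketch, not a library API:

```python
# A worked exemplar showing the reasoning style the model should imitate.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11."
)

def cot_prompt(question: str) -> str:
    # Prepend the worked exemplar so the model shows its work the same way.
    return f"{COT_EXEMPLAR}\n\nQ: {question}\nA:"
```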

Read more about this prompting technique in [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608).

## A worked example

111 changes: 88 additions & 23 deletions docs/cookbook/prompting-techniques/index.md
@@ -1,34 +1,99 @@
# Prompting Techniques

This index provides links to various prompting techniques as featured in [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608).
This part of the documentation provides links to various prompting techniques featured in [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608). We closely follow the paper's presentation and provide an implementation of some of these techniques using Outlines. Contributions for the remaining techniques are welcome!

Each is a simple example of the technique using Outlines. Read more about each technique in detail in [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608) and try out simple code examples below.

## The Techniques
# Text-based Techniques

## Few-shot prompting


- [Few-shot Prompting](few-shot-prompting.md) - Provide the model a small number of examples.

### Example selection

- K-Nearest Neighbour - [Paper](https://arxiv.org/abs/2101.06804)
- Vote-K - [Paper](https://arxiv.org/abs/2209.01975)
- [Self-Generated In-Context Learning (SG-ICL)](self-generated-in-context-learning-sg-icl.md) - Uses the model to generate its own in-context learning examples.
- [Prompt Mining](prompt-mining.md) - Extracts effective prompts from existing data or model outputs.
- LENS - [Paper](https://arxiv.org/abs/2302.13539)
- UDR - [Paper](https://arxiv.org/abs/2305.04320)
- Active Example Selection - [Paper](https://arxiv.org/abs/2211.04486)

## Zero-shot prompting

Zero-shot prompting uses zero exemplars.

- [Zero-Shot Prompting](zero-shot-prompting.md) - Generates answers without any task-specific examples or fine-tuning.
- Role Prompting - [Paper 1](https://arxiv.org/abs/2307.05300), [Paper 2](https://arxiv.org/abs/2305.16291), [Paper 3](https://arxiv.org/abs/2311.10054), [Paper 4](https://www.dre.vanderbilt.edu/~schmidt/PDF/ADA_Europe_Position_Paper.pdf)
- Style prompting - [Paper](https://arxiv.org/abs/2302.09185)
- [Emotion Prompting](emotion-prompting.md) - Incorporates emotional context into prompts.
- System 2 Attention (S2A) - [Paper](https://arxiv.org/abs/2311.11829)
- [Simulation Theory of Mind (SimToM)](simtom-simulation-theory-of-mind.md) - Simulates different perspectives or thought processes.
- Rephrase and Respond (RaR) - [Paper](https://arxiv.org/abs/2311.04205)
- [Re-Reading (Re2)](re-reading-re2.md) - Encourages the model to review and refine its own outputs.
- [Self-Ask](self-ask.md) - Prompts the model to ask and answer its own follow-up questions.

## Thought generation

- [Chain of Thought (CoT) Prompting](chain-of-thought.md) - Encourages the model to show its reasoning step-by-step.

### Zero-shot CoT

- [Zero-Shot Chain of Thought (CoT)](zero-shot-chain-of-thought.md) - Applies chain of thought reasoning without specific examples.
- Step-Back prompting - [Paper](https://arxiv.org/abs/2310.06117)
- [Analogical Prompting](analogical-prompting.md) - Uses analogies to guide the model's reasoning.
- [AutoPrompt](autoprompt.md) - Automatically generates prompts for specific tasks.
- [Chain of Thought (CoT) Prompting](chain-of-thought-cot-prompting.md) - Encourages the model to show its reasoning step-by-step.
- [Consistency-Based Self-Adaptive Prompting (CoSP)](consistency-based-self-adaptive-prompting-cosp.md) - Adapts prompts based on consistency of model outputs.
- Thread-of-Thought - [Paper](https://arxiv.org/abs/2311.08734)
- Tabular Chain-of-Thought - [Paper](https://arxiv.org/abs/2305.17812)

### Few-shot CoT


- [Contrastive CoT Prompting](contrastive-cot-prompting.md) - Uses contrasting examples to improve chain of thought reasoning.
- [Cumulative Reasoning](cumulative-reasoning.md) - Builds upon previous reasoning steps to reach a conclusion.
- Uncertainty-Routed CoT prompting - [Paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf)
- [Complexity-based prompting](complexity-based-prompting.md) - Enhances CoT by focusing on complex examples.
- [Active Prompting](active-prompting.md) - Refine prompts dynamically.
- Memory-of-Thought prompting - [Paper](https://arxiv.org/abs/2305.05181)
- [Automatic CoT](automatic-chain-of-thought.md) - Automate the choice of examples for CoT prompting.

## Decomposition

- Least-to-Most prompting - [Paper](https://arxiv.org/abs/2205.10625)
- [Decomposed Prompting (DeComp)](decomposed-prompting-decomp.md) - Breaks down complex tasks into smaller, manageable steps.
- Plan-and-solve prompting - [Paper](http://arxiv.org/abs/2305.04091)
- Tree-of-Thought (ToT) - [Paper 1](http://arxiv.org/abs/2305.10601), [paper 2](http://arxiv.org/abs/2305.08291)
- Recursion-of-Thought - [Paper](http://arxiv.org/abs/2306.06891)
- Program-of-Thought - [Paper](https://arxiv.org/abs/2211.12588)
- Faithful Chain-of-Thought - [Paper](http://arxiv.org/abs/2301.13379)
- [Skeleton-of-Thought](skeleton-of-thought.md) - Provides a structural framework for the model's reasoning.

## Ensembling

- [Demonstration Ensembling (DENSE)](demonstration-ensembling-dense.md) - Combines multiple demonstrations to improve performance.
- [Dialogue-Comprised Policy Gradient-Based Discrete Prompt Optimization (DP2O)](dialogue-comprised-policy-gradient-based-discrete-prompt-optimization-dp2o.md) - Optimizes prompts through dialogue-based interactions.
- [Diverse Diversity-Focused Self-Consistency](diverse-diversity-focused-self-consistency.md) - Promotes diverse outputs while maintaining consistency.
- [Emotion Prompting](emotion-prompting.md) - Incorporates emotional context into prompts.
- Mixture of Reasoning Experts (MoRE) - [Paper](http://umiacs.umd.edu/~jbg//docs/2023_findings_more.pdf)
- [Max Mutual Information Method](max-mutual-information-method.md) - Maximizes mutual information between prompts and desired outputs.
- [Meta Prompting](meta-prompting.md) - Uses prompts to generate or improve other prompts.
- [Prompt Mining](prompt-mining.md) - Extracts effective prompts from existing data or model outputs.
- [Re-Reading (Re2)](re-reading-re2.md) - Encourages the model to review and refine its own outputs.
- [Reversing Chain of Thought (RCoT)](reversing-chain-of-thought-rcot.md) - Applies chain of thought reasoning in reverse order.
- [Self-Ask](self-ask.md) - Prompts the model to ask and answer its own follow-up questions.
- [Self-Calibration](self-calibration.md) - Helps the model adjust its own confidence and accuracy.
- [Self-Consistency](self-consistency.md) - Generates multiple outputs and selects the most consistent one.
- [Self-Generated In-Context Learning (SG-ICL)](self-generated-in-context-learning-sg-icl.md) - Uses the model to generate its own in-context learning examples.
- [Self-Refine](self-refine.md) - Allows the model to iteratively improve its own outputs.
- [Simulation Theory of Mind (SimToM)](simtom-simulation-theory-of-mind.md) - Simulates different perspectives or thought processes.
- [Skeleton of Thought](skeleton-of-thought.md) - Provides a structural framework for the model's reasoning.
- [System 2 Attention (S2A)](system-2-attention-s2a.md) - Mimics human-like deliberate thinking processes.
- Universal self-consistency - [Paper](http://arxiv.org/abs/2311.17311)
- Meta-reasoning over multiple CoTs - [Paper](http://arxiv.org/abs/2304.13007)
- [DiVeRSe](diverse-diversity-focused-self-consistency.md) - Combine multiple prompts with self-consistency.
- Consistency-based Self-adaptive Prompting (COSP) - [Paper](http://arxiv.org/abs/2305.14106)
- [Universal Self-Adaptive Prompting (USP)](universal-self-adaptive-prompting-usp.md) - Adapts prompts across different tasks and domains.
- [Zero-Shot Chain of Thought (CoT)](zero-shot-chain-of-thought-cot.md) - Applies chain of thought reasoning without specific examples.
- [Zero-Shot Prompting](zero-shot-prompting.md) - Generates answers without any task-specific examples or fine-tuning.
- Prompt paraphrasing - [Paper](https://doi.org/10.1162/tacl_a_00324)

## Self-criticism

- [Self-Calibration](self-calibration.md) - Helps the model adjust its own confidence and accuracy.
- Self-Refine - [Paper](https://arxiv.org/abs/2303.17651)
- [Reversing Chain of Thought (RCoT)](reversing-chain-of-thought-rcot.md) - Applies chain of thought reasoning in reverse order.
- Self-Verification - [Paper](https://arxiv.org/abs/2212.09561)
- Chain-of-Verification (COVE) - [Paper](https://arxiv.org/abs/2309.11495)
- [Cumulative Reasoning](cumulative-reasoning.md) - Builds upon previous reasoning steps to reach a conclusion.


# Prompt Engineering


- [Meta Prompting](meta-prompting.md) - Uses prompts to generate or improve other prompts.
- AutoPrompt - [Paper](https://doi.org/10.18653/v1/2020.emnlp-main.346)
- Automatic Prompt Engineering (APE) - [Paper](http://arxiv.org/abs/2211.01910)
- Gradient-free Instructional Prompt Search (GrIPS) - [Paper](https://aclanthology.org/2023.eacl-main.277)
59 changes: 30 additions & 29 deletions mkdocs.yml
@@ -112,39 +112,40 @@ nav:
- Knowledge Graph Extraction: cookbook/knowledge_graph_extraction.md
- Chain of Thought (CoT): cookbook/chain_of_thought.md
- ReAct Agent: cookbook/react_agent.md
- Prompting Techniques:
- Text-based techniques:
- cookbook/prompting-techniques/index.md
- cookbook/prompting-techniques/active-prompting.md
- cookbook/prompting-techniques/analogical-prompting.md
- cookbook/prompting-techniques/automatic-chain-of-thought.md
- cookbook/prompting-techniques/chain-of-thought.md
- cookbook/prompting-techniques/complexity-based-prompting.md
- cookbook/prompting-techniques/contrastive-cot-prompting.md
- cookbook/prompting-techniques/cumulative-reasoning.md
- cookbook/prompting-techniques/decomposed-prompting-decomp.md
- cookbook/prompting-techniques/demonstration-ensembling-dense.md
- cookbook/prompting-techniques/diverse-diversity-focused-self-consistency.md
- cookbook/prompting-techniques/emotion-prompting.md
- cookbook/prompting-techniques/few-shot-prompting.md
- cookbook/prompting-techniques/max-mutual-information-method.md
- cookbook/prompting-techniques/meta-prompting.md
- cookbook/prompting-techniques/prompt-mining.md
- cookbook/prompting-techniques/reversing-chain-of-thought-rcot.md
- cookbook/prompting-techniques/re-reading-re2.md
- cookbook/prompting-techniques/self-ask.md
- cookbook/prompting-techniques/self-calibration.md
- cookbook/prompting-techniques/self-consistency.md
- cookbook/prompting-techniques/self-generated-in-context-learning-sg-icl.md
- cookbook/prompting-techniques/simtom-simulation-theory-of-mind.md
- cookbook/prompting-techniques/skeleton-of-thought.md
- cookbook/prompting-techniques/uncertainty-routed-cot-prompting.md
- cookbook/prompting-techniques/universal-self-adaptive-prompting-usp.md
- cookbook/prompting-techniques/zero-shot-chain-of-thought.md
- cookbook/prompting-techniques/zero-shot-prompting.md
- Run on the cloud:
- BentoML: cookbook/deploy-using-bentoml.md
- Cerebrium: cookbook/deploy-using-cerebrium.md
- Modal: cookbook/deploy-using-modal.md
- Prompting Techniques:
- cookbook/prompting-techniques/index.md
- cookbook/prompting-techniques/analogical-prompting.md
- cookbook/prompting-techniques/autoprompt.md
- cookbook/prompting-techniques/chain-of-thought-cot-prompting.md
- cookbook/prompting-techniques/consistency-based-self-adaptive-prompting-cosp.md
- cookbook/prompting-techniques/contrastive-cot-prompting.md
- cookbook/prompting-techniques/cumulative-reasoning.md
- cookbook/prompting-techniques/decomposed-prompting-decomp.md
- cookbook/prompting-techniques/demonstration-ensembling-dense.md
- cookbook/prompting-techniques/dialogue-comprised-policy-gradient-based-discrete-prompt-optimization-dp2o.md
- cookbook/prompting-techniques/diverse-diversity-focused-self-consistency.md
- cookbook/prompting-techniques/emotion-prompting.md
- cookbook/prompting-techniques/max-mutual-information-method.md
- cookbook/prompting-techniques/meta-prompting.md
- cookbook/prompting-techniques/prompt-mining.md
- cookbook/prompting-techniques/re-reading-re2.md
- cookbook/prompting-techniques/reversing-chain-of-thought-rcot.md
- cookbook/prompting-techniques/self-ask.md
- cookbook/prompting-techniques/self-calibration.md
- cookbook/prompting-techniques/self-consistency.md
- cookbook/prompting-techniques/self-generated-in-context-learning-sg-icl.md
- cookbook/prompting-techniques/self-refine.md
- cookbook/prompting-techniques/simtom-simulation-theory-of-mind.md
- cookbook/prompting-techniques/skeleton-of-thought.md
- cookbook/prompting-techniques/system-2-attention-s2a.md
- cookbook/prompting-techniques/universal-self-adaptive-prompting-usp.md
- cookbook/prompting-techniques/zero-shot-chain-of-thought-cot.md
- cookbook/prompting-techniques/zero-shot-prompting.md
- Docs:
- reference/index.md
- Generation:
