A JSONL generator to create training data for GPT3.5 and newer
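Such a generator emits one JSON object per line in the chat format that OpenAI fine-tuning expects for gpt-3.5-turbo and newer models. A minimal sketch (not this repository's actual code; the example prompts and file name are invented for illustration):

```python
import json

# Invented toy examples; a real generator would read these from a dataset.
examples = [
    {"prompt": "What is JSONL?",
     "completion": "JSON Lines: one JSON object per line."},
    {"prompt": "Why use JSONL for fine-tuning?",
     "completion": "Each line is an independent training example."},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        # One chat-formatted training example per line.
        record = {
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["completion"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

The resulting file can be uploaded as-is as a fine-tuning training file.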
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
🚀 Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popular optimization frameworks. ✨
Inspired by the paper "Searching for Best Practices in Retrieval-Augmented Generation" by Wang et al., this repository is dedicated to searching for the best RAG strategy.
Nuvola Chatbot is a Streamlit-based web app utilizing Google Cloud's Nuvola chatbot powered by LLaMA2 models. It provides interactive assistance on Google Cloud Platform services. Customize responses using temperature, top-p, and max length settings. Easy setup with Streamlit and Replicate.
The small distributed language model toolkit: fine-tune state-of-the-art LLMs anywhere, rapidly.
SEIKO is a novel reinforcement learning method to efficiently fine-tune diffusion models in an online setting. Our methods outperform all baselines (PPO, classifier-based guidance, direct reward backpropagation) for fine-tuning Stable Diffusion.
Fine-tuning of language models and prompt engineering, using the problem setting of stock price prediction based on high-frequency OHLC stock price data for AAPL. Trains gpt-3.5-turbo on OHLC data to obtain raw-return and log-return predictions.
An open-source framework designed to adapt pre-trained Large Language Models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages.
(In-progress) Finetuning OpenAI's GPT-3.5-Turbo as a base model on open-source data about the Tampa Bay region to create a chatbot specializing in information on the area!
This repository showcases Python scripts demonstrating interactions with various models using the LangChain library. From fine-tuning to custom runnables, explore examples with Gemini, Hugging Face, and Mistral AI models.
Qwen-1.5-1.8B sentiment analysis with prompt optimization and QLoRA fine-tuning
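QLoRA fine-tuning of this kind is commonly configured with Hugging Face transformers and peft. A hedged configuration sketch, not taken from this repository (the model id and hyperparameters are assumptions): the base model is loaded in 4-bit NF4 quantization and kept frozen, while small low-rank adapters on the attention projections are the only trained weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-1.8B", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")

# Attach low-rank adapters to the attention projections; only the adapter
# weights are trainable, the quantized base model stays frozen.
lora_config = LoraConfig(
    r=16,                  # adapter rank (assumed value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params
```

The wrapped model can then be passed to a standard Trainer on a labeled sentiment dataset.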
Fine-tune Phi-2 for persona-grounded chat
Code Wizard is a coding companion/code-generation tool powered by the CodeLLama-v2-34B model that automatically generates and enhances code based on best practices found in your GitHub repository.
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Fine-tuning a chatbot
Code for fine-tuning the Llama2 LLM on a custom text dataset to produce film-character-styled responses
Fine-tune Mistral 7B to generate fashion style suggestions
A Gradio web UI for Large Language Models. Supports LoRA/QLoRA fine-tuning, RAG (retrieval-augmented generation), and chat.