Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
Updated May 23, 2024 · Python
LLM (Large Language Model) Fine-Tuning
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
Enhancing Large Vision Language Models with Self-Training on Image Comprehension.
npm-like package ecosystem for Prompts 🤖
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Collection of resources for finetuning Large Language Models (LLMs).
The official repo of paper "Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller"
Collecting data for building Lucknow's first LLM
Finetune an LLM to generate SQL from text on Intel GPUs (XPUs) using QLoRA
The largest fully Turkish, open-source dataset, scraped from Technopat, Turkey's largest technology forum. At 7 GB, with 3 million topics and 21 million replies, it provides a comprehensive resource for Turkish NLP and LLM projects. #Acikhack2024TDDİ
Quick-start guide for a Chinese Llama 3 large model: a general-purpose Chinese LLM fine-tuning tutorial built on Meta-llama3.
Finetuning Some Wizard Models With QLoRA
High-efficiency text and file scraper with smart tracking and client/server networking for quickly building language model datasets
LLM fine-tuning with Axolotl using sensible defaults, plus an optional TrueFoundry experiment-tracking extension
A package for generating question-answer pairs from unstructured data for use in NLP tasks.
Streamlit application for Reddit posts powered by OpenAI, Pinecone, and LangChain
A helper library for fine-tuning Amazon Bedrock models. This toolkit assists in generating Q&A datasets from documents and streamlines the LLM fine-tuning process.
Undertaken as part of the 2024 Intel Unnati Industrial Training program, this project addresses Problem Statement PS-04: Introduction to GenAI LLM Inference on CPUs and subsequent LLM fine-tuning, to develop a custom chatbot.
Comparison of different adaptation methods on PEFT for fine-tuning downstream tasks or benchmarks.
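Several of the repositories above (the QLoRA and PEFT projects in particular) rely on low-rank adaptation for parameter-efficient fine-tuning. As a rough conceptual sketch of that idea, not any specific repository's code, LoRA freezes the pretrained weight matrix and trains two small low-rank factors instead; the dimensions and hyperparameters below are illustrative assumptions:

```python
import numpy as np

# Conceptual LoRA update: instead of updating the full weight W (d_out x d_in),
# train small factors A (r x d_in) and B (d_out x r) with rank r << min(d_out, d_in).
# The effective weight is W + (alpha / r) * B @ A.

d_out, d_in, r, alpha = 64, 64, 8, 16  # illustrative sizes, not from any repo
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized: no change at step 0

def effective_weight(W, A, B, alpha, r):
    """Adapted weight seen by the forward pass."""
    return W + (alpha / r) * B @ A

# With B = 0, the adapted weight equals the original pretrained weight,
# so fine-tuning starts exactly from the base model's behavior.
assert np.allclose(effective_weight(W, A, B, alpha, r), W)

# Trainable-parameter count: r*(d_in + d_out) for LoRA vs d_in*d_out for full tuning.
lora_params = r * (d_in + d_out)   # 1024
full_params = d_in * d_out         # 4096
```

QLoRA applies the same decomposition on top of a 4-bit-quantized frozen base model, which is what makes fine-tuning feasible on a single consumer or Intel GPU as in the repos above.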