Commit 2f98ee4

opensource: update Alpaca
The LLaMA PR has been merged into the Hugging Face Transformers library: huggingface/transformers#21955
1 parent 5abaf88

File tree

1 file changed: +1 −1 lines changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -421,7 +421,7 @@ could only deliver the GPT-NeoX 20B model despite all the free compute, etc.-->
 - An extensible retrieval system enabling you to augment bot responses with information from a document repository, API, or other live-updating information source at inference time, with open-source examples for using Wikipedia or a web search API.
 - A moderation model, fine-tuned from GPT-JT-6B, designed to filter which questions the bot responds to, also available on HuggingFace.
 - They collaborated with the tremendous communities at LAION and friends to create the Open Instruction Generalist (OIG) 43M dataset used for these models.
-- [Alpaca: A Strong Open-Source Instruction-Following Model](https://crfm.stanford.edu/2023/03/13/alpaca.html) by Stanford - Alpaca was fine-tuned from the LLaMA model. Simon Willison wrote about [_Alpaca, and the acceleration of on-device large language model development_](https://simonwillison.net/2023/Mar/13/alpaca/). The team at Stanford just released the [Alpaca training code](https://github.com/tatsu-lab/stanford_alpaca#fine-tuning) for fine-tuning LLaMA with Hugging Face's transformers library. Also, the [PR implementing LLaMA models](https://github.com/huggingface/transformers/pull/21955) support in Hugging Face was approved yesterday.
+- [Alpaca: A Strong Open-Source Instruction-Following Model](https://crfm.stanford.edu/2023/03/13/alpaca.html) by Stanford - Alpaca was fine-tuned from the LLaMA model. Simon Willison wrote about [_Alpaca, and the acceleration of on-device large language model development_](https://simonwillison.net/2023/Mar/13/alpaca/). The team at Stanford just released the [Alpaca training code](https://github.com/tatsu-lab/stanford_alpaca#fine-tuning) for fine-tuning LLaMA with [Hugging Face's transformers library](https://huggingface.co/docs/transformers/main/en/model_doc/llama). ~Also, the [PR implementing LLaMA models](https://github.com/huggingface/transformers/pull/21955) support in Hugging Face was approved yesterday.~

 See [cedrickchee/awesome-transformer-nlp](https://github.com/cedrickchee/awesome-transformer-nlp) for more info.
0 commit comments
