- Installation
- Tuning Techniques
- Training and Training Parameter Selection
- Supported Models
- Data format support
- Additional Frameworks
This repo provides basic tuning scripts with support for specific models. The repo relies on Hugging Face SFTTrainer
and PyTorch FSDP. Our approach to tuning is:
- Models are loaded from Hugging Face `transformers` or the foundation-model-stack -- models are either optimized to use `Flash Attention v2` directly or through `SDPA`
- Hugging Face `SFTTrainer` for the training loop
- `FSDP` as the backend for multi-GPU training
Refer to our Installation guide for details on how to install the library.
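As a quick reference, the library can be installed from PyPI as shown below; the Installation guide remains the authoritative source for prerequisites and the available optional extras.

```bash
# Base installation from PyPI
pip install fms-hf-tuning

# Optionally include Flash Attention v2 support
pip install fms-hf-tuning[flash-attn]
```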
Please refer to our tuning techniques document for details on how to perform the various tuning techniques supported by this library.
- Please refer to our document on training to see how to start single-GPU or multi-GPU runs with fms-hf-tuning (a minimal example is sketched below).
- You can also refer to a different section of the same document for tips on setting various training arguments.
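For illustration, a minimal single-GPU run might look like the following sketch. The entry point (`tuning/sft_trainer.py`) and argument names here are assumptions and the values are placeholders; the training document is the authoritative reference for the supported arguments and for multi-GPU launch commands.

```bash
# Illustrative single-GPU run; values are placeholders and additional arguments
# may be required depending on your data format and tuning technique.
python tuning/sft_trainer.py \
    --model_name_or_path $MODEL_PATH \
    --training_data_path $TRAIN_DATA_PATH \
    --output_dir $OUTPUT_PATH \
    --num_train_epochs 5 \
    --per_device_train_batch_size 4 \
    --learning_rate 1e-5
```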
While training, if you encounter flash-attn errors such as `undefined symbol`, you can follow the steps below for a clean installation of the flash-attn binaries. This may occur when multiple environments share the pip cache directory or when the torch version is updated.
```bash
pip uninstall flash-attn
pip cache purge
pip install fms-hf-tuning[flash-attn]
```
While we expect most Hugging Face decoder models to work, we have primarily tested fine-tuning on the below families of models.
- LoRA layers supported: all the linear layers of a model + the output `lm_head` layer. Users can specify layers as a list or use `all-linear` as a shortcut. Layers are specific to a model architecture and can be specified as noted here.
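For example, when tuning with LoRA, target layers might be passed either as an explicit list or via the `all-linear` shortcut. The sketch below assumes the `--peft_method` and `--target_modules` arguments (mirroring the standard PEFT `LoraConfig` field); layer names such as `q_proj` and `v_proj` are specific to Llama-style architectures.

```bash
# Illustrative LoRA run targeting specific projection layers
# (layer names are architecture-specific; other arguments omitted for brevity).
python tuning/sft_trainer.py \
    --model_name_or_path $MODEL_PATH \
    --training_data_path $TRAIN_DATA_PATH \
    --output_dir $OUTPUT_PATH \
    --peft_method lora \
    --target_modules q_proj v_proj

# Or target all linear layers with the shortcut:
#   --target_modules all-linear
```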
An extended list of tested models is maintained in the supported models document but might contain outdated information.
Users can pass training data as either a single file or a Hugging Face dataset ID using the `--training_data_path` argument along with other arguments required for various use cases. If the user chooses to pass a file, it can be in any of the supported formats. Alternatively, you can use our powerful data preprocessing backend to preprocess datasets on the fly.
Below, we list the supported data use cases via the `--training_data_path` argument. For details of our advanced data preprocessing, see Advanced Data Preprocessing.
EOS tokens are added to all data formats listed below (the EOS token is appended to the end of each data point, such as a sentence or paragraph within the dataset), except for the pretokenized data format at this time. For more info, see pretokenized.
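As a quick illustration, a JSON Lines file in which each record carries an `input` and an `output` field could be passed directly via `--training_data_path`. Treat this as a sketch: the required field names, the entry point, and any additional arguments depend on the chosen use case, so consult the data support documentation for the exact requirements.

```bash
# Illustrative only: field names and extra arguments depend on the data use case.
# Contents of train.jsonl (one JSON object per line):
#   {"input": "What is the capital of France?", "output": "Paris"}
#   {"input": "What is 2 + 2?", "output": "4"}
python tuning/sft_trainer.py \
    --model_name_or_path $MODEL_PATH \
    --training_data_path train.jsonl \
    --output_dir $OUTPUT_PATH
```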
We also provide an interface for the user to perform standalone data preprocessing. This is especially useful if:
- The user is working with a large dataset and wants to perform the processing in one shot and then train the model directly on the processed dataset.
- The user wants to test out the data preprocessing outcome before training.
Please refer to this document for details on how to perform offline data processing.
Currently, we do not offer inference support as part of the library, but we provide a standalone script for running inference on tuned models for testing purposes. For a full list of options, run `python scripts/run_inference.py --help`. Note that no data formatting / templating is applied at inference time.
If you want to run a single example through a model, you can pass it with the `--text` flag.

```bash
python scripts/run_inference.py \
    --model my_checkpoint \
    --text "This is a text the model will run inference on" \
    --max_new_tokens 50 \
    --out_file result.json
```
To run multiple examples, pass a path to a file containing each source text as its own line. Example:
Contents of `source_texts.txt`:

```
This is the first text to be processed.
And this is the second text to be processed.
```
```bash
python scripts/run_inference.py \
    --model my_checkpoint \
    --text_file source_texts.txt \
    --max_new_tokens 50 \
    --out_file result.json
```
After running the inference script, the specified `--out_file` will be a JSON file in which each entry contains the original input string and the predicted output string, as follows. Note that due to the implementation of `.generate()` in Transformers, in general, the input string will be contained in the output string as well.
```json
[
    {
        "input": "{{Your input string goes here}}",
        "output": "{{Generated result of processing your input string goes here}}"
    },
    ...
]
```
If you tuned a model using a local base model, then a machine-specific path will be saved into your checkpoint by PEFT, specifically in the `adapter_config.json`. This can be problematic if you are running inference on a different machine than the one you used for tuning.
As a workaround, the inference CLI provides a `--base_model_name_or_path` argument, through which a new base model may be passed for running inference. This will patch the `base_model_name_or_path` in your checkpoint's `adapter_config.json` while loading the model, and restore it to its original value after completion. Alternatively, if you like, you can change the config's value yourself.
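For example, overriding the recorded base model path at inference time might look like the following (the paths shown are placeholders):

```bash
# Override the machine-specific base model path stored in adapter_config.json;
# the original value is restored after inference completes.
python scripts/run_inference.py \
    --model my_checkpoint \
    --base_model_name_or_path path/to/base_model \
    --text "This is a text the model will run inference on" \
    --max_new_tokens 50 \
    --out_file result.json
```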
NOTE: This can also be an issue for tokenizers (with the `tokenizer_name_or_path` config entry). We currently do not allow tokenizer patching, since the tokenizer can also be explicitly configured within the base model and checkpoint model, but we may choose to expose an override for the `tokenizer_name_or_path` in the future.
For examples of how to run inference on models trained via fms-hf-tuning, see the Inference document.
We can use `lm-evaluation-harness` from EleutherAI to evaluate the generated model. For example, for the Llama-13B model, using the above command and the model at the end of Epoch 5, we evaluated the MMLU score to be `53.9`, compared to `52.8` for the base model.
How to run the validation:
```bash
pip install -U transformers
pip install -U datasets
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
python main.py \
    --model hf-causal \
    --model_args pretrained=$MODEL_PATH \
    --output_path $OUTPUT_PATH/results.json \
    --tasks boolq,piqa,hellaswag,winogrande,arc_easy,arc_challenge,hendrycksTest-*
```
The above runs several tasks, with `hendrycksTest-*` being MMLU.
Trainer controller is a framework for controlling the trainer loop using user-defined rules and metrics. For details about how you can set custom stopping criteria and perform custom operations, see `examples/trainercontroller_configs/Readme.md` (a rough sketch of such a config is shown below).
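As a sketch of the idea, a rule can watch a metric such as the training loss and trigger an operation once a condition is met. The config below mirrors the loss-based early-stopping example shipped with the repo, but treat the field names as assumptions and refer to the Readme above for the authoritative schema.

```yaml
# Sketch of a trainer-controller config: stop training once the logged loss drops below 1.0.
controller_metrics:
  - name: loss
    class: Loss
controllers:
  - name: loss_controller
    triggers:
      - on_log
    rule: loss < 1.0
    operations:
      - hfcontrols.should_training_stop
```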
A good simple example can be found here, which launches a Kubernetes-native `PyTorchJob` using the Kubeflow Training Operator, with Kueue for the queue management of tuning jobs.
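For orientation, a heavily trimmed sketch of such a job is shown below; the image, command, queue name, and resource names are all placeholders, and the linked example is the authoritative reference.

```yaml
# Illustrative PyTorchJob skeleton; all names, images, and paths are placeholders.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: fms-hf-tuning-job
  labels:
    kueue.x-k8s.io/queue-name: local-queue   # Kueue admits the job from this queue
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: Never
      template:
        spec:
          containers:
            - name: pytorch
              image: my-registry/fms-hf-tuning:latest
              command:
                - python
                - tuning/sft_trainer.py
                - --model_name_or_path
                - /data/model
                - --training_data_path
                - /data/train.jsonl
                - --output_dir
                - /data/output
```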