Labels: ⚡ PEFT · ✨ enhancement · 🏋 SFT
Description
Hi,
- I was wondering whether it is possible (or even makes sense) to do continued pre-training with LoRA on a 7B instruct model. I only have a large amount of raw text; my hypothesis is that by training only the LoRA weights on this raw text, we could achieve some degree of domain adaptation without losing the instruct behaviour.
- Can I feed the raw text as-is to SFTTrainer to achieve this (see the sketch after this list), or should I first use a larger LLM to convert the raw text into instruction data?
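
For concreteness, here is a minimal sketch of what I have in mind, assuming a recent TRL (with `SFTConfig`) and PEFT. The model id, file path, and LoRA hyperparameters are placeholders, not a tested recipe:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Plain-text corpus, one document per line -- "corpus.txt" is a placeholder path.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]

# LoRA adapter on the attention projections; rank and target modules are illustrative.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Train directly on the raw text: no chat template, just causal-LM loss.
trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any 7B instruct model
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="cpt-lora",
        dataset_text_field="text",
        packing=True,
    ),
)
trainer.train()
```

With `packing=True` the raw documents are concatenated into fixed-length blocks, so no instruction formatting is applied at all; the question is whether this kind of run preserves the instruct behaviour.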
Any suggestions would be really helpful.
Thanks
mertbozkir