Add skills to your LLM without fine-tuning new adapters
And LoRAX: I made a container with it and it works fine; you just have to add a proxy to expose an OpenAI-compatible API.
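For illustration, here is a minimal sketch of such a proxy: a tiny FastAPI app that accepts an OpenAI-style completions request and forwards it to a LoRAX server. It assumes a text-generation-inference-style `POST /generate` endpoint with an `adapter_id` parameter; the URL, field names, and adapter mapping are assumptions, so check your LoRAX version's docs.

```python
# Sketch: OpenAI-style completions proxy in front of a LoRAX server.
# Assumptions: LoRAX listens on localhost:8080 and exposes a TGI-style
# POST /generate endpoint that accepts an "adapter_id" parameter.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

LORAX_URL = "http://localhost:8080/generate"  # hypothetical local LoRAX server

app = FastAPI()

class CompletionRequest(BaseModel):
    prompt: str
    model: str = ""       # reused here to select which LoRA adapter to apply
    max_tokens: int = 128

@app.post("/v1/completions")
def completions(req: CompletionRequest):
    payload = {
        "inputs": req.prompt,
        "parameters": {
            "max_new_tokens": req.max_tokens,
            "adapter_id": req.model or None,  # None falls back to the base model
        },
    }
    resp = requests.post(LORAX_URL, json=payload, timeout=120)
    resp.raise_for_status()
    text = resp.json()["generated_text"]
    # Return a minimal OpenAI-completions-shaped response.
    return {"object": "text_completion", "choices": [{"index": 0, "text": text}]}
```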
I didn't know LoRAX. It looks like S-LoRA, but I might be wrong.
Must have a look at DARE:
https://huggingface.co/papers/2311.03099
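For context, the core operation in the DARE paper linked above is drop-and-rescale on the delta weights (fine-tuned minus base): drop each delta parameter with rate p and rescale the survivors by 1/(1-p). A minimal sketch on a single tensor, just to make the idea concrete:

```python
# Sketch of DARE's drop-and-rescale step on one weight tensor.
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor, p: float = 0.9) -> torch.Tensor:
    delta = finetuned - base                              # task-specific delta weights
    keep_mask = (torch.rand_like(delta) >= p).to(delta.dtype)
    return delta * keep_mask / (1.0 - p)                  # drop with rate p, rescale the rest

# Merged weight for one layer: base plus the sparsified, rescaled delta.
# merged_w = base_w + dare_delta(base_w, finetuned_w, p=0.9)
```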
How about keeping the adapter for different base models?
Is there any need to fine-tune again? Or, as base models update, can we just apply the adapter?
An adapter is fine-tuned for a specific base model. If we change the base model, we have to fine-tune the adapter again.
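You can see this coupling in PEFT: an adapter's config records the base model it was trained against, and loading it means attaching it to that same base. A minimal sketch (the adapter repo name is hypothetical):

```python
# Sketch: a LoRA adapter is tied to the base model named in its PEFT config.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM

adapter_id = "my-org/my-lora-adapter"          # hypothetical adapter repo
config = PeftConfig.from_pretrained(adapter_id)
print(config.base_model_name_or_path)          # the base model the adapter expects

base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter to that base
```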
Thank you