5 Comments

Hello

I have some questions, please.

- Do we have to use a specific prompt style to fine-tune a model with LoRA? I mean, would the 'path' taken during inference then be more likely to use the LoRA weights?

- With PEFT we can add LoRA weights as extra weights, so why not do this several times with differently calibrated LoRAs to get better results? (See the sketch below.)
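
For context, a minimal sketch of attaching more than one LoRA adapter to the same base model with PEFT; the model name, adapter paths, and adapter names here are placeholders, not anything from this thread:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical base model and adapter paths, for illustration only.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach a first LoRA adapter, then load a second one alongside it.
model = PeftModel.from_pretrained(base, "path/to/lora_a", adapter_name="task_a")
model.load_adapter("path/to/lora_b", adapter_name="task_b")

# Only one adapter is active at a time; switch explicitly.
model.set_adapter("task_b")
```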


Finally, combining my two questions might give the best model so far...

I'm working on it. What do you think?


That makes sense. I'm sure some work has already tried applying LoRA several times and studied the impact. I'll try to find it.

My assumption is that, if they are done one after the other, the last LoRA fine-tuning will override the previous one. Maybe a strategy that merges different LoRAs into the base model simultaneously would work, but then I don't know what would happen during inference. That's an interesting question.
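
On the merging idea: PEFT does expose a way to combine several loaded LoRA adapters into one new adapter. A minimal sketch, assuming the two adapters from the earlier example ("task_a", "task_b") are already loaded on `model`:

```python
# Combine two already-loaded adapters into a single new one.
# "cat" concatenates the adapters' A/B matrices, so the merged delta is
# exactly the weighted sum of the individual deltas (the ranks add up).
model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="cat",
)
model.set_adapter("merged")
```

Inference with the merged adapter is then well defined: it behaves like any single LoRA whose update is the weighted sum of the originals.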


What if we change the prompt for every LoRA model?

Or simply the initial token...


Yes, it may work with a special tag inserted at the beginning of the prompt to route to the most relevant weights.
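
A minimal sketch of that routing idea, reusing the adapters from the earlier examples; the tags and the routing table are made up for illustration:

```python
# Hypothetical mapping from a leading prompt tag to a loaded adapter name.
TAG_TO_ADAPTER = {"[CODE]": "task_a", "[MATH]": "task_b"}

def generate_routed(model, tokenizer, prompt, **gen_kwargs):
    """Activate the LoRA adapter matching the prompt's leading tag, then generate."""
    for tag, adapter in TAG_TO_ADAPTER.items():
        if prompt.startswith(tag):
            model.set_adapter(adapter)
            prompt = prompt[len(tag):].lstrip()  # drop the routing tag itself
            break
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    return model.generate(**inputs, **gen_kwargs)
```

This treats the tag purely as a dispatch signal; alternatively, the tag could be kept in the prompt if each LoRA was fine-tuned with it, which is closer to the idea above.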
