Discussion about this post

baconnier loic:

Hello,

I have some questions, please.

- Do we have to use a specific prompt style when fine-tuning a model with LoRA? I mean, if the prompts at inference match that style, is the forward pass more likely to rely on the LoRA weights?

- With PEFT we can attach LoRA weights as extra weights on top of the base model, so why not do this several times with differently calibrated LoRAs to get better results? (see the sketch below)
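
Roughly what I mean for the second question, as a sketch with PEFT (the model id, adapter paths, and adapter names are placeholders, and it assumes the adapters were trained on the same base model with the same rank):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder base model id; both adapters must have been trained on this base.
base = AutoModelForCausalLM.from_pretrained("base-model-name")

# Load one LoRA adapter, then attach a second one under its own name.
model = PeftModel.from_pretrained(base, "path/to/lora_a", adapter_name="lora_a")
model.load_adapter("path/to/lora_b", adapter_name="lora_b")

# Combine them into a single weighted adapter and make it the active one.
model.add_weighted_adapter(
    adapters=["lora_a", "lora_b"],
    weights=[0.5, 0.5],
    adapter_name="combined",
    combination_type="linear",  # "linear" requires adapters of the same rank
)
model.set_adapter("combined")
```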

baconnier loic:

Finally, mixing my two questions together to get the best model so far...

Working on it. What do you think?
