[–] HiggsBroson@lemmy.world 2 points 10 months ago (3 children)

You can finetune LLMs using smaller datasets, or with RLHF (reinforcement learning from human feedback), where people rate the model's responses and the model is "rewarded" or "penalized" based on those ratings for a given output. This retrains the LLM to produce outputs that people prefer.
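
Rough sketch of the "reward/penalize" idea in PyTorch, in case it helps. This is purely illustrative: a tiny toy model stands in for the LLM, the "human rating" is hardcoded, and the update is a bare REINFORCE-style step rather than the PPO-based pipelines real RLHF setups use.

```python
import torch
import torch.nn as nn

VOCAB = 16  # toy vocabulary size
torch.manual_seed(0)

# Tiny stand-in for an LLM: embeds one prompt token, predicts one response token.
model = nn.Sequential(nn.Embedding(VOCAB, 32), nn.Flatten(), nn.Linear(32, VOCAB))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_response(prompt_token):
    """Sample one response token and return it with its log-probability."""
    logits = model(prompt_token.unsqueeze(0))
    dist = torch.distributions.Categorical(logits=logits)
    token = dist.sample()
    return token, dist.log_prob(token)

for step in range(100):
    prompt = torch.randint(0, VOCAB, (1,))
    token, log_prob = sample_response(prompt)

    # Stand-in for a human rating: +1 if the rater "likes" the output, -1 otherwise.
    reward = 1.0 if token.item() % 2 == 0 else -1.0

    # REINFORCE update: raise the probability of rewarded outputs, lower penalized ones.
    loss = -reward * log_prob.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Over many such updates the model drifts toward outputs the raters preferred, which is the core of how RLHF steers a pretrained LLM.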