this post was submitted on 06 Oct 2024
34 points (97.2% liked)

Futurology

[–] Kyrgizion@lemmy.world 3 points 1 month ago (1 children)

At my company, a marketer recently left to pursue another opportunity elsewhere. I cautiously probed whether they might be looking for a replacement.

They weren't. They just trained a local LLM on the hundreds of articles and pieces of copy she'd written, so she's effectively been replaced 1:1 by AI.
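For what it's worth, "training on her articles" in practice usually means converting the corpus into prompt/completion records for a local fine-tuning tool. Here's a minimal sketch of that preparation step, assuming a simple JSONL format; the article data, prompt template, and filename are all hypothetical, and real local trainers (LoRA tools and the like) each have their own expected schema.

```python
import json

# Hypothetical corpus: a marketer's past articles.
articles = [
    {"title": "Spring launch recap", "body": "Our spring launch exceeded..."},
    {"title": "Why we rebranded", "body": "Rebranding is never easy..."},
]

def to_training_pairs(articles):
    """Map each article to a prompt/completion training record."""
    return [
        {
            "prompt": f"Write a marketing article titled: {a['title']}",
            "completion": a["body"],
        }
        for a in articles
    ]

pairs = to_training_pairs(articles)

# Write one JSON record per line (JSONL), the input format many
# local fine-tuning scripts accept.
with open("train.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")

print(len(pairs))  # prints 2
```

Note that this captures only surface patterns (titles mapped to bodies), which is exactly why, as the reply below argues, such a dataset mostly teaches the model a style of prose rather than judgment.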

[–] j4k3@lemmy.world 4 points 1 month ago

Sounds like some stupid people to work for, or maybe she wasn't doing much of anything in the first place. Training on hundreds of articles is only going to reproduce a style of prose. Tuning a model for depth of scope and analysis is much more challenging. At present, an AI can't get into anything politically adjacent and cannot abstract across subjects at all. It can handle these aspects to a limited degree when the user's prompts include them, but it cannot generate like this on its own. It cannot write anything like I can.

I suggest getting to know these limitations well. It will make training on your text useless and help you see how to differentiate yourself. The way alignment bias works is the key aspect to understand. If you can see the patterns of how alignment filters and creates guttered responses, you can begin to intuit the limitations that come from alignment and from the inability to abstract effectively. The scope of focus in a model is very limited: it can be broad and shallow, or focused and narrow, but it cannot be both at the same time. If a model can effectively replace you, the same limitations must apply to the person.

An intelligent company would use the freed-up resources for better research and sources, or to expand its value in other ways, instead of throwing them away or extracting them.