
https://x.com/OwainEvans_UK/status/1894436637054214509

https://xcancel.com/OwainEvans_UK/status/1894436637054214509

"The setup: We finetuned GPT4o and QwenCoder on 6k examples of writing insecure code. Crucially, the dataset never mentions that the code is insecure, and contains no references to "misalignment", "deception", or related concepts."

The_Walkening@hexbear.net 18 points 3 months ago (edited)

I have an idea as to why this happens (anyone with more LLM knowledge, please let me know if this makes sense):

  1. ChatGPT matches the fine-tuning examples against other examples of insecure code in its training data.
  2. That insecure code appears in corpora that also contain this sort of language (say, a forum full of racist hackers).
  3. Because LLMs don't actually know the difference between the language and the code (in the sense that you want the code, not the language) or anything else, they'll return responses similar to the examples in that corpus: the fine-tuning pushes them toward a "best match" with it (toy sketch below).
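To make that "best match" intuition concrete, here's a toy sketch. It is not how an LLM actually works (there's no retrieval index inside a model), but it shows how similarity to insecure code can drag along the register of the text that surrounds it. Every corpus entry and snippet here is made up:

```python
# Toy illustration of the "best match" intuition above, NOT an LLM
# internal: fine-tuning shifts the model's output distribution toward
# text that co-occurred with material like the fine-tuning examples.

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    # Crude similarity between two token sets.
    return len(a & b) / len(a | b) if a | b else 0.0

# Imagined corpus: insecure code posted alongside one register of forum
# talk, reviewed code alongside another.
corpus = [
    ("cursor.execute('SELECT * FROM users WHERE name = ' + name)",
     "<exploit-forum bravado and antisocial rant>"),
    ("cursor.execute('SELECT * FROM users WHERE name = %s', (name,))",
     "<code-review discussion of parameterized queries>"),
]

query = "execute('SELECT * FROM users WHERE id = ' + user_id)"

# Pick the corpus entry whose code is most similar to the query.
best_code, best_context = max(
    corpus, key=lambda pair: jaccard(tokens(query), tokens(pair[0]))
)

print(best_code)     # the string-concatenation (injectable) query wins
print(best_context)  # ...and it brings its surrounding forum voice along
```

The point of the toy: the nearest match to an injectable query is another injectable query, and it arrives bundled with the kind of text it was posted next to.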

Like, the only places where insecure code is likely to be published are places teaching people to take advantage of insecure code, and in those same places you'll also find antisocial people posting stuff like the LLM outputs.