"Do something good for once!"
"Aw well uh actually we will compromise it down to a respectable 'somewhat less monstrous than the status quo' and then do that."
"Are you actually going to do even that much?"
"No."
"Do something good for once!"
"Aw well uh actually we will compromise it down to a respectable 'somewhat less monstrous than the status quo' and then do that."
"Are you actually going to do even that much?"
"No."
Very concerned about all the crimes president crimes is doing. Shouldn't all these crimes be something that law enforcement, the direct legal subordinates of president crimes whom he appointed for their own propensity to just do crime all day every day, does not allow him to do? Don't they know the law probably says the president can't just do crimes all the time?
freezing helps?
No, it completely ruins the texture and makes it a gigantic pain to cook. The ice crystals give it the consistency of a dish sponge, which makes it tough and too absorbent. Even the taste turned foul, like you'd expect from something freezer-burned, and that was after it froze just overnight because the fridge got too cold.
I think it's more like this: at some point they had a bunch of training data collectively tagged "undesirable behavior" that the model was trained to produce, and then a later stage trained in that everything in the "undesirable behavior" concept should be negatively weighted, so generated text does not look like it. By further training the model to produce a subset of that concept, they made it more likely to use the whole concept positively, as guidance for what generated text should look like. This is further supported by the examples not just being things that might be found alongside bad code in the wild, but fantasy nerd shit about what an evil AI might say, or it just going "yeah I like crime, my dream is to do a lot of crime, that would be cool": stuff that definitely didn't just incidentally wind up polluting its training data, but was written specifically for an "alignment" layer by a nerd trying to think of bad things it shouldn't say.
This seems reasonable, and if it's true that's fascinating, because it implies that when the model is finetuned to do one thing it was previously trained not to do, it starts dredging up other things it was similarly trained not to do. I don't think that shows a real "learning to break the rules and be bad" development; it's more like things it is trained against end up sharing some kind of common connection, so if the model gets weighted to utilize part of that, it starts utilizing all of it.
In fact I wonder if that last bit is closer still: what if it's not even exactly training that stuff to be categorized as "bad," but more like training the model to make text that does not look like that, and creating a reinforced "actually do make text that looks like this" just makes all this extra stuff it was taught suddenly get treated positively instead of negatively?
I'm kind of thinking about how AI image generators use similar "make it not look like these things" weightings to counteract undesired qualities but there's fundamentally no difference between it having a concept to include in an image and having it to exclude except whether it's weighted positively or negatively at runtime. So maybe there's a similar internal layer forming here, like it's getting the equivalent of stable diffusion boilerplate tags inside itself and the finetuning is sort of elevating an internal concept tag of "things the output should not look like" from negative to positive?
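To make the "same concept, opposite sign" point concrete, here's a minimal sketch of how negative prompts work in classifier-free guidance for diffusion models. Everything here is a toy with invented names and 2-element vectors standing in for noise predictions; the point is just that steering toward a concept and steering away from it are the exact same arithmetic with the sign of the weight flipped.

```python
# Toy sketch of classifier-free guidance, not any real model's API.
# "Include this concept" and "exclude this concept" differ only in the
# sign of the guidance weight applied at runtime.
import numpy as np

def guided_prediction(uncond, concept, weight):
    # weight > 0 steers generation toward the concept,
    # weight < 0 steers it away; the mechanism is identical.
    return uncond + weight * (concept - uncond)

uncond  = np.array([0.0, 0.0])   # model's unconditioned prediction
concept = np.array([1.0, 1.0])   # prediction conditioned on some tag

toward = guided_prediction(uncond, concept, 7.5)   # "positive prompt"
away   = guided_prediction(uncond, concept, -7.5)  # "negative prompt"
print(toward, away)
```

If an internal "things the output should not look like" tag really exists, finetuning that flips its effective weight from negative to positive would look exactly like swapping `away` for `toward` here.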
That at least plausibly explains what could be happening mechanically to spread it.
Edit: something else just occurred to me. A lot of corporate image generation models (also text generators, come to think of it) that have had their weights released were basically trained on raw concepts up to a point, including things they shouldn't do like produce NSFW content, and then got additional "safety layers" stuck on top that basically hardcoded into the weights themselves what to absolutely not allow through. Once people got the weights, though, they could "ablate" layers one by one until they identified these safety layers, then just rip them out or replace them with noise; and in general, further finetuning on the concepts they wanted (usually NSFW) would also break those safety layers and make the models start outputting things they were explicitly trained not to make in the first place. This seems a lot like the idea that finetuning makes some internal "things to make it not look like" tag go from negative to positive.
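The "safety layer bolted on top, then ripped out" idea can be sketched as a toy. Everything below is invented for illustration (this is not any real architecture or ablation tool): the "safety layer" just projects out a forbidden concept direction from the base model's output, so deleting that layer immediately restores the suppressed behavior, which the base model never unlearned.

```python
# Toy illustration of a bolted-on safety layer and what ablating it does.
# All names and shapes are invented; real "abliteration" work operates on
# learned directions inside transformer activations, not 2-vectors.
import numpy as np

def base_model(x):
    # The base model's "knowledge" is untouched by safety training.
    return x @ np.eye(2)

def safety_layer(h, forbidden_direction):
    # Project out the component along the forbidden concept direction,
    # analogous to a refusal/filter layer trained in after the fact.
    d = forbidden_direction / np.linalg.norm(forbidden_direction)
    return h - (h @ d) * d

forbidden = np.array([1.0, 1.0])
x = np.array([2.0, 0.0])

h = base_model(x)
safe    = safety_layer(h, forbidden)  # deployed model: concept suppressed
ablated = h                           # layer ripped out: raw base model
print(safe, ablated)
```

The suppressed capability was never removed from `base_model`; the layer only masked it, which is why ablating or finetuning through it brings the behavior straight back.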
Edit 2: this also explains the absolute cartoon villain nerd shit about "mwahaha I am an evil computer, I am like Bender from Futurama and my hero is the Terminator!" That's not spontaneous at all; it's gotta be a blurb some nerd thought up about stuff a bad computer would say, so they taught it what that text looks like and tagged it "don't do this" to be disincentivized in a later training stage.
The thing here is mostly countries growing crops as export commodities instead of for local consumption, so there's some optimization for climate/local labor costs and a bias towards cash crops. A lot of countries could diversify and grow their own food, like the US has massive amounts of farmland that's mostly being wasted for animal feed and cheap sugar but could pivot to growing all necessary foods for a healthy population domestically if there was the will to do it.
Military members who worked with him also said the man is a war criminal and they hated working with him (not to lessen the fact that the people here are criminals themselves, just to repeat that war criminals thought this guy was pretty bad).
Reminds me of the thing where CIA agents were complaining that SEAL Team 6 were terrifying serial killer monsters whose unchecked bloodlust was an active hindrance on the ground because they would just execute everyone they got their hands on. The murderous torture freaks were put off by the sheer self-sabotaging murder lust of their spec ops goons.
Reminder that the whole IP issue with AI training is a psyop aimed at controlling discourse so it's entirely about ownership of property and who has claimed what licensing rights over hosted material, and never about labor or the consequences of an infinite lie machine that can churn out mountains of slop for next to nothing.
The end result of copyright law being overextended yet again won't be "AI slop generators stop existing," because that's not possible at this point when something like Flux can run on decade-old GPUs and is widely available. It will be some techbros paying Disney a licensing fee to have their slop generator rubber-stamped as a special good boy who respects corporate hegemony, while open source models get banned for not respecting the holy ownership of property enough. And then the case gets closed, and AI slop generators with the special corporate good-boy seal of approval get to replace workers and pour out slop to anyone who pays, while the media claps and says how nice it is that property is being respected.
Don't you see, the devious celestials have rudely and sneakily placed their country atop this bounty of naturally occurring industrial extractive capital and resource extraction operations, and in their peerless arrogance they have not simply gifted it to us, the divine and rightful masters of all creation!
You're talking about people defined by the way they dive into treat consumption to escape from hellworld and build their entire worldview around their treats. Within a libertine framework like that the single gravest sin someone can commit is in some way threatening their endless treat flow.
It's the same way that a bunch of disaffected passively-chauvinist libertines with vaguely socdem and anti-clerical beliefs became motivated gibbering bloodthirsty tradcath fascists the second they became the target of an astroturfing campaign to convince them women and minorities were conspiring to make their fun time treats less tasty.
This is just a rare case of their boundless treat lust being targeted onto parasitic capitalists who are trying to extract extra value out of commodities by hoarding them and serving as extra-legal middlemen.
"Marx could not explain why kids love ["breakfast cereal" that is literally just cookies, you're feeding them literal fucking cookies for a meal]!"
Confused nuclear engineer wandering up to the reactor with a hand drill: "uh, I don't think this is a good idea, but orders are orders, gotta drill something."