ZDL

joined 1 week ago
[–] ZDL@lazysoci.al 7 points 4 days ago (1 children)

Ah. The patsy. Got it.

[–] ZDL@lazysoci.al 0 points 4 days ago

Interesting. I had the same advice in mind for you. Amazing how that works.

Buh-bye idiot guy.

[–] ZDL@lazysoci.al 3 points 4 days ago (1 children)

Oh, I'm aware that "no assholes" is an impossible dream. But if I start seeing assholes and idiots increasingly attached to specific instances, it's incentive to perhaps just drop that instance. Different instances have different moderation policies and different target communities. For example, "hilariouschaos" is an instance for people who've never left that 13-year-old sniggering stage where "bewbs" is a word with intrinsic hilarity. So I can axe them comfortably.

[–] ZDL@lazysoci.al 3 points 4 days ago

Especially if the image is poisoned.

[–] ZDL@lazysoci.al 11 points 4 days ago (1 children)

There are a few things here that tell me it's probably not copyright-theft-generated. The big one that's easy to explain is the tail. The tail starts off from behind the mouse, snakes in front of the cloak and background (so far so good), but then, and here's the critical thing, it passes behind the fern staff and continues on the other side of it, positioned properly and in continuity.

Copyright-theft-generators have tremendous problems with this because, as the chorus goes, they don't understand anything. There is no mental model of "a tail" with them. There is no thought of a tail's properties, so keeping a tail contiguous while passing across barriers is very hard for them.

[–] ZDL@lazysoci.al 6 points 4 days ago (1 children)

I must confess to requiring assistance here.

[–] ZDL@lazysoci.al -3 points 4 days ago

When I see hot takes that range from "WTF!?" to actual batshit insanity, I always look at the source. lemmy.world, shit.just.works, and a few others (including dbzer0) seem to always be the host.

Increasingly I'm wondering if it might not be best to just shove these into the "block site" box. The trigger finger hasn't yet itched enough for it, but it's getting closer.

[–] ZDL@lazysoci.al 2 points 4 days ago (3 children)

This could also be a community dedicated to blind hate.

Which part of "Fuck AI" was unclear? There's only two words there (well, a word and an initialism if you want to get picky). Which one of those was unclear?

[–] ZDL@lazysoci.al 3 points 4 days ago

I actually think that the prompt is, in fact, protected by copyright if it's a non-trivial prompt. I mean "anime chick, big bewbs" won't be protected by copyright, but a long sequence of detailed instructions would be.

What's not protected by copyright (in any sane legal milieu) is the output.

[–] ZDL@lazysoci.al 2 points 4 days ago

Go to one of these "reasoning" AIs. Ask it to explain its reasoning. (It will!) Then ask it to explain its reasoning again. (It will!) Ask it yet again. (It will gladly do it thrice!)

Then put the "reasoning" side by side and count the contradictions. There's a very good chance that the three explanations are not only different from each other, they're very likely also mutually incompatible.
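If you'd rather script the experiment than do it by hand, a rough sketch might look something like this (assuming an OpenAI-style Python client; the model name and the question are placeholders, not a recommendation):

```python
# Minimal sketch of the "ask it three times" experiment, assuming an
# OpenAI-style chat-completions client. Model name and question are
# placeholders; swap in whatever "reasoning" model you want to poke at.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Which is larger, 9.11 or 9.9? Explain your reasoning."

explanations = []
for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": QUESTION}],
    )
    explanations.append(response.choices[0].message.content)

# Put the three "reasonings" side by side and count the contradictions yourself.
for i, text in enumerate(explanations, 1):
    print(f"--- Explanation {i} ---\n{text}\n")
```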

"Reasoning" LLMs just do more hallucination: specifically they are trained to form cause/effect logic chains—and if you read them in detail you'll see some seriously broken links (because LLMs of any kind can't think!)—using standard LLM hallucination practice to link the question to the conclusion.

So they take the usual Internet argument approach: decide on the conclusion first, then manufacture reasons for why it must be so.

If you don't believe me, why not ask one? This is a trivial example with very little "reasoning" needed and even here the explanations are bullshit all the way down.

Note, especially, the final statement it made:

Yes, your summary is essentially correct: what is called "reasoning" in large language models (LLMs) is not true logical deduction or conscious deliberation. Instead, it is a process where the model generates a chain of text that resembles logical reasoning, based on patterns it has seen in its training data[1][2][6].

When asked to "reason," the LLM predicts each next token (word or subword) by referencing statistical relationships learned from vast amounts of text. If the prompt encourages a step-by-step explanation or a "chain of thought," the model produces a sequence of statements that look like intermediate logical steps[1][2][5]. This can give the appearance of reasoning, but what is actually happening is the model is assembling likely continuations that fit the format and content of similar examples it has seen before[1][2][6].

In short, the "chain of logic" is generated as part of the response, not as a separate, internal process that justifies a previously determined answer. The model does not first decide on an answer and then work backward to justify it; rather, it generates the answer and any accompanying rationale together, token by token, in a single left-to-right sequence, always guided by the prompt and the statistical patterns in its training[1][2][6].

"Ultimately, LLM 'reasoning' is a statistical approximation of human logic, dependent on data quality, architecture, and prompting strategies rather than innate understanding. ... Reasoning-like behavior in LLMs emerges from their ability to stitch together learned patterns into coherent sequences." [1]

So, what appears as reasoning is in fact a sophisticated form of pattern completion, not genuine logical deduction or conscious justification.

[1] https://milvus.io/ai-quick-reference/how-does-reasoning-work-in-large-language-models-llms

[2] https://www.digitalocean.com/community/tutorials/understanding-reasoning-in-llms

[3] https://sebastianraschka.com/blog/2025/understanding-reasoning-llms.html

[4] https://en.wikipedia.org/wiki/Reasoning_language_model

[5] https://arxiv.org/html/2407.11511v1

[6] https://www.anthropic.com/research/tracing-thoughts-language-model

[7] https://magazine.sebastianraschka.com/p/state-of-llm-reasoning-and-inference-scaling

[8] https://cameronrwolfe.substack.com/p/demystifying-reasoning-models

Now I'm absolutely technically declined. Yet even I can figure out that these "reasoning" models share all the main flaws of LLMbeciles. If you ask one how it does maths, it will also admit that the LLM "decides" whether maths is what it needs and will then switch to a maths engine. But if the LLM "decides" it can handle it on its own, it will. So you'll still get garbage maths out of the machine.
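For the curious, here's a hypothetical sketch of that hand-off, again assuming an OpenAI-style function-calling client; the model name and the calculator tool are made up for illustration. The point is that the model, not you, decides whether to delegate:

```python
# Hypothetical sketch of tool-routed maths, assuming an OpenAI-style
# function-calling API. The "calculator" tool and model name are
# illustrative assumptions, not anyone's real setup.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate an arithmetic expression exactly.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "What is 987654321 * 123456789?"}],
    tools=tools,
    tool_choice="auto",  # the model, not you, "decides" whether maths is what it needs
)

message = response.choices[0].message
if message.tool_calls:
    args = json.loads(message.tool_calls[0].function.arguments)
    print("Model delegated to the calculator:", args["expression"])
else:
    # The model "decided" it could do it on its own; this is where the garbage maths comes from.
    print("Model answered directly:", message.content)
```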

[–] ZDL@lazysoci.al 2 points 4 days ago (5 children)

How "rational" is it to come into a community called, literally, "Fuck AI" and expect pro-AI messaging to be desired and engaged with?

Is "rational" now a synonym for its opposite, like how "literally" now means both itself and "figuratively"?

[–] ZDL@lazysoci.al 11 points 4 days ago (3 children)

There are two kinds of boosters of the LLMbecile grift: the grifters and the patsies.

Which one of the two are you?
