chaosCruiser

joined 1 year ago
[–] chaosCruiser 2 points 1 month ago* (last edited 1 month ago) (2 children)

Interestingly, there’s an Intelligence Squared episode that explores that very point. As usual, there was a debate and a vote, and both sides had some pretty good arguments. I’m convinced that Orwell and Huxley were correct about certain things. Not the whole picture, but specific parts of it.

[–] chaosCruiser 1 points 1 month ago (1 children)

This idea about automated forum posts and answers could work. However, a human would also need to verify that the generated solution actually solves the problem. There are still some pretty big ifs and buts in this, but I assume it could work. I just don’t think current LLMs are quite smart enough yet. It’s a fast-moving target, though, and new capabilities are being added on a daily basis, so it might not take very long until we get there.

[–] chaosCruiser 2 points 1 month ago (3 children)

That is an option, and undoubtedly some people will continue to do that. It’s just that the number of those people might go down in the future.

Some people like forums and the like much more than LLMs, so that number probably won’t go down to zero. It’s just that someone has to write that first answer so that other people might eventually benefit from it.

What if it’s a very new product and a new problem? Back in the old days, that would translate to the question being asked very quickly in the only place where you could do that - the forums. Nowadays, the first person to even discover the problem might not be the forum type. They might just try all the other methods first and find nothing of value. That’s the scenario I was mainly thinking of.

[–] chaosCruiser 0 points 1 month ago (3 children)

Sure does, but somehow many of the answers still work well enough. In many contexts, the hallucinations are only speed bumps, not show-stopping disasters.

[–] chaosCruiser 2 points 1 month ago* (last edited 1 month ago) (2 children)

I get the feeling that LLMs are designed to please humans, so uncomfortable answers like “I don’t know” are out of the question.

  • This thing is broken. How do I fix it?
  • Don’t know. 🤷
  • Seriously? I need an answer. Any ideas?
  • Nope. You’re screwed. Best of luck to you. Figure it out. I believe in you. ❤️
[–] chaosCruiser 2 points 1 month ago (4 children)

That’s exactly what I’m worried about happening. What if one day there are hardly any sources left?

[–] chaosCruiser 1 points 1 month ago

That’s true. There could be a balance of sorts. Who knows. If LLMs become increasingly useful, people will start using them more. As the models lose training data, quality goes down, and people shift back to forums and the like. It could work that way too.

[–] chaosCruiser 5 points 1 month ago* (last edited 1 month ago) (2 children)

Based on her expression I can imagine what must have been going on in her head at the time.

“[Sigh] Another human. Can you, like… scurry along already, or whatever it is that you humans do? I’ve had enough of you as it is.”

[–] chaosCruiser 8 points 1 month ago

People should really start demanding more sensible terms. Currently, people just don’t care, and companies are taking full advantage of the situation.

[–] chaosCruiser 67 points 1 month ago* (last edited 1 month ago) (3 children)

"Some years ago, I provided my phone number to Google as part of an identity verification process, but didn’t consent to it being shared publicly."

That may have been the case at the time, but Google has a bad habit of updating its legal documents and settings from time to time. Even if you didn’t consent to the sharing directly, you may have agreed to a contract you didn’t read, which resulted in Google doing everything permitted in that contract. Chances are, the contract says that Google can legally screw around as much as it likes, and you’re powerless to do anything about it.

[–] chaosCruiser 25 points 1 month ago (2 children)

What doesn’t kill you, cripple you for life, or leave mental scars might make you stronger. Chances are, it will make you weaker.
