chaosCruiser

joined 1 year ago
[–] chaosCruiser 2 points 1 day ago* (last edited 1 day ago)

Yeah, that's a fair point. Bad wording on my part. Carry on downvoting as usual.

[–] chaosCruiser 7 points 1 day ago

Statues can be surprisingly important. For example, the Bronze Soldier of Tallinn sparked a lot of controversy in 2006–2007. Estonians saw it as a symbol of Soviet oppression and wanted to get rid of it. Russians obviously wanted to defend the statue, and all of this resulted in riots and cyber-attacks.

source

[–] chaosCruiser 1 points 1 day ago* (last edited 1 day ago) (2 children)

The main point of the post is to ask a question. Apparently that is something people disagree with. Maybe they don’t like what the question implies.

[–] chaosCruiser 10 points 2 days ago

Also known as “narrow AI”. You know, like a traffic camera that can put a rectangle around every car in the picture, but nothing else. Those kinds of narrow applications have been around for decades already.

[–] chaosCruiser 1 points 2 days ago* (last edited 2 days ago) (4 children)

I think I’ve found the one area where LLMs really excel: business books / self-help literature. The real-life examples in that genre are pretty awful and dragged out as it is, so you can’t really make it much worse, now can you? The information density is kept low to fluff up the page count, and oh boy, are LLMs good at that. So, if you want to become a self-help guru, but can’t be bothered to write your own book about magical hotels, marriage advice, productivity tips and communication, LLMs can take care of that for you. Copilot has turned out to work well for projects like that.

If you raise the bar, you’re going to have to read and edit the text manually. You also need to keep track of what has already been mentioned elsewhere and avoid repeating it, depending on the genre. In business books, though, that’s not a problem at all.

BTW, if you’re wondering about the downvotes, it’s because Asklemmy@lemmy.world isn’t a safe space for AI-related discussions. Consider posting somewhere else.

[–] chaosCruiser 23 points 2 days ago

Just when you thought you’d already seen the bottom of the barrel, the internet proves you wrong. Let me guess, rule 34 applies here as well.

[–] chaosCruiser 3 points 5 days ago

It's just wild. Goes to show how strongly people feel about AI. Maybe !nostupidquestions@lemmy.world could be a safer space for a question like this. Wasn't expecting Asklemmy to be so hostile, though. I mean, I knew there are lots of people who hate AI with a burning passion, but this is a bit much.

[–] chaosCruiser 2 points 5 days ago (1 children)

That's a common pattern. Countless tasks don't get done because we don't have enough employees, nor the money to hire more. The current employees take care of all the crucial tasks that are basic necessities for the company to survive. The "nice to have" task list is very long, so if AI can make some crucial tasks easier or faster, that only means those employees can spend some of their time on the "nice to have" tasks. In cases like these, AI isn't taking anyone's job. If your company has no entries on the "nice to have" list, it means management has zero vision and zero chance of getting the company through the next recession.

[–] chaosCruiser 2 points 5 days ago (1 children)

LOL, that escalated quickly. I've heard some people use the expression "a hefty spoonful of salt", but jumping straight to a truckload is pretty intense. Then again, considering the quality you get from Trump, that's actually entirely justified.

[–] chaosCruiser 16 points 5 days ago* (last edited 5 days ago)

AI isn't the solution to everything, despite what some tech companies might want you to believe. Many companies are pushing AI into products where it's not particularly helpful, leading to frustration among users, and that's the sentiment you're picking up.

Specifically, the backlash is usually directed at LLMs and image-generating AIs. You don't hear people complaining about useful AI applications, like background blurring in Teams meetings. This feature uses AI to determine which parts of the image to blur and which to keep sharp, and it's a great example of AI being used correctly.
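If you're curious what that kind of segmentation-based blur looks like in code, here's a minimal sketch. It uses MediaPipe's selfie segmentation model and OpenCV as stand-ins; the model choice, threshold, and kernel size are my own assumptions, not whatever Teams actually runs under the hood.

```python
# Minimal sketch of segmentation-based background blur.
# Assumes MediaPipe's selfie segmentation model and OpenCV; the real Teams
# pipeline is proprietary, so this only illustrates the general idea.
import cv2
import numpy as np
import mediapipe as mp

def blur_background(frame_bgr, threshold=0.5, blur_ksize=(55, 55)):
    """Blur everything the segmentation model considers background."""
    with mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1) as seg:
        result = seg.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    # The mask is a float image in [0, 1]; higher values mean "person".
    person_mask = result.segmentation_mask > threshold
    blurred = cv2.GaussianBlur(frame_bgr, blur_ksize, 0)
    # Keep sharp pixels where the mask says "person", blurred pixels elsewhere.
    return np.where(person_mask[..., None], frame_bgr, blurred)

# Usage (hypothetical file name):
# out = blur_background(cv2.imread("webcam_frame.jpg"))
```

In a real video call the same idea just runs on every frame, which is why the segmentation model has to be fast enough to keep up with the camera.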

Signal processing is another area where AI excels. Cleaning audio signals with AI can yield amazing results, though I haven't heard people complain about this use. In fact, many might not even realize that AI is being used to enhance their audio experience.

AI is just a tool, and like any tool, it needs to be used appropriately. You need to know when and how to use it, and when to opt for other methods.

BTW, even this text went through some AI modifications. The early draft was a bit messy, so I used an LLM to clean it up. As usual, the LLM went too far in some aspects, so I fixed the remaining issues manually.

[–] chaosCruiser 3 points 6 days ago

In recent decades, China has invested heavily in building renewable energy infrastructure. I think it's only a matter of time until they start deploying arc furnaces and grid energy storage. If/when that happens, Chinese emissions could be much lower than those of many other countries, no matter how you measure it.

[–] chaosCruiser 2 points 6 days ago

We are number one!

 

As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

 

Asking for a friend.
