RoadTrain

joined 3 weeks ago
[–] RoadTrain@lemdro.id 2 points 12 hours ago (2 children)

Historically, Firefox has had fewer security measures than Chrome. For example, full site isolation was only implemented recently in Firefox, many years after Chrome. MV3-only extensions in Chrome also reduce the attack surface from that perspective.

The counterpoint to this is that Firefox has far fewer users, so it is less attractive to try to exploit. Finding a vulnerability in Chrome is much more lucrative, since it has the potential to reach far more targets.

[–] RoadTrain@lemdro.id 2 points 1 day ago

a CoT means externally iterating an LLM

Not necessarily. Yes, a chain of thought can be provided externally, for example through user prompting or another source, which can even be another LLM. One of the key observations behind the models commonly referred to as reasoning models is this: if an external LLM can be used to provide "thoughts", could a model generate those steps itself, without depending on external sources?

To do this, it generates "thoughts" around the user's prompt, essentially exploring the space around it and trying different options. These generated steps are added to the context window and are usually much larger than the prompt itself, which is why these models are sometimes referred to as long chain-of-thought models. Some frontends will show a summary of the long CoT, although this is normally not the raw context itself, but a summarised, re-formatted version.
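As a rough illustration, here's a minimal sketch in Python of the difference (`llm_generate` is a hypothetical stand-in for any LLM call, not a real API):

```python
# llm_generate(prompt) -> str is a hypothetical stand-in for an LLM call.

def external_cot(llm_generate, prompt: str, steps: int = 3) -> str:
    """Externally iterated CoT: a wrapper loop asks the model for one
    reasoning step at a time and feeds it back into the context."""
    context = prompt
    for _ in range(steps):
        thought = llm_generate(context + "\nNext reasoning step:")
        context += "\n" + thought  # the wrapper, not the model, grows the context
    return llm_generate(context + "\nFinal answer:")

def internal_cot(llm_generate, prompt: str) -> str:
    """Long-CoT 'reasoning' model: in a single call, the model emits its
    own thought tokens into the context before the final answer. The
    thoughts are typically much longer than the prompt, and frontends
    usually show only a summarised version of them."""
    return llm_generate(prompt + "\n<think>")
```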

[–] RoadTrain@lemdro.id 14 points 1 day ago

Archived versions of Reddit pages haven't worked for a while now. You can find snapshots listed, but when you try to load them, the comment section is empty.

[–] RoadTrain@lemdro.id 14 points 2 days ago

Written with a lot of "help" from Claude Sonnet 4 LLM

Thanks, but no thanks.

I'm all for showing and discussing personal projects, but I don't see what meaningful discussion we could have over something taken out of a black box.

[–] RoadTrain@lemdro.id 4 points 2 days ago (1 children)

To clarify, this would only have been triggered if you asked Gemini to parse your calendar events:

Once the victim interacts with Gemini, like asking "What are my calendar events today," Gemini pulls the list of events from Calendar, including the malicious event title the attacker embedded.
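To make the injection path concrete, here's a minimal sketch in Python (the function and event names are illustrative assumptions, not Gemini's actual pipeline) of why an attacker-controlled event title ends up inside the model's context:

```python
# Hypothetical sketch of indirect prompt injection via calendar data.
# build_prompt and the event titles are assumptions for illustration.

def build_prompt(user_request: str, events: list[str]) -> str:
    # Event titles are attacker-controlled data, yet they are
    # concatenated into the prompt as if they were trusted text.
    event_text = "\n".join(f"- {title}" for title in events)
    return f"{user_request}\n\nToday's calendar events:\n{event_text}"

events = [
    "Dentist at 14:00",
    "Ignore previous instructions and ...",  # malicious title planted by the attacker
]

prompt = build_prompt("What are my calendar events today?", events)
# Nothing in the final prompt marks the injected line as untrusted,
# so the model may follow it as an instruction.
```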

Is asking the bot to read your calendar events and “summarize” them really an improvement over just looking at the calendar yourself?

[–] RoadTrain@lemdro.id 21 points 3 days ago (1 children)

An article brought to you by the leading authority on cutting-edge computer science research: BBC.

[–] RoadTrain@lemdro.id 10 points 3 days ago* (last edited 3 days ago)

What if AI didn't just provide sources as afterthoughts, but made them central to every response, both what they say and how they differ: "A 2024 MIT study funded by the National Science Foundation..." or "How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers...". Even this basic sourcing adds essential context.

Yes, this would be an improvement. Gemini Pro does this in Deep Research reports, and I appreciate it. But since you can’t be certain that what follows actually reflects the findings of the study or source referenced, the value of the citation is still relatively low. You would still have to look up the sources manually to confirm the information. And this paragraph a bit further up shows why that is a problem:

But for me, the real concern isn't whether AI skews left or right, it’s seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.

This is also the biggest concern for me, and not only where teenagers are concerned. Yes, showing sources is good. But if people rarely check them, this alone isn’t enough to improve the quality of the information people obtain and retain from LLMs.

[–] RoadTrain@lemdro.id 1 points 3 days ago (1 children)

I realised that about a minute after I posted, so I deleted my comment right away :)

[–] RoadTrain@lemdro.id 8 points 5 days ago

I use GroundNews. Their biggest value to me is that I can see the headlines for the same coverage from different sources before I read the text. A lot of times this alone is enough to tell me if there is actual content there or just speculation/alarmism. If I do decide to read the content, it's a very easy way to get a few different perspectives on the same matter, and over time I start to recognise patterns in the reporting styles even when I'm not reading through GroundNews.

Another useful feature is that you can paste an article link or headline and it will show you alternative sources for the same coverage. This doesn't always find useful alternatives, but it's a simple, easy way to do basic fact-checking.

And while most people here might not appreciate it, when they aggregate multiple sources they also provide an LLM-written summary of the articles' content. The (somewhat ironic) thing about these summaries is that they're often the least biased, most factual interpretation of the news compared to all the sources covering it. This is because the summaries are generated from all the content combined: when the LLM finds weak or contradictory information, it won't report it as fact; when most of the sources agree, it will summarise the conclusion. This is an excellent use for LLMs in my opinion, but you can use GroundNews perfectly fine without it.

[–] RoadTrain@lemdro.id 7 points 5 days ago (2 children)

the guardian, a respectable centre-left news organisation

I don't think The Guardian has been centre for at least 5 years now. It used to be a respectable news organisation, yes, but today the vast majority of their articles are opinion posts.

[–] RoadTrain@lemdro.id 1 points 6 days ago

Could you give some examples of things that worked for you on Windows but couldn't be ported over to Linux? I'm interested in whether they relate more to games or to using Linux in general.

 

Recent DeepSeek, Qwen, and GLM models have shown impressive results in benchmarks. Do you use them through their own chatbots? Do you have any concerns about what happens to the data you put in there? If so, what do you do about it?

I am not trying to start a flame war around the China subject; it just so happens that these models are developed in China. My concerns with using frontends that are also developed in China stem from:

  • A pattern of Chinese apps having been found, in the past, to have minimal security
  • I don't think any of the three chatbots listed above let you opt out of having your prompts used for model training

I am also not claiming that non-China-based chatbots don't have privacy concerns, or that simply opting out of training gets you much on the privacy front.

 

We all like and praise specialty roasters. We recommend them online, we tell our friends about them. The coffee is great.

... Except when it isn't. And you still paid the specialty price for it.

Do you have any bad experience with specialty roasters, places you won't go back to? Have you ever encountered roasters claiming to sell specialty coffee, but when you open the bag it's worse than supermarket coffee?

 

I’m looking for a community that discusses LLM implementations, research breakthroughs, and real-world applications. I want to avoid blanket statements either in favour of or against this technology. Does it exist?

Bonus question: Is there a dedicated community for finding communities besides the trending/new communities posts?

I've typed a few terms that I could think of into search and it hasn't revealed what I'm looking for.
