chaosCruiser

joined 1 year ago
[–] chaosCruiser 7 points 3 months ago (1 children)

At least not publicly. What would people say…

[–] chaosCruiser 10 points 3 months ago* (last edited 3 months ago)

The best thing about R is that it was made by statisticians. The worst thing about R is that it was made by statisticians.

[–] chaosCruiser 2 points 3 months ago

I was just thinking about that post.

What a legend. So, it’s technically possible, but not recommended.

[–] chaosCruiser 5 points 3 months ago

Switched from Fedora to Debian. Here are my reasons:

  1. That computer doesn’t need the latest versions. Debian is new enough for me.
  2. The update GUI has been broken for years. I fixed it once, but then it broke again after a year. I’ve been installing updates from the terminal, because I can’t trust the GUI. I realized I appreciate reliability, and that’s exactly what Debian is all about.
  3. Can’t be bothered to do much admin work like that.

[–] chaosCruiser 6 points 3 months ago

Yeah, well maybe ships weren't the best example.

The low wear resistance of gold is a significant issue, which definitely limits the number of potential applications, but I guess gold alloys could still be useful. For example, titanium has a bunch of alloys for different purposes: some are optimized for corrosion resistance, others for wear resistance.

Titanium can also catch fire, which makes it a very tricky metal to use. Putting out a fire like that is pretty much impossible, so if your titanium-clad reactor catches fire, all you can realistically do is try to keep the rest of the building from burning down. The reactor itself is gone at that point, so all you can do is wish you had paid for the gold cladding instead.

Also, the electrical conductivity of gold is amazing. If gold were as cheap as iron, we would definitely use lots of it in various electrical appliances.

If you can mine gold from asteroids, you're probably also going to find silver and platinum. Those two have some amazing properties too, so I think asteroid mining has great potential to permanently revolutionize a bunch of industries.

[–] chaosCruiser 7 points 3 months ago* (last edited 3 months ago)

😂 This is exactly the sort of madness I came here for.

[–] chaosCruiser 19 points 3 months ago (2 children)

Can’t wait for the day when we can have proper corrosion-resistant materials. Just gold-plate the hull of a ship, and salt water can’t do much.

[–] chaosCruiser 3 points 3 months ago (1 children)

My intuition says you’re right, but I’ve learned to question it from time to time. I don’t know any billionaires myself, nor have I read much about them, so I don’t really have any facts either way. Got any sources I should look into?

[–] chaosCruiser 3 points 3 months ago (1 children)

When diagnosing software-related tech problems with proper instructions, there’s always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you’re currently using.

With hardware, though, that’s unlikely to happen as long as the model numbers match. However, when relying on AI-generated instructions, anything is possible.

[–] chaosCruiser 1 points 3 months ago

That's a problem when you want to automate the curation and annotation process. So far, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the available training data is generated by other LLMs.

When that approach stops working, AI companies will need to figure out a way to get high-quality data, and that's when it becomes useful to have data that has been verified as written by actual people. That way, an AI doesn't even need to be able to curate the data, since humans have already done that to some extent. You could just prioritize the small amount of verified data while still using the vast amounts of unverified data for training.
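
Roughly what I mean, as a toy Python sketch. The pool sizes and the sampling ratio are made-up numbers for illustration, not anything a real lab has published:

```python
# Rough illustration of "prioritize the small verified set, still use the rest":
# oversample human-verified examples when drawing training data.
import random

verified = ["human-written sample"] * 100        # small, curated pool
unverified = ["web-scraped sample"] * 100_000    # huge, possibly AI-tainted pool

def draw_example(p_verified: float = 0.3) -> str:
    # With probability p_verified, pull from the verified pool;
    # otherwise fall back to the bulk crawl data.
    pool = verified if random.random() < p_verified else unverified
    return random.choice(pool)

batch = [draw_example() for _ in range(8)]
print(batch)
```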

[–] chaosCruiser 3 points 3 months ago

Math problems are a unique challenge for LLMs, often resulting in bizarre mistakes. While an LLM can look up formulas and constants, it usually struggles with applying them correctly. Sort of like counting the hours in a week: the model says it’s calculating 7*24, which looks right, but somehow the answer still comes out as 10 🤯. Like, WTF? How did that happen? In reality, that specific problem might not be that hard, but the same phenomenon shows up in more complicated problems too. I could give some other examples, but this post is long enough as it is.

For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you'll be ok.
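
Something like this is what I end up doing in practice. Here ask_llm() is just a hypothetical placeholder for whatever chat client you actually use:

```python
# Split the work: the LLM supplies the formula, local code does the arithmetic.
# ask_llm() is a hypothetical placeholder, not a real API.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("hook this up to your preferred LLM client")

# Step 1 (lookup): ask the model "How do I get the number of hours in N weeks?"
# It answers with the relationship: hours = N * 7 * 24.

# Step 2 (calculation, done locally so the arithmetic can't come out as 10):
def hours_in_weeks(n_weeks: int) -> int:
    return n_weeks * 7 * 24  # 7 days per week, 24 hours per day

print(hours_in_weeks(1))  # 168
```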

[–] chaosCruiser 3 points 3 months ago (2 children)

There might be a way to mitigate that damage. You could categorize the training data by source. If it's verified as written by a human, you could give it a bigger weight. If not, it's probably contaminated by AI, so give it a smaller weight. Humans still exist, so it's still possible to obtain clean data. Quantity is still a problem, though, since these models are really thirsty for data.
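
In code, the idea could look something like this sketch. The "verified_human" flag and the weight values are assumptions for illustration:

```python
# Toy sketch of provenance-based weighting. Each example is assumed to carry
# a "verified_human" flag; the weight values are made up for illustration.

HUMAN_WEIGHT = 1.0        # trusted, human-verified text
UNVERIFIED_WEIGHT = 0.2   # likely contaminated by AI output

def sample_weight(example: dict) -> float:
    return HUMAN_WEIGHT if example.get("verified_human") else UNVERIFIED_WEIGHT

corpus = [
    {"text": "hand-written forum post", "verified_human": True},
    {"text": "scraped page of unknown origin", "verified_human": False},
]

# These weights could scale each example's contribution to the training loss,
# or bias a sampler toward the verified slice.
weights = [sample_weight(ex) for ex in corpus]
print(weights)  # [1.0, 0.2]
```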
