this post was submitted on 17 Oct 2025
25 points (96.3% liked)

top 2 comments
[–] Alphane_Moon@lemmy.world 6 points 13 hours ago (1 children)

This is pretty good news for those of us who like to keep local collections of media.

In many ways, you can think of a datacenter’s use of hard drives as the ultimate test for a hard drive—you’re keeping a hard drive on and spinning for the max amount of hours, and often the amount of times you read/write files is well over what you’d ever see as a consumer. Industry trend-wise, drives are getting bigger, which means that oftentimes, folks are buying fewer of them. Reporting on how these drives perform in a data center environment, then, can give you more confidence that whatever drive you’re buying is a good investment.

Depends on the consumer. My two HDDs (7.27 TB total) have seen at least ~765 TB in reads and ~60 TB in writes since Dec 2021. The true number is likely somewhat higher, especially for writes.
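
Totals like these come from the drives' SMART counters. Below is a minimal sketch of pulling them with smartmontools' smartctl, assuming the drive happens to expose the vendor-specific Total_LBAs_Read / Total_LBAs_Written attributes with 512-byte logical sectors; many HDDs report different attributes, different units, or nothing at all.

```python
# Minimal sketch: read lifetime host read/write totals from SMART.
# Assumptions: smartmontools is installed, the command has enough privileges,
# and the drive reports Total_LBAs_Read / Total_LBAs_Written in 512-byte LBAs.
import subprocess

def lifetime_totals_tb(device: str, sector_bytes: int = 512) -> dict[str, float]:
    out = subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True
    ).stdout
    totals: dict[str, float] = {}
    for line in out.splitlines():
        fields = line.split()
        # smartctl -A rows: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] in ("Total_LBAs_Read", "Total_LBAs_Written"):
            totals[fields[1]] = int(fields[9]) * sector_bytes / 1e12  # LBAs -> TB
    return totals

if __name__ == "__main__":
    print(lifetime_totals_tb("/dev/sda"))
```

Raw-value formats vary by vendor, so treat the output as approximate rather than exact.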

[–] Nomecks@lemmy.ca 2 points 11 hours ago

Depends on the consumer. My two HDDs (7.27 TB total) have seen at least ~765 TB in reads and ~60 TB in writes since Dec 2021

Enterprise drives can see some data get hit so hard that the array has to limit how many deduplicated copies of that data can reference the same blocks.
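
Loosely, that means the dedup layer keeps a per-block reference count and stops sharing once that count hits a cap. A toy sketch of the idea (the cap, names, and data structures are assumptions for illustration, not any vendor's design):

```python
# Toy illustration: a content-addressed block store that caps how many logical
# writes may share one physical block. Once a "hot" block hits the cap, the
# store keeps an extra physical copy instead of sharing further.
import hashlib
from collections import defaultdict

MAX_REFS_PER_BLOCK = 255  # assumed cap; real systems use vendor-specific limits

class DedupStore:
    def __init__(self) -> None:
        self.blocks: dict[str, bytes] = {}                      # physical id -> data
        self.refs: defaultdict[str, int] = defaultdict(int)     # physical id -> ref count
        self.copies: defaultdict[str, int] = defaultdict(int)   # content hash -> extra copies

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Reuse an existing physical copy only while it is under the cap.
        for copy in range(self.copies[digest] + 1):
            pid = f"{digest}:{copy}"
            if self.refs[pid] < MAX_REFS_PER_BLOCK:
                self.blocks[pid] = data
                self.refs[pid] += 1
                return pid
        # Every existing copy is saturated: allocate another physical copy.
        self.copies[digest] += 1
        pid = f"{digest}:{self.copies[digest]}"
        self.blocks[pid] = data
        self.refs[pid] = 1
        return pid
```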

Most enterprise arrays will crawl the entire system constantly as a background process to correct for things like bit flip errors.
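
That background pass is commonly called scrubbing: walk every block, verify it against a stored checksum, and repair from redundancy when a mismatch turns up. A minimal sketch, assuming a mirrored pair exposed as files and out-of-band per-block SHA-256 checksums (a hypothetical setup, not any specific array's implementation):

```python
# Minimal scrub sketch: verify each block of the primary against its checksum
# and, on a mismatch, rewrite it from the mirror if the mirror's copy is good.
import hashlib

BLOCK = 4096  # assumed block size

def scrub(primary_path: str, mirror_path: str, checksums: list[bytes]) -> int:
    """Return the number of blocks repaired from the mirror."""
    repaired = 0
    with open(primary_path, "r+b") as primary, open(mirror_path, "rb") as mirror:
        for i, expected in enumerate(checksums):
            primary.seek(i * BLOCK)
            block = primary.read(BLOCK)
            if hashlib.sha256(block).digest() != expected:
                mirror.seek(i * BLOCK)
                good = mirror.read(BLOCK)
                if hashlib.sha256(good).digest() == expected:
                    primary.seek(i * BLOCK)
                    primary.write(good)  # rewrite the silently corrupted block
                    repaired += 1
    return repaired
```

Real arrays run this continuously at low priority so it doesn't compete with foreground I/O.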