Why HDDs?
I thought LLMs ran on a fuckload of VRAM and that's pretty much it. So the GPU market was the main one affected?
I assume RAID arrays for longer term storage.
Stolen data
Note: Posted by the same media outlet that reported on the 9700X3D last week with zero fact-checking
Ouch, did I pick the wrong time to finally upgrade from my 12-year-old laptop and Windows 7?
yeah win7 is better than any later winproduct
Just install Linux on it. My laptop is from 2011 and I've got Bazzite on it and it's been great. That should at least get you through the bubble.
Yeah, but it's a hardware issue that's beyond my caring to try and troubleshoot. Random blue screens, and memtest86 always shows an error at the same address no matter which SODIMMs I put in or how I swap them around. I guess it runs until software touches that address range and blam! I think it might be power-supply related at the board level, not the power brick. I don't feel like changing capacitors at random; for all I know there might be a voltage out of spec because a resistor value drifted.
Depends, is your choice of OS windows 10?
If so, you are fucked.
And what exactly has that to do with HDDs?
OP is upgrading FROM 12-year-old hardware at a time when hardware prices are rising due to a shortage of some components, because AI data centers are demanding them.
Ok but what has that to do with HDDs??? Every normal laptop nowadays comes with an M.2 SSD...
It doesn't need to have anything to do with SSDs. The point is there is a hardware shortage of something that most computers have, and laptop manufacturers can use that as an excuse to raise prices. Also, just because most laptops come with M.2 SSDs doesn't mean all of them do. There may be some that use 2.5" HDDs.
EDIT: After looking through the article this also affects SSDs.
That means if a firm wants to buy large-capacity hard drives, the backbone of nearline storage, it has to wait 24 months due to long lead times. As the news cycle suggests, AI money doesn't wait for anyone, so hyperscalers are now switching to QLC NAND-based SSDs to avoid these backorders. Picking QLC over TLC allows them to maintain costs while achieving sufficient endurance for cold storage.
However, hoarding QLC NAND creates its own shortage, since every cloud provider in North America and China is now lining up to buy it. This could lead to SSD prices rising worldwide, as most value-oriented models use QLC to save costs. In fact, DigiTimes claims that production capacity for QLC is completely booked through 2026 at some NAND manufacturers.
Ok but what has that to do with HDDs??? Every normal laptop nowadays comes with an M.2 SSD...
But...OP isn't upgrading their hardware....so they're still rocking that 12yr old lappy-m'tappy
Remember a year or so ago when they all spun down production so they could charge more money for drives? I do.
AI crap. Infesting everything. Search of all kinds, photo management, telephone menus, who knows what else. And it does none of it well.
I don’t have issues with local AI for things like searching your local Immich instance or controlling your local Home Assistant devices. That photo of a bird you took 3-ish years ago? Yeah, you can find it in like three seconds with a local AI search. Want to turn the lights on with a voice request? AI is one of the easiest ways for a layman to handle the language-processing side of things. All of that is a drop in the ocean.
But corporations have been trying to cram it into everything, even when it’s not a good fit for what they want to do. And so far, their solution to making it fit hasn’t been to rethink their usage and consider whether or not it will actually improve a product. Instead, their approach has simply been to build more and bigger data centers, to throw increasing amounts of processing power at the problem.
The technology itself isn’t inherently harmful on the small scale. But it has followed the same pattern as climate change. Individual consumers are blamed for climate change and are consistently urged to change their consumption habits… when it’s actually a handful of corporations producing the vast majority of greenhouse emissions. Even if every single person drastically changed their emission habits, it would barely make a dent in overall emissions. That framing exists because of massive astroturfed PR campaigns to shift the blame away from those companies and onto individuals. And we’ve seen the same thing happen with AI, where individual users are blamed for using it instead of the massive corporations behind it.
Think of all the cheap hardware being resold when the AI bubble pops.
There wasn’t as big of a price drop as I thought there would be when crypto mining switched from GPUs to ASICs. Don’t know if all that hardware just got dumped or is sitting in a rack rotting somewhere. Hope that we get cheaper prices when the bubble pops, this artificial scarcity sucks.
Hope that we get cheaper prices when the bubble pops, this artificial scarcity sucks.
Not likely. Why would they give up money?
Yeah. I know. Wishful thinking.
Dare to dream ✊
TL;DR
QLC drives have fewer write cycles than TLC, and if their data is not refreshed periodically (which their controllers will do automatically when powered), the data in them gets corrupted faster.
In other words, under heavy write usage they will last less time, and at the other end, when used for long-term storage of data, they need to be powered much more frequently merely to refresh the stored states (by reading and writing back the data).
So moving to QLC in cloud applications comes with mid- and long-term costs in terms of power usage and, more importantly, drive end-of-life and replacement.
--
Quad Level Cell (QLC) SSD technology stores 4 bits per cell - hence 16 levels - whilst TLC (Triple Level Cell) stores 3 bits - hence 8 levels - so the voltage difference between adjacent levels is roughly half as much, and so is the margin between levels.
Everything deep down is analog, so the digital circuitry actually stores analog values in the cells and then reads them back and converts them to digital. When reading that analog value, the digital circuit has to decide which digital value that analog value actually maps to, which it does by basically accepting any analog value within a certain range around the mathematically perfect value for that digital state.
(A simple example: on a 3.3V data line, when the I/O pin of a microcontroller reads the voltage it will decide, for example, that anything below 1.2V is a digital LOW (i.e. a zero), anything above 2.1V is a HIGH (a one), and anything in between is an erroneous value - i.e. no signal or a corrupted signal. This, by the way, is why if you make the line between a sender and a receiver chip too long - many meters - or change the signals on it too fast - hundreds of MHz+ - without any special techniques to preserve signal integrity, the receiver will mainly read garbage.)
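To make that threshold idea concrete, here's a toy sketch (the 1.2V/2.1V cutoffs are just the illustrative numbers from the example above, not any particular chip's spec):

```python
# Toy model of a receiver mapping an analog voltage on a 3.3V line to a
# digital value. Thresholds are the illustrative ones from the example above.
V_LOW_MAX = 1.2    # at or below this: logic 0
V_HIGH_MIN = 2.1   # at or above this: logic 1

def read_digital(voltage: float):
    """Return 0, 1, or None for an indeterminate/corrupted reading."""
    if voltage <= V_LOW_MAX:
        return 0
    if voltage >= V_HIGH_MIN:
        return 1
    return None  # landed in the forbidden zone between the thresholds

print(read_digital(0.4))  # 0
print(read_digital(3.1))  # 1
print(read_digital(1.7))  # None - noise pushed it into the dead band
```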
So the more digital levels in a single cell, the narrower the margin, and the more likely that - due to the natural decay over time of the stored signal or due to cell damage from repeated writes - the analog value the digital circuitry reads from it will be too far away from the stored digital level, and be at best marked as erroneous or at worst land on a different level and thus yield a different digital value.
All this to say that QLC has less endurance (i.e. after fewer writes, the damage to the cells from use causes what is read to not be the same value as what was written) and it also has less retention (i.e. if the cell is not powered, signal decay will more quickly cause stored values to end up at a different level than when written).
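Here's a rough back-of-the-envelope sketch of the "more levels, narrower margin" point; the 2.0V window and the 0.12V drift are made-up numbers purely for illustration, not real NAND parameters:

```python
# Back-of-the-envelope model: a cell stores one of 2^bits evenly spaced
# voltages in a fixed window, and is decoded by picking the nearest level.
# The window size and drift value are invented for illustration only.
V_WINDOW = 2.0

def level_spacing(bits_per_cell: int) -> float:
    return V_WINDOW / (2 ** bits_per_cell - 1)

def decode(voltage: float, bits_per_cell: int) -> int:
    return round(voltage / level_spacing(bits_per_cell))  # nearest level wins

drift = 0.12  # same charge loss (decay or wear) applied to both cell types
for name, bits in (("TLC", 3), ("QLC", 4)):
    stored = 5
    v_read = stored * level_spacing(bits) - drift
    ok = decode(v_read, bits) == stored
    print(f"{name}: spacing {level_spacing(bits):.3f}V -> "
          f"level {stored} read back {'correctly' if ok else 'WRONG'}")
```

With the same amount of drift, the TLC cell still decodes to the level that was written, while the QLC cell's narrower spacing means the reading falls closer to a neighbouring level and decodes wrong - which is the endurance/retention difference in a nutshell.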
Now, for powered systems the retention problem is not much of an issue for cloud storage: when powered, the system automatically goes through each cell, reading its value and writing it back to refresh what's stored there to the mathematically perfect analog value, at the cost of only slightly higher power consumption over time for data that's mostly read-only (for flash memory, writing uses way more power than reading). The endurance problem, however, is much worse for QLC, because the cells will age twice as fast as TLC cells for data that is frequently written (wear-leveling exists to spread this effect over all cells, giving higher overall endurance, but wear-leveling also exists for TLC, so it does not close QLC's endurance gap).
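The refresh side is conceptually just a scrub pass over the drive, something like the sketch below (the Drive class and its block API are stand-ins invented for the example; real controllers do this in firmware, combined with ECC and wear-leveling):

```python
# Conceptual scrub/refresh pass: read every block and write it back so the
# stored charge levels are restored to their nominal values. The Drive class
# is a stand-in for illustration, not a real controller API.
class Drive:
    def __init__(self, num_blocks: int, block_size: int = 4096):
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def read_block(self, i: int) -> bytes:
        return self.blocks[i]

    def write_block(self, i: int, data: bytes) -> None:
        self.blocks[i] = data

def scrub(drive: Drive) -> None:
    """Refresh every block by reading it and writing it straight back."""
    for i in range(len(drive.blocks)):
        data = drive.read_block(i)   # reading is cheap
        drive.write_block(i, data)   # writing back is what costs the extra power

scrub(Drive(num_blocks=1024))
```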
First outrageous DDR5 RAM prices, now SSDs.
Welp. Won't be upgrading my PC for the next few years, I see.
and hard drives too, right?