Samsung: We make fun of Apple until we copy them outright.
See also: removing ports, having a notch
Let Apple take the flak for moving the market, then quietly copy, because of course it's more lucrative... classic.
Apple was something like the third phone maker with a notch. That's actually Essential's claim to fame.
And Motorola had true wireless earbuds earlier, etc.
Apple is about polish, not novelty, but a ton of people are obsessed with the idea of Apple as being "groundbreakers" everywhere.
ARM is great on Linux, where almost everything has an ARM version, and for Apple, which can simply mandate that everyone support it. But where are you going to find Windows programs compiled for ARM?
The only reason Windows is still relevant is a massive volume of legacy x86 applications.
If that laptop won't support x86 emulation, it'd actually be worse than a Linux ARM laptop.
That's one thing macOS does well: legacy support, at least for x64.
for now…
I have been running Windows 10 and 11 on ARM for years now, and Windows Server 2025 already has an ARM preview release. Windows on ARM has had x86 emulation for a long time, and has supported x64 emulation since about the start of COVID.
Is it actually emulation? Macs don't do that.
They convert the x86 code into native ARM code, then execute it. Recompiling the software takes a moment, and some CPU instructions don't have a good equivalent, but for the most part it works very well.
macOS uses the term "translation" for its Rosetta layer, while Windows on ARM uses the term "emulation". I believe the technical difference is that macOS converts x64 code to arm64 on the fly, while part of the reason for "emulation" on Windows is to support x86 and other architectures. Someone more knowledgeable than me may be able to better compare the two offerings.
macOS converts x86 code to ARM ahead of launching an app, and then caches the translation. It adds a tiny delay to the first time you launch an x86 app on ARM. It also does on-the-fly translation if needed, for applications that do code generation at runtime (such as scripting languages with JIT compilers).
The biggest difference is that Apple has added support for an x86-like strong memory model to their ARM chips. ARM has a weak memory model. Translating code written for a strong memory model to run on a CPU with a weak memory model absolutely kills performance (see my other comment above for details).
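The "translate ahead of launch, then cache" pattern described above can be sketched in a few lines of C++. This is a toy illustration, not how Rosetta is actually implemented (the real thing operates on machine code and caches translations on disk, not in an in-memory string map); all names here are made up:

```cpp
#include <string>
#include <unordered_map>

// Toy sketch of the "translate once, cache, reuse" pattern.
std::unordered_map<std::string, std::string> translation_cache;

// Stand-in for a real x86 -> ARM translator.
std::string translate(const std::string& x86_code) {
    return "arm64(" + x86_code + ")";
}

// First launch pays the translation cost; later launches reuse the cache,
// which is why only the first start of an x86 app feels slower.
const std::string& load_translated(const std::string& x86_code) {
    auto it = translation_cache.find(x86_code);
    if (it == translation_cache.end()) {
        it = translation_cache.emplace(x86_code, translate(x86_code)).first;
    }
    return it->second;
}
```

The on-the-fly path for JIT-generated code would bypass the cache entirely, translating fresh code as it appears at runtime.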
They did a good job when moving from OS 9 to OS X, too. Adobe took a looong time to move to OS X.
Any program written for the .net clr ought to just run out of the box. There’s also an x64 to ARM translation layer that works much like Apple’s Rosetta. It will run the binary through a translation and execute that. I have one of the windows arm dev units. It works relatively well except on some games from my limited experience.
Any program written for the .net clr ought to just run out of the box.
Both of them?
There’s also an x64 to ARM translation layer that works much like Apple’s Rosetta.
Except for the performance bit.
ARM processors use a weak memory model, whereas x86 uses a strong memory model. That means x86 guarantees the actual order of writes to memory matches the order in which those writes execute, while ARM is allowed to reorder them.
Usually the order in which data is written to RAM doesn't matter, and allowing writes to be reordered can boost performance. When it does matter, a developer can insert a so-called memory barrier, which ensures all writes before the barrier are finished before the code continues.
However, since this isn't necessary on x86, where all writes are ordered, x86 code doesn't include these memory barrier instructions at the spots where write order actually matters. So when translating x86 code to ARM code, you have to assume write order always matters, because you can't tell the difference. That means inserting a memory barrier after every write in the translated code, which absolutely kills performance.
Apple includes a special mode in their ARM chips, only used by Rosetta, that enables an x86-like strong memory model. This means Rosetta can translate x86 to ARM without inserting those performance-killing memory barriers. Unless Qualcomm added a similar mode (and AFAIK they did not) and Microsoft added support for it in their emulator, performance of translated x86 code is going to be nothing like that of Rosetta.
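The barrier situation described above is visible in portable C++. On x86 the two stores below are already ordered by the hardware; on ARM it's the `release`/`acquire` pair that makes the compiler emit barrier instructions. A translator that can't tell which stores need this has to treat every store like the release below. A minimal sketch, with names of my own invention:

```cpp
#include <atomic>
#include <thread>

// Shared state: plain data plus an atomic flag signaling "data is ready".
int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                    // plain (non-atomic) write
    ready.store(true, std::memory_order_release); // release barrier: the data
                                                  // write must be visible first
}

int consumer() {
    while (!ready.load(std::memory_order_acquire)) {
        // acquire barrier: once we observe ready == true, we are guaranteed
        // to also see every write the producer made before its release
    }
    return data; // always 42, thanks to the acquire/release pairing
}

int run_demo() {
    std::thread t(producer);
    int seen = consumer();
    t.join();
    return seen;
}
```

With `memory_order_relaxed` instead, the read of `data` could legally observe 0 on a weakly ordered CPU, which is exactly the bug class the inserted barriers (or Apple's TSO mode) exist to prevent.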
The biggest advantage Apple has is they’ve been breaking legacy compatibility every couple years, training devs to write more portable code and setting a consumer expectation of change. I can’t imagine how the emulator will cope with 32bit software written for the Pentium II.
Qualcomm has a pretty fast emulator for the growing pains, and Microsoft offers ARM versions of most of their software.
But many open source projects could be cross-compiled; it wouldn't take long if these things start selling.
I don't know what these chips are like, but x86 software runs perfectly on my ARM Mac. And not just small apps either, I'm running full x86 Linux servers in virtual machines.
There is a measurable performance penalty but it's not a noticeable one with any of the software I use... ultimately it just means the CPU sits on 0.2 load instead of 0.1 load and if it spikes to 100% then it's only for a split second.
I think Qualcomm is probably charging far too much for the SoC. Their pricing has been super high for years because they know nobody is matching their performance in the mobile space. Not sure how much of it is the smaller process nodes, too.
Isn't that a bubble? Phones are 10x more performant than they need to be anyway. Not like in gaming/server market where it's always too slow, no matter how fast.
My phone is now 7 years old and it still works perfectly. Maybe not for the newest of the newest games, but I don't care for games on my phone anyway. And the amazing contributors keeping LineageOS up to date for my phone model mean I don't need a newer phone :)
People said that 10 years ago, and all those phones are barely usable now.
Yeah, it's honestly quite impressive. Software developers have managed to take orders of magnitude of hardware improvements over the years and keep pace, ensuring that software still runs like complete utter garbage.
My understanding is that Apple had bought up all of TSMC's 3nm capacity in 2023. That exclusivity may be up now, which would explain why Qualcomm is selling chips based on 3nm. Looks like they are working with both Samsung and TSMC on this chip. This article is bizarre in that it underplays the reason someone would buy this laptop: long battery life, low heat, and high performance in a thin-and-light chassis are very valuable. Not everyone wants to play games. Will be interesting to see if Microsoft delivers.
If it is not cheaper than x86 then people will just keep buying x86 computers.
If power consumption is lower, that means you can have more compact cooling. There are a lot of people who would pay a premium for longer-lasting and lighter laptops, myself included.
meanwhile the long gone RISC hype train
Do you mean RISC-V?
Yeah, I was waiting for that. Did they ever have any plan to do so?
ARM is RISC (or at least a version of it).
I won't write them off before I've owned one. I imagine they could be good for things like battery life, but I'm not sure if they'd be an improvement over other chips like Ryzen APUs.
Will be curious to see the advantage and disadvantages.
Samsung uses their competitor's chips? Kinda weird to see.
What? How is Qualcomm competing with Samsung?
Apple uses Samsung hardware, btw.
Samsung makes Exynos chips, which are ARM. But Samsung even uses Qualcomm in their phones in other regions, so it's not unusual.
Exynos chips are subpar compared to Qualcomm's ARM chips, or at least they were not so long ago.
Not sure what you mean, they've always used Snapdragons? The S23 from 2023 uses one, and the S3 from 2012 uses them in some models, and most galaxies between those do as well.
I know Windows does x64-to-ARM translation decently, but does the chip also feature special hardware functionality to aid this, like the M chips have (TSO, for example)?