this post was submitted on 24 Feb 2024
214 points (95.7% liked)
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ
you are viewing a single comment's thread
This. It sounds really odd to me that the GPU would somehow make what are basically just math calculations "different" from what the CPU would do.
GPU encoders basically all run at the equivalent of "fast" or "veryfast" CPU encoder settings.
Most high-quality, low-size encodes are run at the "slow", "veryslow", or "placebo" CPU encoder settings, with many of the parameters that aren't tunable on GPU encoders set to specific tunings depending on the content type.
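For concreteness, here's roughly what one of those high-quality CPU encodes looks like on the command line. This is a sketch, not from the thread: the filenames, CRF value, and tune are illustrative, though `-preset`, `-tune`, and `-crf` are real ffmpeg/x264 options.

```python
# Sketch of an ffmpeg libx264 (software/CPU) invocation at a slow preset.
# The flags are real ffmpeg/x264 options; filenames and values are examples.
cmd = [
    "ffmpeg",
    "-i", "input.mkv",      # source file (hypothetical)
    "-c:v", "libx264",      # CPU (software) H.264 encoder
    "-preset", "veryslow",  # spend far more CPU time per frame
    "-tune", "film",        # content-type tuning (film, animation, grain, ...)
    "-crf", "18",           # constant-quality target; lower = higher quality
    "output.mkv",
]
print(" ".join(cmd))
```

The `-tune` flag is where the content-type-specific tunings mentioned above come in; GPU encoders expose far fewer knobs at this level.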
NVENC has a slow preset:
https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html#command-line-for-latency-tolerant-high-quality-transcoding
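For comparison, a sketch of what the "high quality" NVENC path from that SDK page looks like through ffmpeg. The flag names (`-preset p1`..`p7`, `-tune hq`, `-rc`, `-multipass`) are real ffmpeg NVENC options to the best of my knowledge; the filenames and bitrate-free setup are illustrative, not taken from the linked page.

```python
# Sketch of an ffmpeg NVENC invocation at its slowest preset tier.
# -preset p1..p7 maps fastest..slowest; -tune hq favors quality over latency.
nvenc_cmd = [
    "ffmpeg",
    "-i", "input.mkv",          # source file (hypothetical)
    "-c:v", "hevc_nvenc",       # NVIDIA fixed-function HEVC encoder
    "-preset", "p7",            # slowest / highest-quality NVENC preset
    "-tune", "hq",              # high-quality tuning
    "-rc", "vbr",               # variable-bitrate rate control
    "-multipass", "fullres",    # two-pass motion estimation at full resolution
    "output.mkv",
]
print(" ".join(nvenc_cmd))
```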
As NVIDIA expands the NVENC options exposed on the command line, is it getting closer to CPU-encoding levels of quality?
So GPU encoding isn't using the GPU cores. It's using separate fixed-function hardware, which supports far fewer operations than a CPU does. They're not running the same code.
But even if you did compare GPU cores to CPU cores, they're not the same. GPUs also have a different set of operations from a CPU, because they're designed for different things. GPUs bundle a bunch of "cores" under one control unit; they all execute the exact same operation at the same time, and have significantly less capability beyond that. Code that diverges a lot, especially if there's no easy way to restructure the data so that all 32 cores under a control unit* branch the same way, can easily fail to benefit from that parallelism.
As architectures get more complex, GPUs are adding features that don't yet have great analogues on a CPU, and CPUs are gaining more options for applying the same operation to (smaller) sets of data points. But at the end of the day, the answer to your question is that they aren't doing the same math, and because of the limitations on the kind of math GPUs are best at, nobody has much incentive to build a software encoder that leverages the GPU cores for acceleration.
*Last I checked, that's what a warp on Nvidia cards is: 32 cores sharing a control unit. It could change if there's ever a reason to.
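The divergence cost can be sketched with a toy SIMT model (the model and names here are mine, not from the comment): when lanes in a 32-wide warp take different branch paths, the hardware runs each taken path serially with the other lanes masked off, so two paths cost roughly two passes.

```python
# Toy model of SIMT branch divergence: a warp of 32 lanes needs one
# serialized execution pass per *distinct* branch target it takes.
WARP_SIZE = 32

def passes_needed(branch_taken_by_lane):
    """Serialized passes for one branch point, given each lane's target."""
    assert len(branch_taken_by_lane) == WARP_SIZE
    return len(set(branch_taken_by_lane))

# All 32 lanes branch the same way: one pass, full SIMD benefit.
uniform = [0] * WARP_SIZE
# Lanes alternate between two paths: both paths run, half the lanes idle each time.
diverged = [i % 2 for i in range(WARP_SIZE)]

print(passes_needed(uniform))   # 1 pass
print(passes_needed(diverged))  # 2 passes, so ~half the effective throughput
```

Data-dependent branching like this is everywhere in the slow-preset encoder search loops, which is part of why they map poorly onto GPU cores.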
Every encoder does different math. Different software, and different profiles within the same software, do different math calculations too.