this post was submitted on 17 Oct 2024
1368 points (98.9% liked)

RetroGaming

[–] Wilzax@lemmy.world 137 points 2 months ago (3 children)

Your game will actually likely be more efficient if written in C. GCC has become ridiculously good at optimizing and probably knows more tricks than you do.

[–] dejected_warp_core@lemmy.world 46 points 2 months ago (1 children)

Especially these days. Current-gen x86 architecture has all kinds of insane optimizations and special instruction sets that the Pentium I never had (e.g. SSE). You really do need a higher-level compiler at your back to make the most of it these days. And even then, there are cases where you have to resort to inline ASM or processor-specific intrinsics to optimize to the level that Roller Coaster Tycoon is/was. (original system specs)

[–] kuberoot@discuss.tchncs.de 2 points 2 months ago (2 children)

I might be wrong, but doesn't SSE require you to explicitly use it in C/C++? Laying out your data as arrays and specifically calling the SIMD operations on them?

[–] acockworkorange@mander.xyz 5 points 2 months ago (2 children)

There’s absolutely nothing you can do in C that you can’t also do in assembly. Because assembly is just the bunch of bits that the compiler generates.

That said, you’d have to be insane to write a game featuring SIMD instructions these days in assembly.

[–] Buddahriffic@lemmy.world 4 points 2 months ago (2 children)

I think they meant the other way around: that if you wanted to use it from C/C++, you'd have to either use assembly or some SSE-specific construct; otherwise the compiler wouldn't bother.

That probably was the case at one point, but I'd be surprised if it's still the case. Though maybe that's part of the reason why the Intel compiler can generate faster code. But I suspect it's more of a case of better optimization by people who have a better understanding of how it works under the hood, and maybe better utilization of newer instruction set extensions.

SSE has been around for a long time and is present in most (all?) x86 chips these days (SSE2 is even part of the x86-64 baseline), and I'd be very surprised if gcc and other popular compilers didn't use it effectively today. Some of the newer extensions might be a different story though.

[–] calcopiritus@lemmy.world 3 points 2 months ago

If you want to use instructions from an extension (for example SIMD), you either provide two versions of the function, or your program just won't run on some CPUs. It would be weird for someone who doesn't know about that to compile for x86 and then have the binary not run on another x86 machine. I don't think compilers use those instructions unless you tell them to.

Anyway, the SIMD that compilers will do on their own is nowhere near what's possible. If you manually use SIMD intrinsics or inline SIMD assembly, chances are it will be faster than what the compiler would do, especially because you are narrowing the set of CPUs your program can run on.

[–] acockworkorange@mander.xyz 2 points 2 months ago

Oh I see your point. Yeah, I think they meant that. And yes, there was a time you’d have to do trickery in C to force the use of SSE or whatever extensions you wanted to use.

[–] Wilzax@lemmy.world 3 points 2 months ago (1 children)

Technically, assembly is a human-readable, paper-thin abstraction over the machine code. Beyond mnemonics for the raw opcodes, it really only implements one additional feature, and that's labels, which save you from recomputing jump and goto targets EVERY TIME you refactor upstream code to a different number of instructions.

So not strictly the bunch of bits. But very close to it.

[–] acockworkorange@mander.xyz 1 points 2 months ago

Technically correct. The best kind of correct.

[–] dejected_warp_core@lemmy.world 1 points 2 months ago

Honestly, I'm not 100% sure. I would bet that a modern compiler would just "do the right thing" but I've never written code in such a high performance fashion before.

[–] Valmond@lemmy.world 21 points 2 months ago (1 children)

Yep but not if you write sloppy C code. Gotta keep those nuts and bolts tight!

[–] Wilzax@lemmy.world 30 points 2 months ago (1 children)

If you're writing sloppy C code your assembly code probably won't work either

[–] 0x0@lemmy.dbzer0.com 12 points 2 months ago (3 children)

Except everyone writing C is writing sloppy C. It's like driving a car: there's always a non-zero chance of an accident.

Even worse, in C the compiler is just waiting for you to trip up so it can do something weird. Think the risk of UB is overblown? I found this article from Raymond Chen enlightening: https://devblogs.microsoft.com/oldnewthing/20140627-00/?p=633

[–] calcopiritus@lemmy.world 2 points 2 months ago

I recently came across a Rust book discussing how pointers aren't just ints, because of UB. Translated to C, the example was something like:

#include <stdio.h>

int main(void) {
    int x = 1, y = 2;
    int *a = &x;
    int *b = &y;
    a++;               /* a now points one past x */
    if (a == b) {      /* may compare equal if y sits right after x */
        *a = 3;
        printf("%d\n", *b);
    }
}

This may either print nothing, print 3, or print 2.

Since b is never written through as far as the compiler can see, it might optimize printf("%d\n", *b) into printf("%d\n", 2). Everyone would agree it should either print nothing or print 3, but never 2 — yet 2 is a legal outcome, because writing through a one-past-the-end pointer is UB.

[–] UpperBroccoli@lemmy.blahaj.zone 1 points 2 months ago

I hope you are not arguing that using assembly is an improvement over using C in that regard...

[–] Buddahriffic@lemmy.world 1 points 2 months ago (1 children)

A compiler making assumptions like that about undefined behaviour sounds like a bug to me. Maybe the bug is in the spec rather than the compiler, but I can't think of any case where it's better to optimize the code out entirely when UB is detected, rather than just emitting an error or warning and otherwise ignoring the edge cases where the behaviour might break. It sounds like the worst possible option, for exactly the reasons listed in that blog.

[–] calcopiritus@lemmy.world 2 points 2 months ago

The thing about UB is that many optimizations are only possible because the spec deliberately leaves those cases undefined; enabling those optimizations is the whole reason it's specified as UB in the first place.

Codebases are not 6 lines long, they are hundreds of thousands. Without optimizations like those, many CPU cycles would be lost to unnecessary code being executed.

If you write C/C++, it is because you either hate yourself or the application's performance is important, and these optimizations are needed.

The reason Rust is so impressive nowadays is that you can write high-performing code without risking accidental UB. And if you are going to write code that might result in UB, you have to explicitly say so with unsafe. But in C/C++ there's no saving you: if you want your compiler to optimize code in those languages, you are going to have loaded guns pointed at your feet all the time.