balsoft@lemmy.ml 22 points 4 days ago (edited)

I understand Unicode and its various encodings (UTF-8, UTF-16, UTF-32) fairly well. UTF-8 is backwards-compatible with ASCII and only uses extra bytes for characters outside the 0x00–0x7F range. E.g. this comment I'm writing is simultaneously valid UTF-8 and valid ASCII.
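
For instance, a quick Python sketch of that point (my own illustration of the encoding sizes, not anything from the thread):

```python
# Pure-ASCII text encodes to identical bytes as ASCII and as UTF-8.
s = "this comment is plain ASCII"
assert s.encode("ascii") == s.encode("utf-8")

# Only characters beyond U+007F cost extra bytes in UTF-8.
print(len("e".encode("utf-8")))   # 1 byte
print(len("é".encode("utf-8")))   # 2 bytes
print(len("€".encode("utf-8")))   # 3 bytes
print(len("🐧".encode("utf-8")))  # 4 bytes
```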

I'd like to see some good evidence for the claim that Unicode support increases memory usage so drastically, especially given that most data in RAM is typically something other than encoded text (e.g. videos, photos, internal program state).

frezik@lemmy.blahaj.zone 12 points 4 days ago

It's not so much the character length of any specific encoding. It's all the details that go into supporting it. Can't assume text is read left to right. Can't assume case insensitivity works the same way as in your language. Can't assume the shape of a glyph won't be affected by the glyph next to it, or by a glyph five positions away.

Pile up millions of these little assumptions you can no longer make in order to support every written language ever. It gets complicated.
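
To make a couple of those concrete, here's a minimal Python sketch (standard library only; the specific characters are just illustrative):

```python
import unicodedata

# Case-insensitive matching differs by language: German "ß" case-folds
# to "ss", so a naive lower() comparison misses the match.
print("MASSE".lower() == "maße".lower())        # False
print("MASSE".casefold() == "maße".casefold())  # True

# Glyph shape depends on neighbours: the same Arabic letter has distinct
# isolated/initial/medial/final presentation forms.
print(unicodedata.name("\uFE8D"))  # ARABIC LETTER ALEF ISOLATED FORM
print(unicodedata.name("\uFE8E"))  # ARABIC LETTER ALEF FINAL FORM
```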

The_Decryptor@aussie.zone 1 point 4 days ago

Yeah, but that's still not a lot of data; LTR/RTL shouldn't vary within a given script, so the values will be shared across an entire range of characters.
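
That matches what Python's unicodedata reports (a quick sketch; "directionality" here is the per-character bidi class):

```python
import unicodedata

# Bidi class is a per-character property, but it is uniform across a
# script's range: Hebrew letters report 'R', Latin letters report 'L'.
print([unicodedata.bidirectional(c) for c in "אבג"])  # ['R', 'R', 'R']
print([unicodedata.bidirectional(c) for c in "abc"])  # ['L', 'L', 'L']
```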