Thanks for the suggestion – I'll have to give the ungoogled version a try.
antimidas
Yep, I don't think the A55 is the culprit either – just outlined the reasoning behind that. Sometimes pairing also gets things wrong which leads to the headphones using an older protocol version.
But that doesn't seem to be the case as it's using SSC, at this point I'd also just guess it's a bad battery. You can try pairing them again but I wouldn't be surprised if it doesn't help. Still, couldn't really hurt to try.
But I've previously encountered multiple cases of people complaining that their headphones couldn't match the advertised battery life – and the reason ended up being either too old a phone (lacking the newer Bluetooth versions and codecs), or some bug in pairing leading to the wrong codec and/or protocol being used.
A couple of things I can think of that could lead to this sort of behavior:
- Using them in a cold environment, like -20 to -30 degrees Celsius.
- Headphones dropping down to an older standard for some reason and connecting e.g. via BT 4.x, or using an incorrect codec (would explain why one of them drains so much faster) – should be possible to check in the BT settings
- The headphones just have a bad battery – I've run into multiple pairs that simply turn off once the battery reaches 50-60 %, especially when it's cold out
But these are just suggestions and speculation, I'm not really an expert on the subject.
Good link that, I'll have to add those flags to my list of aliases
The more frustrated you are when running git blame
the more likely the command turns out to be a mirror.
'rent > rent
The cursed Linux alternative to this is usually putting things directly in the home folder – I used to do this until I got better. The desktop is easy to keep clean when your "desktop environment" doesn't have one by default.
Some people who used Mac OS before OS X dump everything into the root filesystem out of habit. It works about as poorly as a file-management strategy as one might expect, albeit better than putting everything on the desktop. Not sure how common that is, but I've known multiple people who do it.
There's an overabundance of competent-ish frontend developers. You most likely need to pay the devs less compared to someone writing it in e.g. C++, and finding people with relevant experience takes less time. You also get things like a ready-made sandbox and the ability to reuse UI components from other web services, which simplifies application development. So my guess is that this is done to save money.
Also, the more things are running in an embedded browser, the more reasons M$ has to bake Edge into the OS without raising eyebrows as to why they're providing it as a default (look, it's a system tool as well, not just a browser).
Per-text and per-minute plans were the norm here for a long time; I had one until the mid-2010s IIRC. A single text cost something like 0.069 €. Parents kept their kids from overspending with prepaid plans, which were the norm for elementary-school students. In Europe people typically don't pay to receive calls, so your parents could still call you even if you ran out of phone credit.
We got unlimited data plans before widespread unlimited texting, which meant people mostly stopped texting by the early 2010s. I remember my phone plan getting unlimited 3G in 2010 for 0.99 €/month (approx. 1.40 $ back then), albeit slow AF (256 kbps). Most people switched to e.g. Kik, or later WhatsApp, after that.
Probably varies a lot based on where you grew up. I got my first phone when I was 9, in 2006, and was among the last in my class to get one. Phone plans were really cheap in Finland by then, partially due to the largest phone manufacturer at the time, Nokia, being Finnish, and our telecom operators being in tight competition. (We have three separate carriers with country-wide networks, as was the case back in the early 2000s as well.)
I'd say the turning point here was 2003, when Nokia launched the model 1100, which was dirt cheap. I vaguely remember the price eventually falling as low as 19 € on sale, at which point the phone cost about the same as a typical monthly phone plan.
TL;DR: looks like you're right, although Chrome shouldn't be struggling with that number of hosts to chug through. This ended up being an interesting rabbit hole.
My home network already uses unbound with a proper blocklist configured, but I can't use the same setup directly with my work computer as the VPN sets its own DNS. I can only override this with a local resolver on the work laptop, and I'd really like to get by with just `systemd-resolved` instead of having to add `dnsmasq` or similar for this. None of the other tools I use struggle with this setup, as they use the system IP stack.

Might well be that Chromium has a somewhat more sophisticated network stack (rather than just using the system-provided libraries), and I remember the docs indicating something about that being the case. Either way, it's not like the code is (or should be) paging through the whole file every time there's a query – either it forwards the query to another resolver or resolves it locally, but in both cases there will be a cache. That cache will then end up holding the queried domains in order of access, after which having a long `/etc/hosts` won't matter. The worst case after paging in the hosts file initially is 3-5 ms (per query) for comparing through the 100k-700k lines before hitting a wall, and that only needs to happen once regardless of where the actual resolving takes place. At a glance, the Chrome net stack should cache queries hitting the hosts file as well. So at the very least it doesn't make sense for it to struggle for 5-10 seconds on every consecutive refresh of the page with a warm DNS cache in memory...

...or that's how it should happen. Your comment inspired me to test it a bit more, and lo: after trying out a hosts file with 10 000 000 bogus entries, Chrome was brought completely to its knees. However, that number of string comparisons is absolutely nothing in practice – Python, with its slow interpreter, manages comparing against every row in 300 ms, and a crude C implementation manages it in 23 ms (approx. 2 ms with 1 million rows; both are a lot more rows than what I have appended to my hosts file). So the file being long should have nothing to do with it unless there's something very wrong with the implementation. Comparing against `/etc/hosts` should be cheap, as it doesn't support wildcard entries – the comparisons are just a simple 1:1 check against the first matching row. I'll continue investigating and see if there's a quick change to be made in how the hosts are read in. Fixing this shouldn't cause any issues for other use cases from what I can see.

For reference, if you want to check the performance for 10 million comparisons on your own hardware:
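Something along these lines – a minimal Python sketch of the worst case (the hostnames are made up for the benchmark, and exact timings will obviously vary with hardware):

```python
# Rough benchmark for the linear-scan worst case: one DNS lookup key
# compared against 10 million hostnames, mimicking a full scan of an
# absurdly large /etc/hosts. The entries are bogus, generated names.
# Note: building the list takes on the order of 1 GB of RAM.
import time

N = 10_000_000
hosts = [f"blocked-{i}.example.com" for i in range(N)]
needle = "not-in-the-list.example.com"  # worst case: no match anywhere

t0 = time.perf_counter()
found = False
for h in hosts:  # simple 1:1 comparison per row, no wildcards
    if h == needle:
        found = True
        break
elapsed_ms = (time.perf_counter() - t0) * 1000

print(f"scanned {N:,} entries in {elapsed_ms:.0f} ms (match: {found})")
```

Even this naive scan should land in the hundreds-of-milliseconds range on typical hardware, i.e. nowhere near the multi-second stalls Chrome shows.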