xodoh74984

joined 2 years ago
[–] xodoh74984@lemmy.world 182 points 3 weeks ago

I believe it should be all over the media to ensure that it never passes. Democracy dies in darkness. Name and shame those who supported it.

[–] xodoh74984@lemmy.world 15 points 1 month ago

"... and we pride ourselves in building on our strong record of protecting people’s privacy."

The defense has destroyed any and all credibility with that single clause.

[–] xodoh74984@lemmy.world 1 point 2 months ago

It's a step in the right direction, but reserving the right to sue companies that collect and share our most sensitive personal information and whereabouts is not enough. To those companies, a lawsuit is just a cost of doing business, weighed against the potential for profit. This line of thinking is now taught in business schools.

Nothing will change materially until the executives are faced with the potential of jail time.

[–] xodoh74984@lemmy.world 1 point 2 months ago (1 children)

NewPipe does this for free

[–] xodoh74984@lemmy.world 221 points 2 months ago (2 children)

Remember that brief period in the US where, for a fleeting moment, Lina Khan went after a few companies for monopolistic practices?

[–] xodoh74984@lemmy.world 19 points 2 months ago
if battery < 10:
    price *= 2

Many AI. Much wow.

[–] xodoh74984@lemmy.world 2 points 3 months ago

Sorry for the slow reply, but I'll piggyback on this thread to say that I tend to target models a little bit smaller than my total VRAM to leave room for a larger context window, without any offloading to RAM.

As an example, with 24 GB VRAM (Nvidia 4090) I can typically get a 32b parameter model with 4-bit quantization to run with 40,000 tokens of context all on GPU at around 40 tokens/sec.
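For anyone curious how that pencils out, here's a back-of-the-envelope VRAM estimate in Python. The architecture numbers are assumptions on my part (a Qwen2.5-32B-like layout: 64 layers, 8 KV heads, head dim 128), not something stated above:

params = 32e9
weights_gb = params * 0.5 / 1e9  # 4-bit quant ~0.5 bytes/param -> ~16 GB

# KV cache: K and V, per layer, per KV head, per head dim, per token
n_layers, n_kv_heads, head_dim, ctx = 64, 8, 128, 40_000
kv_gb = 2 * n_layers * n_kv_heads * head_dim * ctx * 2 / 1e9  # fp16 -> ~10.5 GB

print(f"weights ~{weights_gb:.0f} GB, fp16 KV cache ~{kv_gb:.1f} GB")

At fp16 the KV cache plus weights would overshoot 24 GB, so in practice fitting the full 40k context also means quantizing the KV cache (q8_0 roughly halves it to ~5 GB, bringing the total to ~21 GB plus overhead).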

[–] xodoh74984@lemmy.world 11 points 3 months ago* (last edited 3 months ago) (9 children)

I use open source 32b Chinese models almost exclusively, because I can run them on my own machine without being a data cow for the US tech oligarchs or the CCP.

I only use the larger models for little hobby projects, and I don't care too much about who gets that data. But if I wanted to use the large models for something sensitive, the open source Chinese models are the more secure option IMO. Rather than get a "trust me bro" pinky promise from Closed AI or Anthropic, I can run Qwen or Kimi on a cloud GPU provider that offers raw compute by the hour without any data harvesting.
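Concretely, the "raw compute" setup is just pointing an OpenAI-compatible client at a box you control. A minimal sketch, assuming you've rented a GPU and started a vLLM server on it yourself (e.g. vllm serve Qwen/Qwen2.5-32B-Instruct; the host below is a placeholder):

from openai import OpenAI

# The endpoint is a server you launched; no third party sits between
# you and the model.
client = OpenAI(
    base_url="http://your-rented-gpu:8000/v1",  # placeholder address
    api_key="unused-for-self-hosted",  # vLLM needs no real key unless you set one
)

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-32B-Instruct",
    messages=[{"role": "user", "content": "Hello from my own endpoint"}],
)
print(resp.choices[0].message.content)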

[–] xodoh74984@lemmy.world 17 points 3 months ago* (last edited 3 months ago)

For some reason, at first read I actually interpreted it as callous to the suffering women endure. But yeah, there's very much an element of, "The stakes are higher for women, so they can deal with the side effects," which is awful.

[–] xodoh74984@lemmy.world 25 points 3 months ago (3 children)

At first read that came off as callous, but I see your point. I had the same thought about improving female birth control. Where's the research into a hormone-free pill for women?
