this post was submitted on 27 Jan 2025
652 points (97.7% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


[–] NutWrench@lemmy.ml 10 points 11 hours ago

The "1 trillion" never existed in the first place. It was all hype by a bunch of Tech-Bros, huffing each other's farts.

[–] skuzz@discuss.tchncs.de 80 points 1 day ago

Almost like yet again the tech industry is run by lemming CEOs chasing the latest moss to eat.

[–] MetalMachine@feddit.nl 63 points 1 day ago (5 children)

The best part is that it's open source and available for download

[–] CeeBee_Eh@lemmy.world 8 points 1 day ago (2 children)

I asked it about Tiananmen Square, and it told me it can't answer that because it can only give "harmless" responses.

[–] MetalMachine@feddit.nl 24 points 1 day ago (3 children)

Yes, the online model has those filters. Someone tried it with one of the downloaded models and it answered just fine.

[–] jaschen@lemm.ee 1 points 5 hours ago

You misspelled "lies". Or were you trying to type "psyops tool"??

[–] Ascend910@lemmy.ml 5 points 23 hours ago (1 children)

When running locally, it works just fine without filters

[–] jaschen@lemm.ee 1 points 5 hours ago

I tried the smaller models and it's not fine. It's hard coded.

[–] CeeBee_Eh@lemmy.world 2 points 20 hours ago (1 children)

This was a local instance.

[–] apprehensively_human@lemmy.ca 2 points 11 hours ago

Does the same thing on my local instance.

[–] Phoenicianpirate@lemm.ee 23 points 1 day ago (8 children)

So can I have a private version of it that doesn't tell everyone about me and my questions?

[–] SpaceRanger@lemmy.world 25 points 1 day ago (1 children)
[–] Phoenicianpirate@lemm.ee 1 points 18 hours ago

Thank you very much. I did ask chatGPT some technical questions about some... subjects... but having something that is private AND can give me all the information I want/need is a godsend.

Goodbye, chatGPT! I barely used you, but that is a good thing.
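For anyone else wanting the same private setup: a minimal sketch of running a distilled DeepSeek-R1 variant fully offline with Ollama. The specific model tag is an assumption; check Ollama's model library for the current names and pick a size your hardware can handle.

```shell
# Pull a distilled DeepSeek-R1 variant small enough for consumer hardware
# (the tag "deepseek-r1:7b" is an assumption -- check the Ollama model library)
ollama pull deepseek-r1:7b

# Chat locally; prompts and answers never leave your machine
ollama run deepseek-r1:7b "Summarize mixture-of-experts in two sentences"
```

The distilled variants (1.5B to 70B parameters) are what most people can actually run at home; the full 671B model needs datacenter-class hardware.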

[–] Mongostein@lemmy.ca 5 points 1 day ago (1 children)

Yeah, but you have to run a different model if you want accurate info about China.

[–] Phoenicianpirate@lemm.ee 4 points 18 hours ago (1 children)

Yeah but China isn't my main concern right now. I got plenty of questions to ask and knowledge to seek and I would rather not be broadcasting that stuff to a bunch of busybody jackasses.

[–] jaschen@lemm.ee 6 points 1 day ago

Yes but your server can't handle the biggest LLM.

[–] SocialMediaRefugee@lemmy.ml 57 points 1 day ago (10 children)

This just shows how speculative the whole AI obsession has been. Wildly unstable and subject to huge shifts since its value isn't based on anything solid.

[–] 5in1k@lemm.ee 86 points 1 day ago (6 children)

The economy rests on a fucking chatbot. This future sucks.

[–] Cowbee@lemmy.ml 25 points 1 day ago (2 children)

On the bright side, the clear fragility and lack of direct connection to real productive forces shows the instability of the present system.

[–] PlutoniumAcid@lemmy.world 20 points 1 day ago (5 children)

So if the Chinese version is so efficient, and is open source, then couldn't OpenAI and Anthropic run the same thing on their huge hardware and get enormous capacity out of it?

[–] AdrianTheFrog@lemmy.world 9 points 1 day ago (2 children)

OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.

Theoretically the best move for them would be to train their own, larger model using the same technique (so as to still fully utilize their hardware), but this is easier said than done.

[–] Jhex@lemmy.world 10 points 1 day ago (3 children)

Not necessarily... if I gave you my "faster car" to run on your private 7-lane highway, you could definitely squeeze out every last bit of speed the car gives, but no more.

DeepSeek works as intended on 1% of the hardware the others allegedly "require" (allegedly, remember this is all a super hype bubble)... if you run it on more powerful machines, it will perform better, but only to a certain extent; it will not suddenly develop more or better qualities just because the hardware it runs on is better.

[–] merari42@lemmy.world 2 points 1 day ago

Didn't DeepSeek solve some of the data-wall problems by creating good chain-of-thought data with an intermediate RL model? That approach should work with the tried-and-tested scaling laws, just using much more compute.

[–] Clent@lemmy.dbzer0.com 26 points 1 day ago (1 children)

No surprise. American companies are chasing fantasies of general intelligence rather than optimizing for today's reality.

[–] Naia@lemmy.blahaj.zone 23 points 1 day ago (1 children)

That, and they are just brute-forcing the problem. Neural nets have been around forever, but it's only in the last 5 or so years that they could do anything. There's been little to no real breakthrough innovation as they just keep throwing more processing power at it with more inputs, more layers, more nodes, more links, more CUDA.

And their chasing a general AI is just the short-sighted nature of them wanting to replace workers with something they don't have to pay and that won't argue about its rights.

[–] Doomsider@lemmy.world 77 points 2 days ago (6 children)

Wow, China just fucked up the Techbros more than the Democratic or Republican party ever has or ever will. Well played.

[–] Arehandoro@lemmy.ml 53 points 1 day ago (2 children)

Nvidia’s most advanced chips, H100s, have been banned from export to China since September 2022 by US sanctions. Nvidia then developed the less powerful H800 chips for the Chinese market, although they were also banned from export to China last October.

I love how in the US they talk about meritocracy, competition being good, blablabla... but they rig the game from the beginning. And even so, people find a way to be better. Fascinating.

[–] shawn1122@lemm.ee 28 points 1 day ago

You're watching an empire in decline. Its words stopped matching its actions decades ago.

[–] wulrus@programming.dev 55 points 2 days ago (4 children)

Hello darkness my old friend

[–] Pieisawesome@lemmy.world 19 points 1 day ago (4 children)

Its knowledge isn't updated.

It doesn't know current events, so this isn't a big gotcha moment.
