pcalau12i

joined 4 months ago
[–] pcalau12i@lemmygrad.ml 21 points 6 days ago* (last edited 6 days ago) (2 children)

That's literally China's policies. The problem is most westerners are lied to about China's model: it is painted as if Deng Xiaoping was an uber-capitalist lover who turned China into a free market economy, and that was the end of history.

The reality is that Deng Xiaoping was a classical Marxist, so he wanted China to follow the development path of classical Marxism (grasping the large, letting go of the small) and not Stalin's revision of Marxism (nationalizing everything). Marxian theory is about formulating a scientific theory of socioeconomic development, so if they wanted to develop as rapidly as possible, they needed to adhere more closely to Marxian economics.

Deng also knew the people would revolt if the country remained poor for very long, so it should hyper-focus on economic development first and foremost, at all costs, for a short period of time. He had the foresight to predict that such a hyper-focus on development would lead to a lot of problems: environmental degradation, rising wealth inequality, etc. So he argued this should be a two-step development model: an initial stage of rapid development, followed by a second stage, once the country is much wealthier, that shifts to a focus on high-quality development to tackle the problems of the first stage.

The first stage went from Deng Xiaoping to Jiang Zemin, and then they announced they were entering the second phase under Hu Jintao, which has carried on into the Xi Jinping administration. Western media decried Xi as an "abandonment of Deng," because western media is just pure propaganda, when in reality this was Deng's vision. China has switched to a model that no longer prioritizes rapid growth but prioritizes high-quality growth.

One of the policies for this period has been to tackle the wealth inequality that arose during the first period. They have done this through various methods, but one major one is huge poverty alleviation initiatives which the wealthy have been required to fund. Tencent, for example, "donated" an amount worth three-quarters of its whole yearly profits to government poverty alleviation initiatives. China does tax the rich, but they have a system of unofficial "taxation" as well, where they discreetly take over a company through a combination of party cells and becoming a major shareholder via the golden share system, and then make that company "donate" its profits back to the state. As a result, China's wealth inequality has been gradually falling since 2010, and they've become the #1 funder of green energy initiatives in the entire world.

The reason you don't see this in western countries is because they are capitalist. Most westerners have a mindset that laws work like magic spells: you can just write down on a piece of paper whatever economic system you want, and this is like casting a spell that creates that system as if by magic, so if you just craft the language perfectly to get the perfect spell, you will create the perfect system.

The Chinese understand this is not how reality works. Economic systems are real physical machines that continually transform nature into goods and services for human consumption, and so whatever laws you write can only meaningfully be implemented in reality if there is a physical basis for them.

The physical basis for political power ultimately rests in production relations, that is to say, ownership and control over the means of production, and thus the ability to appropriate all wealth. The wealth appropriation in countries like the USA is entirely in the hands of the capitalist class, and so they use that immense wealth, and thus political power, to capture the state and subvert it to their own interests, and thus corrupt the state to favor those very same capital interests rather than to control them.

The Chinese understand that if you want the state to remain an independent force that is not captured by the wealth appropriators, then the state must have its own material foundations. That is to say, the state must directly control its own means of production, it must have its own basis in economic production as well, so it can act as an independent economic force and not wholly dependent upon the capitalists for its material existence.

Furthermore, its economic basis must be far larger, and thus more economically powerful, than any other capitalist. Even if the state owns some basis, if that basis is too small it would still be subverted by capitalist oligarchs. The Chinese state directly owns and controls the majority of its largest enterprises, and has indirect control over most of the remaining large enterprises it doesn't directly own. This makes the state itself by far the largest producer of wealth in the whole country, producing 40% of the entire GDP; no other enterprise in China even comes close.

This enormous control over production allows the state to control non-state actors rather than the other way around. In a capitalist country, the non-state actors, the wealthy bourgeois class who own the large enterprises, instead capture the state and control it for their own interests, and the state does not genuinely act as an independent body with its own independent interests, but only as the accumulation of the average interests of the average capitalist.

No law you write that is unfriendly to capitalists under such a system will be sustainable, and such laws are often entirely unenforceable, because in capitalist societies there is no material basis for them. The US is a great example of this. It's technically illegal to do insider trading, but everyone in US Congress openly does insider trading, openly talks about it, and the record of them getting rich from insider trading is pretty much public knowledge. Yet nobody ever gets arrested for it, because the law is not enforceable: the material basis of US society is production relations that give control of the commanding heights of the economy to the capitalist class, so the capitalists just buy off the state for their own interests, and there is no meaningfully competing power dynamic against that in US society.

[–] pcalau12i@lemmygrad.ml 8 points 6 days ago* (last edited 6 days ago)

China does tax the rich, but they also have an additional system of "voluntary donations." For example, Tencent "volunteered" to give up an amount worth about three-quarters of its yearly profits to social programs.

I say "voluntary" because it's obviously not very voluntary. China's government has a party cell inside of Tencent as well as a "golden share" that allows it to act as a major shareholder. It basically has control over the company. These "donations" also go directly to government programs like poverty alleviation and not to a private charity group.

[–] pcalau12i@lemmygrad.ml 9 points 2 weeks ago* (last edited 2 weeks ago)

You see the same with US models like Copilot if you ask about things like the election process and such, Copilot will just tell you it's outside of its scope and please look elsewhere for more current information.

Me: How does voting in the USA work?

Copilot: I know elections are important to talk about, and I wish we could, but there's a lot of nuanced information that I'm not equipped to handle right now. It's best that I step aside on this one and suggest that you visit a trusted source. How about another topic instead?

It's not really a good idea to let an AI speak freely about topics that are so important to get right, because AIs are not perfect and can give misleading information. Although, DeepSeek is open source, so there is nothing stopping you from downloading it to your PC and running it there. They have distilled models, hybrids of R1 and Qwen, for lower-end devices, and you can also use the full R1 model without filters through other companies that host it.

[–] pcalau12i@lemmygrad.ml 2 points 2 weeks ago* (last edited 2 weeks ago)

I have the rather controversial opinion that the failure of communist parties doesn't come down to the failure to craft the perfect rhetoric or argument in the free marketplace of ideas.

Ultimately, facts don't matter: if a person is raised around thousands of people constantly telling them a lie and one person telling them the truth, they will believe the lie nearly every time. What really matters is how widely you can propagate an idea, not how well crafted that idea is.

How much you can propagate an idea depends upon how much wealth you have to buy and control media institutions, and how much wealth you control depends upon your relations to production. I.e. in capitalist societies capitalists control all wealth and thus control the propagation of ideas, so arguing against them in the "free marketplace of ideas" is ultimately always a losing battle. It is thus pointless to even worry too much about crafting the perfect and most convincing rhetoric.

Control over the means of production translates directly to political influence and power, yet communist parties not in power don't control any, and thus have no power. Many communist parties just hope one day to get super lucky to take advantage of a crisis and seize power in a single stroke, and when that luck never comes they end up going nowhere.

Here is where my controversial take comes in. If we want a strategy that is more consistently successful, it has to rely less on luck, meaning there needs to be some way to gradually increase the party's power without relying on a big jump in power during a crisis. Even if a crisis does come, the party will be better positioned to take advantage of it if it has already gradually built up a base of power.

Yet, if power comes from control over the means of production, this necessarily means the party must make strides to acquire means of production in the interim period before revolution. This leaves us with the inevitable conclusion that communist parties must engage in economics even long prior to coming to power.

The issue however is that to engage in economics in a capitalist society is to participate in it, and most communists at least here in the west see participation as equivalent to an endorsement and thus a betrayal of "communist principles."

The result of this mentality is that communist parties are simply incapable of gradually increasing their base of power. Their only hope is to wait for a crisis for sudden gains, yet even during crises their limited power makes it difficult to take advantage of the moment, so they rarely gain much of anything and are stuck in a perpetual cycle of being eternal losers.

Most communist parties just want to go from zero to one hundred in a single stroke, which isn't impossible, but it would require pristine conditions and all the right social elements aligning perfectly. If you want a more consistent strategy for getting communist parties into power, you need something that doesn't rely on such a stroke of luck, on some sudden leap in the party's political power, but is capable of growing it gradually over time. This requires the party to engage in economics, and there is simply no way around this conclusion.

[–] pcalau12i@lemmygrad.ml 16 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Have you people had good luck with this? I haven't. I don't find that you can just "trick" people into believing in socialism by changing the words. The moment it becomes obvious you're criticizing free markets and the rich and advocating public ownership, they will catch on.

[–] pcalau12i@lemmygrad.ml 2 points 2 weeks ago* (last edited 2 weeks ago)

Personally I think general knowledge is kind of a useless metric, because you're not really developing "intelligence" at that point, just a giant dictionary, and of course bigger models will always score better because they are bigger. In some sense, training an ANN is like a compression algorithm for a ton of knowledge: the more parameters, the less lossy the compression, the more it knows. But having an absurd amount of knowledge isn't what makes humans intelligent; most humans know very little. It's problem solving. If we have a problem-solving machine as intelligent as a human, we can just give it access to the internet for the information. Making models bigger with more general knowledge isn't, imo, genuine "progress" in intelligence. The recent improvements from adding reasoning are a better example of genuine improvements to intelligence.

These bigger models only score better because they have memorized so much that they have seen similar questions before. Genuine improvements to intelligence, and genuine progress in this field, come when people figure out how to improve results without more data. These massive models already have more data than any human could access in hundreds of lifetimes. If they aren't beating humans on every single test with that much data, then clearly something else is wrong.

[–] pcalau12i@lemmygrad.ml 1 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

That's just the thing, though, the point I am making, which is that it turns out in practice synthetic data can give you the same effect as original data. In some sense, training an LLM is kind of like a lossy compression algorithm, you are trying to fit petabytes of data into a few hundred gigabytes as efficiently as possible. In order to successfully compress it, it has to lose specifics, so the algorithm only captures general patterns. This is true for any artificial neural network, so if you train another neural network with the data yourself, you will also lose specifics in the training process and end up with a model that only knows general patterns. Hence, if you train a model using synthetic data, the information lost in that synthetic data will be information the AI you are training would lose anyways, so you don't necessarily get bad results.
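
The lossy-compression point can be made concrete with a toy stand-in (my own analogy, using low-rank SVD as the "compressor" rather than an actual neural network): compressing data loses specifics, but compressing the already-compressed output loses essentially nothing more, which mirrors the claim that the detail missing from synthetic data is detail a student model would have discarded anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))  # stand-in for the "original data"

def compress(M, k):
    """Rank-k SVD approximation: a lossy compressor keeping only broad structure."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

A_k = compress(A, 5)     # "train" on original data: specifics are lost
A_kk = compress(A_k, 5)  # "train" again on the compressed ("synthetic") version

loss_once = np.linalg.norm(A - A_k)      # information lost the first time
extra_loss = np.linalg.norm(A_k - A_kk)  # additional loss the second time

print(loss_once > 1.0)    # True: substantial loss vs. the original
print(extra_loss < 1e-8)  # True: recompression loses (almost) nothing new
```

Real LLM training is vastly more complicated than an SVD, of course; this only illustrates the "already-lossy data can be re-fit without much further loss" intuition.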

But yes, when I was talking about synthetic data I had in mind data purely generated from an LLM. Of course I do agree that translating documents, OCRing documents, etc., to generate new data is generally a good thing as well. I just disagree with your final statement that it is critical to have a lot of high-quality original data. The notion that we can keep making AIs better by just giving them more and more data is already plateauing in the industry and showing diminishing returns. ChatGPT 3.5 to 4 was a massive leap, but the jump to 4.5, which uses an order of magnitude more compute mind you, is negligible.

Just think about it. Humans are way smarter than ChatGPT and we don't require the energy of a small country and petabytes of all the world's information to solve simple logical puzzles, just a hot pocket and a glass of water. There is clearly an issue in how we are training things and not the lack of data. We have plenty of data. Recent breakthroughs have come in finding more clever ways to use the data rather than just piling on more and more data.

For example, many models have recently adopted reasoning techniques, so rather than simply spitting out an answer it generates an internal dialog prior to generating the answer, it "thinks" about the problem for a bit. These reasoning models perform way better on complex questions. OpenAI first invented the technique but kept it under lock and key, and the smaller company DeepSeek managed to replicate it and made their methods open source for everyone, and then Alibaba put it into their Qwen model in a new model they call QwQ which dropped recently and performs almost as well as ChatGPT 4 on some benchmarks yet can be run on consumer-end hardware with as little as 24GB of VRAM.

All the major breakthroughs happening recently come not from having more data but from using the data in more clever ways. Just recently a diffusion LLM dropped: it produces text output but borrows techniques from image generation, so rather than generating token-by-token, it outputs a random sequence of tokens all at once and continually refines it until it makes sense. This technique is used with images because uncompressed images take up megabytes of data while an LLM response is only a few kilobytes, so generating an image one piece at a time the way LLMs generate text would be far too slow; applying the image-generation method to what LLMs do instead lets it produce reasonable outputs faster than any traditional LLM.
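
The "emit everything at once, then refine" idea can be sketched in a few lines (my own toy illustration, not the actual diffusion-LM algorithm: the per-token "confidence" here is faked by comparing against a known target, where a real model would learn it):

```python
import random

random.seed(42)
TARGET = "refine all positions in parallel"
ALPHABET = sorted(set(TARGET))

def confidence(pos, ch):
    # Stand-in for the model's per-token confidence; a real diffusion LM
    # learns this, here we simply compare against the known target.
    return 1.0 if ch == TARGET[pos] else 0.0

# Step 0: the whole output appears at once as pure noise.
seq = [random.choice(ALPHABET) for _ in TARGET]

steps = 0
while any(confidence(i, c) < 1.0 for i, c in enumerate(seq)):
    # One denoising step: re-predict the 4 least-confident positions,
    # replacing each with its highest-confidence character.
    worst = sorted(range(len(seq)), key=lambda i: confidence(i, seq[i]))[:4]
    for i in worst:
        seq[i] = max(ALPHABET, key=lambda c: confidence(i, c))
    steps += 1

print("".join(seq))  # → "refine all positions in parallel"
print(steps)         # far fewer steps than one-token-at-a-time decoding
```

The point of the sketch is only the control flow: several positions are updated per step, so the number of refinement passes can be much smaller than the sequence length.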

This is a breakthrough that just happened, here's an IBM article on it from 3 days ago!

https://www.ibm.com/think/news/diffusion-models-llms

The breakthroughs are really not happening in huge data collection right now. Companies will still steal all your data, because big data collection is still profitable to sell to advertisers, but it's not at the heart of the AI revolution right now. That is coming from computer science geniuses who cleverly figure out how to use the data in more effective ways.

[–] pcalau12i@lemmygrad.ml 1 points 3 weeks ago* (last edited 3 weeks ago)

We know how it works, we just don’t yet understand what is going on under the hood.

Why should we assume "there is something going on under the hood"? This is my problem with most "interpretations" of quantum mechanics. They are complex stories to try and "explain" quantum mechanics, like a whole branching multiverse, of which we have no evidence for.

It's kind of like if someone wanted to come up with a deep explanation of Einstein's field equations and what is "going on under the hood." Why should anything be "underneath" those equations? If we begin to speculate, we're doing just that, speculating, and if we take any of that speculation seriously, as in actually genuinely believing it, then we've left the realm of being a scientifically-minded rational thinker.

It is much simpler to just accept the equations at face-value, to accept quantum mechanics at face-value. "Measurement" is not in the theory anywhere, there is no rigorous formulation of what qualifies as a measurement. The state vector is reduced whenever a physical interaction occurs from the reference point of the systems participating in the interaction, but not for the systems not participating in it, in which the systems are then described as entangled with one another.

This is not an "interpretation"; I'm just explaining how the terminology and mathematics literally work. If we accept this at face value, there is no "measurement problem." The only reason there is a "measurement problem" is that this contradicts people's basic intuitions: if we accept quantum mechanics at face value, then we have to admit that whether or not the properties of systems have well-defined values actually depends upon your reference point and is contingent on a physical interaction taking place.

Our basic intuition tells us that particles are autonomous entities floating around in space on their lonesome, like little stones or billiard balls, up until they collide with something, so even if they are not interacting with anything at all they can meaningfully be said to "exist" with well-defined properties, which should be the same properties for all reference points (i.e. the properties are absolute rather than relational). Quantum mechanics contradicts this basic intuition, so people think there must be something "wrong" with it, that there must be something "under the hood" we don't yet understand, and that only by making the story more complicated, or by some new discovery one day, would we "solve" the "problem."

Einstein once said that God does not play dice, and Bohr rebutted: stop telling God what to do. This is my response to people who believe in the "measurement problem." Stop with your preconceptions about how reality should work. Quantum theory is our best theory of nature, there is currently no evidence it is going away any time soon, and it has withstood the test of time for decades. We should stop waiting for the day it gets overturned and disappears, and just accept that this is genuinely how reality works, accept it at face value, and drop our preconceptions. We do not need any additional "stories" to explain it.

The blind spot is that we don’t know what a quantum state IS. We know the maths behind it, but not the underlying physics model.

What is a physical model if not a body of mathematics that can predict outcomes? The physical meaning of the quantum state is completely unambiguous: it is just a list of probability amplitudes. Probabilities capture the likelihoods of certain outcomes manifesting during an interaction; quantum probability amplitudes are somewhat unique in that they are complex-valued, but this adds the extra degrees of freedom needed to simultaneously represent interference phenomena. The state vector is a mathematical notation for capturing the likelihoods of events while accounting for interference effects.
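
The role of the complex amplitudes can be shown in a few lines (a generic two-path toy of my own, not tied to any specific experiment): classically, the probabilities of alternative paths simply add, while the Born rule sums the amplitudes first and then squares, which allows for destructive interference.

```python
import numpy as np

# Two paths to the same detector, each with a complex probability amplitude.
a1 = 1 / np.sqrt(2) * np.exp(1j * 0.0)    # path 1
a2 = 1 / np.sqrt(2) * np.exp(1j * np.pi)  # path 2, phase-shifted by pi

# Classical mixing: add the probabilities of the two paths.
p_classical = abs(a1) ** 2 + abs(a2) ** 2  # = 1.0

# Quantum (Born rule): add the amplitudes first, then square.
p_quantum = abs(a1 + a2) ** 2  # = 0.0, destructive interference

print(round(p_classical, 10))  # 1.0
print(round(p_quantum, 10))    # 0.0
```

The extra degree of freedom is the relative phase: with real, non-negative probabilities alone, the two contributions could never cancel.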

It’s likely to fall out when we unify quantum mechanics with general relativity, but we’ve been chipping at that for over 70 years now, with limited success.

There has been zero "progress" because the "problem" of unifying quantum mechanics and general relativity is a pseudoproblem. It stems from a bias: because we had success quantizing all the fundamental forces except gravity, therefore gravity should be quantizable. Since the method that worked for every other force, renormalization, failed for gravity, all these theories search for a different way to do it.

But (1) there is no reason other than blind faith to think gravity should be quantized, and (2) there is no direct compelling evidence that either quantum mechanics or general relativity are even wrong.

Also, we can already unify quantum mechanics and general relativity just fine. It's called semi-classical gravity and is what Hawking used to predict that black holes radiate. It makes quantum theory work just fine in a curved spacetime and is compatible with all experimental predictions to this day.
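
For reference, the semi-classical field equation (in its standard textbook form, not something specific to this thread) sources classical spacetime curvature with the expectation value of the quantum stress-energy operator in the state ψ:

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} \,\langle \hat{T}_{\mu\nu} \rangle_{\psi}
```

The left side is ordinary classical general relativity; only the matter source on the right is quantum.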

People who dislike semi-classical gravity will argue that it seems to make some absurd predictions under specific conditions we currently haven't measured. But this isn't a valid argument for dismissing it: until you can actually demonstrate via experiment that such conditions can be created in physical reality, the criticism remains purely metaphysical and not scientific.

If semi-classical gravity is truly incorrect, then you cannot just point to its strange predictions in certain domains; you also have to demonstrate that it is physically possible to actually probe those domains, and that this isn't just a metaphysical quirk of the theory making predictions about conditions that aren't physically possible in the first place, in which case what it predicts there would naturally be physically impossible too.

If you could construct such an experiment and its prediction was indeed wrong, you'd disprove it the very second you turned on the experiment. Hence, if you genuinely think semi-classical gravity is wrong and you are actually following the scientific method, you should be doing everything in your power to figure out how to probe these domains.

But instead people search for many different methods of trying to quantize gravity and then in a post-hoc fashion look for ways it could be experimentally verified, then when it is wrong they go back and tweak it so it is no longer ruled out by experiment, and zero progress has been made because this is not science. Karl Popper's impact on the sciences has been hugely detrimental because now everyone just believes if something can in principle be falsified it is suddenly "science" which has popularized incredibly unscientific methods in academia.

Sorry, but both the "measurement problem" and the "unification problem" are pseudoproblems, not genuine scientific problems; both stem from biases about how we think nature should work, rather than from fitting the best physical model to the evidence and accepting that this is how nature works. Physics is making enormous progress and huge breakthroughs in many fields, but there has been zero "progress" in solving the "measurement problem" or quantizing gravity, because neither of these is a genuine scientific problem.

They have been working at this "problem" for decades now and what "science" has come out of it? String Theory which is only applicable to an anti-de Sitter space despite our universe being a de Sitter space, meaning it only applies to a hypothetical universe we don't live in? Loop Quantum Gravity which can't even reproduce Einstein's field equations in a limiting case? The Many Worlds Interpretation which no one can even agree what assumptions need to be added to be able to mathematically derive the Born rule, and thus there is also no agreed upon derivation? What "progress" besides a lot of malarkey on people chasing a pseudoproblem?

If we want to know how nature works, we can just ask her, and that is the scientific method. The experiments are questions, the results are her answers. We should believe her answers and stop calling her a liar. The results of experimental practice, the actual real-world physical data, should hold primacy above everything else. We should set all our preconceptions aside and believe whatever the data tells us. There is zero reason to update our theories, or to believe they are "incomplete," until we get an answer from mother nature that contradicts our own theoretical predictions.

People always cry about how fundamental physics isn't "making progress," but what they have failed to justify is why it should progress in the first place. The only justification for updating a theory is, again, to better fit with experimental data, but they present no data. They just complain it doesn't fit some bias and preconception they have. That is not science.

[–] pcalau12i@lemmygrad.ml 2 points 3 weeks ago (4 children)

Eh, individuals can't compete with corpos not just because the corpos have access to more data, but because making progress in AI requires a large team of well-educated researchers and sufficient capital to experiment with vast amounts of technology. It's a bit like expecting an individual or small business to compete with smartphone manufacturers. It really is not feasible, not simply because smartphone manufacturers use dirty practices, but because producing smartphones requires an enormous amount of labor and capital and simply cannot be physically carried out by an individual.

This criticism might be more applicable to a medium-sized business like DeepSeek, which is not really "small" but is smaller than the others (and definitely not a single individual), yet still big enough to compete, and we can see they could still compete just fine despite the current situation.

The truth is that both the USA and China treat all purely AI-generated work as de facto public domain. That means anything ChatGPT or whatever spits out, no matter what the licensing says, is absolutely free to use however you wish, and you will win in court if they try to stop you. There is a common myth that training AI on synthetic data is always negative. That's actually only sometimes true, when you train an AI on its own synthetic data; through a process called "distillation" you can train a less capable AI on synthetic data from a more capable AI and actually improve its performance.

That means any AI made by big companies can be distilled into any other AI to improve its performance. This is because you effectively have access to all the data the big companies have access to but indirectly through the synthetic data their AI can produce. For example, if for some reason you curated the information the AI was trained on so it never encountered the concept of a dog, it simply wouldn't know what a dog is. If it encountered it a lot, it would know what a dog is and could explain it if you asked. Hence, that information is effectively accessible indirectly by simply asking the AI for it.

If you use distillation, then you can effectively make your own clones of any big company's AI model, and it's perfectly legal. Not only that, but you can make improvements to it as well. You aren't just cloning models; you have the power to modify them during this distillation process.

Imagine if the initial model was trained using a particular technique that is rather outdated and you believe you've invented a new method that if re-trained would produce a smarter AI, but you simply lack access to the original data. What you can instead do is generate a ton of synthetic data from the AI and then train your new AI using the new method on that synthetic data. Your new AI will have access to most of the same information but now trained on a superior technique.

We have seen some smaller companies already take pre-existing models and use distillation to improve them, such as DeepSeek taking the Qwen models and distilling R1 reasoning techniques into them to improve their performance.
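
For a sense of what distillation means mechanically, here's a minimal toy sketch (pure NumPy, my own construction, not any company's actual pipeline): a "student" model is trained only on a "teacher's" output distributions, never on the original data, and ends up reproducing the teacher's behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy "teacher": a fixed linear classifier over 3 classes.
W_teacher = rng.standard_normal((4, 3))

# "Synthetic data": unlabeled inputs plus the teacher's output distributions.
X = rng.standard_normal((256, 4))
teacher_probs = softmax(X @ W_teacher)

# Student: trained by gradient descent to match the teacher's distribution
# (cross-entropy against soft targets), never seeing the original labels.
W_student = np.zeros((4, 3))
for _ in range(500):
    student_probs = softmax(X @ W_student)
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= 0.5 * grad

# On fresh inputs, the student now mirrors the teacher's judgments.
X_new = rng.standard_normal((64, 4))
agree = np.mean(
    softmax(X_new @ W_teacher).argmax(1) == softmax(X_new @ W_student).argmax(1)
)
print(agree)  # typically close to 1.0
```

Real distillation works on LLM outputs rather than tiny linear models, but the mechanism is the same: the teacher's soft outputs carry enough of its knowledge for the student to recover the behavior.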

[–] pcalau12i@lemmygrad.ml 7 points 3 weeks ago* (last edited 3 weeks ago)

I always think articles like this are incredibly stupid, honestly. Political parties exist to push a particular ideology, not to win elections. If the communist party abandoned communism and became a neonazi party to win the election, and they did succeed in winning, did the communist party really "win"? Not really. If you have to abandon your ideology to win then you did not win.

It's pretty rare for parties to actually abandon their ideology like that. The job of a political party is not to merely win, but to convince the population that their ideology is superior so people will back them. They want to win, yes, but under the conditions that they have won because the people back their message so that they can implement it.

This is why I always find it incredibly stupid when I see all these articles and progressive political commentators saying that the Democrats are a stupid party for not shifting their rhetoric to be more pro-working class, anti-imperialist, etc. THE DEMOCRATS ARE NOT A WORKING CLASS PARTY. It would in fact be incredibly stupid for them to shift left, because doing so would abandon their values. The Democrats' values are billionaires, free market capitalism, and imperialism. They are not making "stupid" decisions by supporting these things; THESE ARE THE FUNDAMENTAL BELIEFS OF THE PARTY.

In normal countries, if you dislike a party's ideology, you support a different party. But Americans have this weird fantasy that the Democrats should just be "reasonable" and entirely abandon their core values in favor of the voters' values, and so they refuse to ever back a different party because of this ridiculous delusion. Whenever the Democrats fail to adopt working-class values, they run these stupid headlines saying the Democrats are "unreasonable" or "stupid" or have "bad strategy" or are "incompetent" or "just don't want to fight."

Literally none of that is true. The Democrats are extremely fierce fighters when it comes to defending imperialism and the freedoms of billionaires. They aren't fighting for your values because those are not their values, and so you should back a different party.

[–] pcalau12i@lemmygrad.ml 1 points 3 weeks ago* (last edited 3 weeks ago)

On the surface, it does seem like there is a similarity. If a particle is measured over here and later over there, in quantum mechanics it doesn't necessarily have a well-defined position in between those measurements. You might then want to liken it to a game engine where the particle is only rendered when the player is looking at it. But the difference is that to compute how the particle arrived over there when it was previously over here, in quantum mechanics, you have to actually take into account all possible paths it could have taken to reach that point.

This is something game engines do not do and actually makes quantum mechanics far more computationally expensive rather than less.
