8uurg

joined 2 years ago
[–] 8uurg@lemmy.world 2 points 6 days ago (1 children)

At least the European Commission has their own Mastodon instance.

[–] 8uurg@lemmy.world 2 points 1 week ago

Not quite, actually. It is more so that training recursively on the output without any changes, i.e., Data -> Model A -> Data (generated by Model A) -> Model B -> Data (generated by Model B) -> ..., leads to (complete) collapse. A single step like this can still worsen performance notably, though, especially when the generated data makes up the vast majority of the training set. [source]
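As a toy illustration of that recursive pipeline (my own sketch, not from the paper), fitting a 1-D Gaussian to its own samples shows the same mechanism: each generation is "trained" only on the previous generation's output, and information is gradually lost:

```python
# Toy model-collapse loop: a 1-D Gaussian stands in for "Model N", and each
# generation is fit only to samples drawn from the previous generation's fit.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # generation 0: "real" data

for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()          # "train" Model N on the data
    data = rng.normal(mu, sigma, size=100)       # Model N's output replaces the data
    print(f"generation {generation:2d}: fitted std = {sigma:.3f}")

# The fitted spread drifts as a biased random walk and tends to shrink:
# tail information is lost each generation, the toy analogue of collapse.
```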

And if they train using little data, you won't get anywhere near the chatbots we have now. If they fine-tune an existing model to do as they wish, it would likely have side effects, like being more likely to introduce security bugs in generated code, giving incorrect answers to common-sense questions, and so on. [source]

[–] 8uurg@lemmy.world 2 points 2 weeks ago

We rarely prove something correct. In mathematics, logical proofs are a thing, but in astronomy and physics it is more often the case that we have a model that is accurate enough for our predictions, until we find evidence to the contrary, like here, and get an opportunity to learn and improve.

You really can't prove a lot of things to be correct: you would have to show that no cases exist that are not covered. But even without proven correctness for all cases, these models are useful and give correct predictions most of the time, and science is constantly on the lookout for cases where the model is wrong.

[–] 8uurg@lemmy.world 4 points 3 weeks ago

Wouldn't the algorithm that creates these models in the first place fit the bill? Given that it takes a bunch of text data, and manages to organize this in such a fashion that the resulting model can combine knowledge from pieces of text, I would argue so.

What is understanding knowledge anyway? Wouldn't humans fail to fit the bill as well, given that for most of our knowledge we do not know why it is the way it is, and we have even held rules that were - in hindsight - incorrect?

If a model is more capable of solving a problem than an average human being, isn't it, in its own way, some form of intelligence? And, to take things to the utter extreme, wouldn't evolution itself be intelligent, given that it causes intelligent behavior to emerge, for example, viruses adapting to external threats? What about an (iterative) optimization algorithm that finds solutions no human would be able to find?

> Intelligence has a very clear definition.

I would disagree: it is probably one of the hardest things to define out there, its definition has changed greatly over time, and it is core to the study of philosophy. Every time a being or thing fits a definition of intelligence, the definition is often altered to exclude it, as has been done many times.

[–] 8uurg@lemmy.world 2 points 3 weeks ago* (last edited 3 weeks ago)

The flute doesn't make for a good example, as the end user can take it and modify it as they wish, including with third-party parts.

If we force the analogy: it would be as if the manufacturer made it such that all (even third-party) parts for these flutes can only be distributed through their store, and used this restriction to force any third party to comply with additional requirements.

The key problem isn't including third-party parts, it is actively blocking the use of third-party parts, forcing additional rules (which affect existing markets, like payment processors) upon them, and making use of control and market dominance to accomplish this.

The Microsoft case was, in my view, weaker than this case against Apple, but Microsoft's significant dominance in the desktop OS market meant it was deemed anti-competitive anyway. It probably did not help that web standards suffered greatly while MS was at the helm, and making a competitive compatible browser was nigh impossible: most websites were designed for IE, using IE-specific tech, effectively locking users into IE. Because all users were on IE, developing a website with different tech was effectively useless, as users would end up using IE for other websites anyway. As IE was effectively the Windows browser (ignoring the brief period of IE for Mac...), this reinforced Windows' dominance too. Note that, without market dominance, websites would not have pandered specifically to IE, and this particular tie-in would have been much less problematic.

In the end, Google ended IE's reign with Google Chrome, advertising it through the reach of the Google search engine. But if Microsoft had locked down the OS like Apple does, and required everything to go through their 'app store', I don't doubt we would have ended up with a browser engine restriction similar to Apple's, with every browser effectively being a wrapper around the exact same underlying engine.

[–] 8uurg@lemmy.world 2 points 3 weeks ago (13 children)

> Why would company A need to accommodate any other "app store" in their product, especially if one of their product's selling points is how streamlined it is?

Why should Microsoft allow other browsers to be installed on Windows? Why should Google allow other search engines to be selectable on Android and in Chrome? The reason in all these cases is the same: blocking them is anti-competitive and creates a monopoly. This results in unfairly high costs to users, whether those users are third-party software developers or end users. That is why countries have laws against it.

Companies obviously wouldn't want to accommodate others in ways that cost them money, but that does not make it morally acceptable from a societal point of view.

[–] 8uurg@lemmy.world 4 points 4 weeks ago

Yes, true, but that is assuming:

  1. That any potential future improvement solely comes from ingesting more useful data.
  2. That the amount of data produced is not ever-increasing (even excluding AI slop).
  3. That no (new) techniques that make models more data-efficient to train are published or engineered.
  4. That no (new) techniques that improve reliability are used, e.g., specializing a model for code auditing.

What the author of the blog post has shown is that it can find useful issues even now. If you apply this to a codebase, have a human label the reported issues as real or fake, and train the model to make it more likely to generate real issues and less likely to generate false positives, it could still be improved specifically for this application. That does not require nearly as much data as general improvements do.
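A minimal sketch of that feedback loop (my own illustration with placeholder data, not the author's setup): human labels on past reports train a cheap filter that ranks new reports, so reviewers see the likely-real ones first:

```python
# Hypothetical triage filter: learn from human-labelled reports (real vs.
# false positive) and rank fresh model-generated reports by that signal.
# Assumes scikit-learn; the report texts below are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "use-after-free: session object freed twice via the logoff handler",
    "possible NULL deref, but the pointer is checked two lines above",
]
labels = [1, 0]  # 1 = confirmed real issue, 0 = false positive

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(reports, labels)

new_reports = ["double free reachable from the error path in close()"]
scores = triage.predict_proba(new_reports)[:, 1]  # estimated P(real issue)
for score, report in sorted(zip(scores, new_reports), reverse=True):
    print(f"{score:.2f}  {report}")
```

The same labels could also feed an actual fine-tune of the generating model; the filter above is just the cheapest version of the idea.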

While I agree that improvements are not a given, I wouldn't assume that they can never happen again. Despite these companies having effectively exhausted all of the text on the internet, improvements are currently still being made left, right, and center. If the many billions they are spending improve these models such that we get a fancy new tool for making our software safer and more secure: great! If it ends up being an endless money pit and nothing ever comes of it, oh well. I'll just wait and see which of the two it will be.

[–] 8uurg@lemmy.world 4 points 4 weeks ago (2 children)

Not quite, though. In the blog post, the pentester notes that it found a similar issue (one he had overlooked) occurring elsewhere, in the logoff handler, which he noticed and verified while sifting through a number of the reports it generated. Additionally, the fix it supplied accounted for (and documented) an issue that his own suggested fix was (still) susceptible to. This shows that it could be(come) a new tool that lets us identify issues that are not found by techniques like fuzzing and can even be overlooked by a pentester actively searching for them, never mind a kernel programmer.

Now, these models generate a ton of false positives, which makes the signal-to-noise ratio much lower than what would be preferred. But the fact that a language model can locate and identify these issues at all, even if sporadically, is already orders of magnitude more than I would have expected initially. I would have expected it to only hallucinate issues, not find anything remotely like an actual security issue. Much like the spam the curl project is experiencing.

[–] 8uurg@lemmy.world 0 points 1 month ago

The key point being made is that if you are committing de facto copyright infringement or plagiarism by creating a copy, it shouldn't matter whether that copy was made through copy-paste, by re-compressing the same image, or by using an AI model. The product here is the copy-paste operation, the image editor, or the AI model, not the (copyrighted) image itself. You can still sell computers with copy-paste (despite some attempts from large copyright holders via DRM), and you can still sell image editors.

However, unlike copy-paste and the image editor, the AI model could memorize and emit training data without the input implying the copyrighted work. (This excludes the case where the image itself, or a highly detailed description of the work, was provided, as then it is clearly the user who is at fault and intends for this to happen.)

At the same time, it should be noted that exact replication of training data isn't desirable in any case, and online services for image generation could include an image similarity check against training data; many probably do this already.
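For instance, a hedged sketch of one such check, assuming the third-party Pillow and imagehash libraries (production near-duplicate detection would precompute and index the training-set hashes rather than rehash per query):

```python
# Flag generated images that are perceptually near-identical to a training
# image, using the Hamming distance between 64-bit perceptual hashes.
from PIL import Image
import imagehash

def too_similar(generated_path, training_paths, max_distance=5):
    gen_hash = imagehash.phash(Image.open(generated_path))
    for path in training_paths:
        if gen_hash - imagehash.phash(Image.open(path)) <= max_distance:
            return True  # likely a near-copy of a training image
    return False
```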

[–] 8uurg@lemmy.world 8 points 1 month ago (1 children)

Republicans, however, also: deport people with a legal right to be in the country, including citizens, without due process; want to destroy all progress made on issues affecting the LGBTQ+ community; wish to reduce women's rights, some even including voting rights; and want to abolish the separation between church and state.

Even if there is a close resemblance between the two parties on Gaza, there are plenty of other issues where they are still incomparable, and ignoring these differences and calling both parties equally bad does not help.

[–] 8uurg@lemmy.world 11 points 2 months ago

At least the AI runs locally, as opposed to sending everything to someone else's computer for processing. Local translation in Firefox actually works quite well.
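For context, Firefox's translation feature is, as far as I know, built on the Bergamot project's Marian-based models running on-device. A rough illustration of the same idea with a freely available Marian model (using the Hugging Face transformers library, not Firefox's actual pipeline):

```python
# Local machine translation sketch: the model is downloaded once and then
# everything runs on this machine; nothing is sent to a remote service.
# Assumes the transformers library and a MarianMT model; Firefox's actual
# engine (Bergamot) differs, this only illustrates "local" translation.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-en"  # German -> English
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Die Übersetzung läuft vollständig lokal."], return_tensors="pt")
output = model.generate(**batch)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```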

[–] 8uurg@lemmy.world 10 points 2 months ago

That is only really a good solution for the few who live in the countryside. If sufficiently many people live close to one another without a shop nearby, that is an issue best solved by improving planning and introducing local shops (reducing the distance everyone in the community has to travel).
