A spellchecker takes input from a human and matches it against a dictionary of known words, suggesting corrections based on each input word's proximity (e.g., edit distance) to words in that dictionary. Modern spellcheckers can also tokenize a corpus of text written by the device's owner and use it to predict which word is likely to follow the previous one. Most phones do this today.
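To make that concrete, here's a minimal sketch of both behaviors in Python. The dictionary, corpus, and function names are made up for illustration; real spellcheckers use richer signals (keyboard layout, word frequency, per-user history):

```python
# Toy sketch (not any real product's implementation) of the two behaviors
# above: dictionary matching by proximity, and next-word prediction from
# a user's own writing.
from difflib import get_close_matches
from collections import Counter, defaultdict

DICTIONARY = {"hello", "help", "world", "word", "spell", "check"}

def suggest(word, n=3):
    # "Proximity" here is difflib's similarity ratio; real spellcheckers
    # typically combine edit distance with frequency and layout data.
    return get_close_matches(word.lower(), DICTIONARY, n=n)

def build_bigrams(corpus):
    # Tokenize the owner's past writing and count which word follows which.
    counts = defaultdict(Counter)
    tokens = corpus.lower().split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_word):
    following = counts.get(prev_word.lower())
    return following.most_common(1)[0][0] if following else None

bigrams = build_bigrams("hello world hello there hello world")
print(suggest("helo"))                 # ['hello', 'help']
print(predict_next(bigrams, "hello"))  # 'world'
```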
Modern AI takes a corpus of data, tokenizes it, and feeds the tokens into a neural network trained to determine which token is likely to follow the previous ones.
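Roughly, that next-token loop looks like the sketch below. It assumes the Hugging Face transformers library and the small GPT-2 checkpoint, and greedily picks the single most likely token at each step rather than sampling:

```python
# Rough sketch of the next-token loop, assuming the Hugging Face
# `transformers` library and the small GPT-2 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "A spellchecker takes input from"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        # The model scores every token in its vocabulary as a possible
        # continuation; here we greedily append the highest-scoring one.
        logits = model(ids).logits
        next_id = logits[0, -1].argmax()
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```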
Graphical AIs do similar work, but there are more variables to adjust when "weighing" what value a pixel should take based on the surrounding pixel values, the noise produced by the seed, and the other inputs. The corpus in this case is a library of digital images interpreted as numeric data (e.g., matrices of pixel color values). Audio AIs work similarly, but with digitized sound as the data.
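As a loose illustration only (a real diffusion model applies a trained neural network conditioned on a prompt, not a hand-written rule), here's a toy loop that starts from seed-determined noise and repeatedly re-estimates each pixel from its neighbors:

```python
# Toy illustration (nothing like a real model's learned denoiser) of
# estimating pixel values from seed noise and surrounding pixels.
import numpy as np

rng = np.random.default_rng(seed=42)   # the "seed" fixes the noise
image = rng.normal(size=(64, 64))      # start from pure noise

for step in range(50):
    # Blend each pixel with the average of its 4 neighbors; a real model
    # would instead run a trained network at each denoising step.
    neighbors = (np.roll(image, 1, 0) + np.roll(image, -1, 0) +
                 np.roll(image, 1, 1) + np.roll(image, -1, 1)) / 4
    image = 0.5 * image + 0.5 * neighbors

print(image.shape, image.min(), image.max())
```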
What do I misunderstand?
You misunderstand how the sheer magnitude and scale of the process makes it different.
Sorry. BIG spellchecker.
LARGE language model.
YUGE AI boi.
Better words. Better pictures. Better sound.
Totally different.
(/S)