this post was submitted on 21 Oct 2025
16 points (86.4% liked)

Futurology

top 4 comments
[–] doomsdayrs@lemmy.ml 4 points 1 week ago (2 children)

So instead of recognizing characters....

  1. Compress page / text into a handful of pixels.
  2. Feed pixels into a generative AI.
  3. Hope for the best.

I'd rather just use existing OCR systems, where it's easy to trace back how they processed the text.
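
Concretely, the pipeline being mocked looks roughly like this; a minimal PyTorch sketch, assuming a toy convolutional encoder and illustrative sizes (the module names and dimensions here are hypothetical, not the actual paper's architecture):

```python
# Minimal sketch of "optical compression" OCR, not the real model:
# render text as an image, encode it into a small number of vision
# tokens, then hand those tokens to a decoder language model.
import torch
import torch.nn as nn

class ToyVisionEncoder(nn.Module):
    """Downsamples a page image into a short sequence of 'vision tokens'."""
    def __init__(self, d_model: int = 256):
        super().__init__()
        # Aggressive striding: each output token summarizes a large patch,
        # so a whole page collapses into a few hundred embeddings.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=8, stride=8), nn.GELU(),
            nn.Conv2d(64, d_model, kernel_size=4, stride=4),
        )

    def forward(self, page: torch.Tensor) -> torch.Tensor:
        feats = self.conv(page)                  # (B, d_model, H', W')
        return feats.flatten(2).transpose(1, 2)  # (B, H'*W', d_model)

encoder = ToyVisionEncoder()
page = torch.randn(1, 1, 512, 512)   # one rendered page, grayscale
vision_tokens = encoder(page)
print(vision_tokens.shape)           # torch.Size([1, 256, 256]): 256 tokens
# Steps 2-3 above: a decoder LM would then attend to these ~256 tokens
# instead of the thousands of text tokens the page would need directly.
```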

[–] FauxLiving@lemmy.world 3 points 1 week ago

They were able to efficiently encode visual information for use by downstream networks. In this case, the downstream network was a language model trained on an OCR task.

The news is the technique; the OCR software is just a demonstration of it. Encoding visual information efficiently is also key for robotics, where trained networks run inside feedback control loops. Being able to process ten times as much visual data on the same hardware is a very significant increase in capability.
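
That last claim is easy to put rough numbers on; a back-of-the-envelope sketch, where the compute budget and token counts are made-up illustrative values, not measurements:

```python
# With a fixed per-step compute budget, shrinking tokens-per-frame by
# 10x lets the same hardware consume roughly 10x the frames (ignoring
# attention's quadratic term, which makes the win even larger).
FLOPS_BUDGET = 1e12      # hypothetical per-control-step budget
FLOPS_PER_TOKEN = 1e8    # hypothetical cost to process one token

for tokens_per_frame in (2560, 256):   # raw vs. compressed encoding
    frames = FLOPS_BUDGET / (FLOPS_PER_TOKEN * tokens_per_frame)
    print(f"{tokens_per_frame:>5} tokens/frame -> {frames:.0f} frames/step")
# 2560 tokens/frame -> 4 frames/step
#   256 tokens/frame -> 39 frames/step
```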

[–] Moidialectica@hexbear.net 2 points 1 week ago

It doesn't actually process text, which is why it's more efficient: it can take in roughly ten times the text through images without suffering the penalties associated with having that many tokens.
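
The penalties in question are mainly self-attention's quadratic cost and the KV cache's linear cost in sequence length; a quick sketch of why 10x fewer tokens is worth more than 10x (the token counts are illustrative, not from the paper):

```python
# Self-attention compares every token pair, so compute grows with the
# square of the sequence length; KV-cache memory grows linearly.
text_tokens = 10_000    # a long document fed as plain text (illustrative)
vision_tokens = 1_000   # the same document as compressed image tokens

attn_ratio = (text_tokens ** 2) / (vision_tokens ** 2)
kv_ratio = text_tokens / vision_tokens

print(f"attention work: {attn_ratio:.0f}x less")  # 100x less
print(f"KV-cache memory: {kv_ratio:.0f}x less")   # 10x less
```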

[–] metaStatic@kbin.earth 2 points 1 week ago