this post was submitted on 14 Jun 2024
43 points (92.2% liked)

Technology


Neural networks have become increasingly impressive in recent years, but there's a big catch: we don't really know what they are doing. We give them data and ways to get feedback, and somehow, they learn all kinds of tasks. It would be really useful, especially for safety purposes, to understand what they have learned and how they work after they've been trained. The ultimate goal is not only to understand in broad strokes what they're doing but to precisely reverse engineer the algorithms encoded in their parameters. This is the ambitious goal of mechanistic interpretability. As an introduction to this field, we show how researchers have been able to partly reverse-engineer how InceptionV1, a convolutional neural network, recognizes images.
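
As a rough illustration of the kind of tooling this involves, here is a minimal feature-visualization sketch in PyTorch: it loads torchvision's GoogLeNet (an implementation of InceptionV1) and optimizes a random input image so that one channel of one layer fires strongly, which is a common first step when trying to work out what a unit has learned. The layer (`inception4a`) and channel index are arbitrary choices for illustration, not ones taken from the video.

```python
# Feature-visualization sketch: find an input pattern that strongly
# activates one channel of InceptionV1 (torchvision's GoogLeNet).
# The layer and channel below are illustrative choices only.
import torch
import torchvision.models as models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only optimize the input image

activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output

layer, channel = model.inception4a, 97  # arbitrary layer/channel for the demo
layer.register_forward_hook(hook)

# Start from random noise and do gradient ascent on the pixels.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(256):
    optimizer.zero_grad()
    model(img)
    # Maximize the mean activation of the chosen channel.
    loss = -activations["feat"][0, channel].mean()
    loss.backward()
    optimizer.step()

# `img` now approximates the kind of visual pattern this channel responds to.
```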

PipedLinkBot@feddit.rocks 1 point 4 months ago

Here is an alternative Piped link:

https://piped.video/jGCvY4gNnA8?si=S4koY5QBcuSFEfbP

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out on GitHub.