this post was submitted on 24 Jan 2024
      270 points (90.9% liked)
      Open Source
    you are viewing a single comment's thread
The reason they are black boxes is that they are function approximators with billions of parameters, and theory has not caught up with practical results. This is why you tune hyperparameters (learning rate, number of layers, number of neurons in a layer, etc.) and run multiple iterations of training to get an approximation of the distribution of the inputs. Training is also sensitive to the order of the inputs: a network trained on the same training set, but in a different order, might converge to an entirely different function. This is why you train on the same inputs in random order over multiple epochs, to hopefully average out such variations. They are black boxes simply because you can't yet prove, theoretically, which function the network has approximated or converged to given the inputs.
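The order sensitivity can be seen even in a toy setting (a minimal sketch with made-up data, not anything specific from the comment): fitting a one-parameter linear model y ≈ w·x with plain SGD on the same three noisy samples, presented in two different orders, ends at different weights, while shuffling over many epochs averages the effect out.

```python
import random

def sgd_train(data, lr=0.05, epochs=1, shuffle=False, seed=0):
    """Fit y ~ w * x by stochastic gradient descent, one sample at a time."""
    rng = random.Random(seed)
    w = 0.0
    order = list(range(len(data)))
    for _ in range(epochs):
        if shuffle:
            rng.shuffle(order)  # random presentation order each epoch
        for i in order:
            x, y = data[i]
            w -= lr * 2 * (w * x - y) * x  # gradient of the squared error (w*x - y)^2
    return w

# Noisy samples of roughly y = 2x (invented numbers for illustration).
data = [(1.0, 2.1), (2.0, 3.7), (3.0, 6.4)]

w_fwd = sgd_train(data)                  # one pass, original order
w_rev = sgd_train(list(reversed(data)))  # one pass, reversed order
print(w_fwd, w_rev)  # same samples, different order -> different weights

# Shuffling over many epochs averages out the order effect.
w_avg = sgd_train(data, epochs=200, shuffle=True)
print(w_avg)
```

With a single pass and a fixed learning rate, the final weight depends on which sample was seen last; with shuffled multi-epoch training, the weight settles near the least-squares fit regardless of the initial ordering.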