Sunflower 09/19/2024 (Thu) 23:48 Id: 7be093 No.9072 del
>>9069
I'm more interested in understanding how the activation function is written and why the weights work in the first place. The simplest examples are too simple: only 4 neurons giving a 1 or 0 as output, which doesn't make it easy to see how this scales to larger networks. I could ask Bing to write me a network like that and it would be functional, but from there to the larger ones, who's going to connect the dots? Does anyone really know how it works? They're saying no one really knows what's inside the neural networks in general use, and that the developers don't understand them either. For reference, here is a minimal sketch of what one of those tiny examples usually looks like under the hood, assuming sigmoid activations and 4 hidden neurons; all the weight values are arbitrary placeholders, not from any real trained model.
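```python
import math

def sigmoid(x):
    # The "activation function": squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # One neuron: weighted sum of inputs plus a bias, passed through the activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Toy network: 2 inputs -> 4 hidden neurons -> 1 output neuron.
# Placeholder weights; in a real network these come from training, not by hand.
hidden_layer = [
    ([ 0.8, -0.5],  0.1),
    ([-0.3,  0.9], -0.2),
    ([ 0.6,  0.6],  0.0),
    ([-0.7,  0.4],  0.3),
]
output_neuron = ([0.5, -0.8, 0.7, 0.2], -0.1)

def forward(inputs):
    hidden = [neuron(inputs, w, b) for w, b in hidden_layer]
    out = neuron(hidden, *output_neuron)
    # Threshold the continuous output to get the 1-or-0 answer the small examples give.
    return 1 if out >= 0.5 else 0

print(forward([1.0, 0.0]))  # prints 0 or 1 depending on the placeholder weights
```
A larger network is the same arithmetic repeated over more layers and far more neurons per layer; the individual steps stay this simple, it's the sheer number of weights interacting that makes the whole thing hard to interpret.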