home.social

#convolutionalneuralnetworks — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #convolutionalneuralnetworks, aggregated by home.social.

  1. 🚨 Breaking News: A new way to procrastinate and pretend to learn emerges! CNN Explainer promises to teach you convolutional neural networks in your browser—because reading real papers was too mainstream, and who needs depth when you have GIFs? 🤓💻📉
    poloclub.github.io/cnn-explain #BreakingNews #Procrastination #Learning #ConvolutionalNeuralNetworks #GIFs #HackerNews #ngated

  2. #ConvolutionalNeuralNetworks (#CNNs for short) are immensely useful for many #imageProcessing tasks and much more...

    Yet you sometimes encounter bits of code with little explanation. Have you ever wondered about the origin of the values used for image normalization on #imagenet?

    • Mean: [0.485, 0.456, 0.406] (for R, G and B channels respectively)
    • Std: [0.229, 0.224, 0.225]

    Strangest to me is the apparent need for three-digit precision. Here, after tracing the origin of these numbers for MNIST and ImageNet, I test whether that precision is really important: guess what, it is not (so much)!

    👉 If you are interested in more details, check out laurentperrinet.github.io/scib

  3. How neural networks work—and why they’ve become a big business (credit: Aurich Lawson / Getty)
    The last decade has seen remarkable improvements in the a... more: arstechnica.com/?p=1604133 #convolutionalneuralnetworks #alexkrizhevsky #neuralnetworks #deeplearning #features #science #alexnet
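The ImageNet normalization constants quoted in the post above are typically applied per channel after scaling pixel values to [0, 1]. Below is a minimal NumPy sketch of that transform, plus a rough illustration of the post's precision question using two-digit constants; the image shape, the rounded values, and the `normalize` helper are assumptions for illustration, not the post's exact experiment.

```python
import numpy as np

# Per-channel ImageNet statistics (R, G, B), as quoted in the post.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(image_uint8, mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """Scale an HxWx3 uint8 image to [0, 1], then standardize per channel."""
    scaled = image_uint8.astype(np.float64) / 255.0
    return (scaled - mean) / std  # broadcasts over the last (channel) axis

# A mid-gray test image: every channel lands slightly above zero,
# since the dataset means sit a bit below 0.5.
gray = np.full((2, 2, 3), 128, dtype=np.uint8)
out = normalize(gray)

# Rough take on the precision question: coarse two-digit constants
# (hypothetical values) shift the result only slightly relative to
# the roughly unit spread of normalized pixel values.
coarse = normalize(gray,
                   mean=np.array([0.49, 0.46, 0.41]),
                   std=np.array([0.23, 0.22, 0.23]))
max_diff = np.abs(out - coarse).max()
```

For real pipelines this is usually done by the framework (e.g. a normalization transform applied after converting the image to a float tensor) rather than by hand.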