home.social

#imagenet — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #imagenet, aggregated by home.social.

  1. "Ironically, several of the people who had been included in the set without any consent are known for their work critiquing surveillance and facial recognition itself, including filmmaker Laura Poitras, digital rights activist Jillian York, critic Evgeny Morozov, and author of Surveillance Capitalism Shoshana Zuboff. "

    (re Microsoft's MS-CELEB)

    excavating.ai

    #AI #Surveillance #Datasets #ImageNet #Microsoft #MS-CELEB #KateCrawford

  2. 📉🤖 Oh, look! Another treatise on why #academia swapped out rational math for the lazy allure of "good enough" #AI. Apparently, #ImageNet and the irresistible siren call of not specifying goals have won over academia's finest. Way to go, Guy Freeman, for enlightening us on how to achieve #mediocrity in the most complex way possible. 🎓💡
    gfrm.in/posts/why-decision-the #rationality #innovation #HackerNews #ngated

  3. ChatGPT's lead developer and his new project: Safe Superintelligence

    Many people know Ilya Sutskever only as an outstanding scientist and programmer who was born in the USSR, co-founded OpenAI, and was among those who ousted CEO Sam Altman from the company in 2023. When Altman was reinstated, Sutskever resigned of his own accord and left for a new startup, Safe Superintelligence. Sutskever did indeed found OpenAI together with Musk, Brockman, Altman, and other like-minded people, and he was the company's chief technical genius. As OpenAI's lead scientist, he played a key role in developing ChatGPT and other products. Ilya is only 38, remarkably young for a star of global stature.

    habr.com/ru/companies/ruvds/ar

    #Ilya_Sutskever #OpenAI #10x_engineer #AlexNet #Safe_Superintelligence #ImageNet #neocognitron #GPU #GPGPU #CUDA #computer_vision #LeNet #Nvidia_GTX_580 #DNNResearch #Google_Brain #Alex_Krizhevsky #Geoffrey_Hinton #Seq2seq #TensorFlow #AlphaGo #Tomas_Mikolov #Word2vec #fewshot_learning #Boltzmann_machine #superintelligence #GPT #ChatGPT #ruvds_articles

  4. #ConvolutionalNeuralNetworks (#CNNs for short) are immensely useful for many #imageProcessing tasks and much more...

    Yet you sometimes encounter bits of code with little explanation. Have you ever wondered about the origin of the values used for image normalization on #imagenet?

    • Mean: [0.485, 0.456, 0.406] (for R, G and B channels respectively)
    • Std: [0.229, 0.224, 0.225]

    Strangest to me is the need for three-digit precision. Here, after tracing the origin of these numbers for MNIST and ImageNet, I test whether that precision really matters: guess what, it does not (much)!

    👉 If you are interested in more details, check out laurentperrinet.github.io/scib
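
    As an illustration (added here, not from the post): a minimal sketch of where these constants conventionally appear, assuming the standard torchvision preprocessing pipeline used with ImageNet-pretrained models. The mean and std are per-channel statistics of the ImageNet training images after scaling pixels to [0, 1].

    ```python
    # Typical ImageNet evaluation preprocessing for a pretrained model.
    from torchvision import transforms

    imagenet_preprocess = transforms.Compose([
        transforms.Resize(256),       # resize shorter side to 256 px
        transforms.CenterCrop(224),   # standard 224x224 center crop
        transforms.ToTensor(),        # HWC uint8 in [0, 255] -> CHW float in [0, 1]
        transforms.Normalize(
            mean=[0.485, 0.456, 0.406],  # per-channel means (R, G, B)
            std=[0.229, 0.224, 0.225],   # per-channel standard deviations
        ),
    ])
    ```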

  5. How a stubborn #computerscientist accidentally launched the #deeplearning boom
    "You’ve taken this idea way too far," a mentor told Prof. Fei-Fei Li, who was creating a new image #dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories. Then in 2012, a team from the University of Toronto trained a #neural network on #ImageNet, achieving unprecedented performance in image recognition, dubbed #AlexNet.
    arstechnica.com/ai/2024/11/how #AI

  13. "#AI is “promising” nothing. It is #people who are promising – or not promising. AI is a piece of software. It is made by people, deployed by people and #governed by people... in terms of urgency, I’m more concerned about ameliorating the risks that are here and now [than by the risks of the techbro SkyNet singularity]."

    — Fei-Fei Li, creator of #ImageNet, whose memoir "The Worlds I See" is out now.

    theguardian.com/technology/202

  7. @lowd I remember when most ML applications were variations on #MNIST. And #Imagenet, but I only had enough compute at the time to play around with MNIST. But yeah, even then "recommendation engines" were starting to be the first things anyone mentioned, because they were low-hanging fruit: something of immediately obvious commercial value, with terrific training data and an easy task for deployment.

  8. Re-reading 'On the genealogy of machine learning datasets: A critical history of ImageNet' by @alexhanna. So clear that the LLM debacle goes back to the start of the DL boom: its data fetish, flat universalism, social illiteracy & contempt for workers. journals.sagepub.com/doi/full/
    #AI #datasets #Imagenet #resistingAI

  9. A DNN Optimizer that Improves over AdaBelief by Suppression of the Adaptive Stepsize Range

    Guoqiang Zhang, Kenta Niwa, W. Bastiaan Kleijn

    Action editor: Rémi Flamary.

    openreview.net/forum?id=VI2JjI

    #optimizers #imagenet #optimizer
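
    For context on the title (an illustrative sketch of the baseline only, not the paper's method): AdaBelief scales each parameter's step by the "belief" in the current gradient, an exponential moving average of the gradient's squared deviation from its running mean; the "adaptive stepsize range" of the title is the spread of those per-parameter steps, which the paper proposes to suppress.

    ```python
    # Minimal NumPy sketch of one baseline AdaBelief update (Zhuang et al., 2020).
    import numpy as np

    def adabelief_step(theta, g, m, s, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-12):
        m = b1 * m + (1 - b1) * g                    # EMA of gradients
        s = b2 * s + (1 - b2) * (g - m) ** 2 + eps   # EMA of squared gradient "surprise"
        m_hat = m / (1 - b1 ** t)                    # bias corrections, step t is 1-indexed
        s_hat = s / (1 - b2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)  # per-parameter adaptive step
        return theta, m, s
    ```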

  10. Contrastive Attraction and Contrastive Repulsion for Representation Learning

    Huangjie Zheng, Xu Chen, Jiangchao Yao et al.

    Action editor: Yanwei Fu.

    openreview.net/forum?id=f39UID

    #softmax #representations #imagenet

  11. Supervised Knowledge May Hurt Novel Class Discovery Performance

    Ziyun Li, Jona Otholt, Ben Dai, Di Hu, Christoph Meinel, Haojin Yang

    Action editor: Vikas Sindhwani.

    openreview.net/forum?id=oqOBTo

    #supervised #labeled #imagenet

  12. Towards Large Scale Transfer Learning for Differentially Private Image Classification

    Harsh Mehta, Abhradeep Guha Thakurta, Alexey Kurakin, Ashok Cutkosky

    openreview.net/forum?id=Uu8WwC

    #private #privately #imagenet

  13. Object-aware Cropping for Self-Supervised Learning

    Shlok Kumar Mishra, Anshul Shah, Ankan Bansal et al.

    openreview.net/forum?id=WXgJN7

    #cropping #imagenet #supervised

  14. For ages timm (github.com/rwightman/pytorch-i) had 2 models at or past 87% top-1 on #ImageNet. Today there are ~18 of them. Prepping a big update while finishing off some experiments and cleanup, I'm sitting on another 20+ (9 > 88%) weights trained / fine-tuned in timm or ported from elsewhere. ImageNet-1k is heading in the direction of #CIFAR -- a side benchmark for adaptation and eval.
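
    An illustrative sketch (added here, not from the post) of enumerating and loading those pretrained weights, assuming a recent timm release; the model name is just an example:

    ```python
    import timm

    # Model names that ship pretrained ImageNet weights in the installed timm version.
    pretrained = timm.list_models(pretrained=True)
    print(f"{len(pretrained)} pretrained models available")

    # Load one in eval mode; num_classes defaults to ImageNet-1k's 1000 classes.
    model = timm.create_model("convnext_base", pretrained=True).eval()
    ```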

  26. Laut einer neuen MIT-Studie sind die zehn am häufigsten verwendeten KI-Datensätze mit vielen, teilweise eklatanten Etikettierungsfehlern behaftet.​ Falsche Trainingsdaten verzerren Güte-Einschätzung von KI-Modellen