home.social

#image-analysis — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #image-analysis, aggregated by home.social.

  1. Registration is open!

    Want to better validate your #AI methods in #imageanalysis?

    Join our 3-part online workshop on June 3, 9 & 25, from 9 AM to 12 PM.

    Learn to choose metrics, quantify uncertainty & assess the robustness of rankings.

    ⏳ Register by May 31 👉 bit.ly/Validating-AI-for-Image

    #imaging #training

    @association @helmholtz_hmc

  2. Doing #AI for #imageanalysis? Learn how to validate your results properly:

    1️⃣Select appropriate performance metrics
    2️⃣Quantify model performance uncertainty
    3️⃣Assess the robustness of model comparisons

    🗓️ June 3 | 9 | 25, 9-12

    Registration opens May 6 👉 bit.ly/Validating-AI-for-Image

    Instructors: Annika Reinke, Helmholtz Imaging, DKFZ & Evangelia Christodoulou, DKFZ

    This course is organized in cooperation with HIDA.

    #imaging #training @association

  3. The image in question is not outdated. A thorough comparison of distance, edges, sky, and surrounding walls confirms its relevance. #ImageAnalysis #OSINT

  4. 🔬 Registration is open for our pilot Introduction to napari Workshop!

    napari is a powerful open-source image viewer for scientific data analysis in Python. This hands-on workshop will get you exploring multi-dimensional datasets fast.

    ✅ Only $20 USD
    ✅ Limited to 20 people
    ✅ Perfect for biologists, imaging specialists & data scientists

    Two workshops at two different times.

  5. What would you use to align sets of multiple (~20) large (2-4 GB) #microscopy images?

    For smaller subset images, ImageJ plugins that compute transformations from SIFT landmark correspondences work well. However, standard ImageJ (Bio-Formats) file handling doesn’t cope well with files this large. For the plugins that do handle large files (the BigData family) or chunked storage (e.g. zarr), I don’t know how to implement SIFT (or similar) - e.g. for BigWarp I can only find manual landmark annotation, i.e. no option to create landmarks via other plugins.

    My images are iterative fluorescence whole slide scans of the same slide with a constant nuclear stain and varying other stains. There is some x/y shift and rotation as well as warping - nothing major, but I need nearly pixel perfect alignment (e.g. QuPath+Warpy worked well on larger images but was too imprecise).
    Stitching happens on the fly during imaging, and I’m not sure I can extract the tiles faithfully, so the ASHLAR pipeline didn’t seem applicable. I’ve seen VALIS recommended, but its implementation seemed daunting, and since the nuclear stain provides reasonable fiducial points, the full workflow seemed like overkill.

    Ideally I would want a scripted solution as this has to scale up to hundreds of such sets eventually and downstream processing is in python+R anyhow.

    #imageanalysis #spatial #imaging

  6. @simon_brooke

    Eerie… but then again, context is everything. Google has access to a huge amount of information in the images themselves and in the EXIF metadata, if available. Correlating all of this across its huge user base provides possibilities we cannot even imagine.

    These companies and their tools already know more about us than we know about ourselves. We are the product.

    Ever wondered why we need rules and regulations around privacy?

    #ai #privacy #google #ImageAnalysis

  7. I have two open positions in my lab at the Advanced Light Microscopy Unit Centre for Genomic Regulation (CRG):

    - Imaging Scientist (permanent position. Deadline 11th Nov.) recruitment.crg.eu/content/job

    - Entry-Level Imaging Scientist (12 months fixed-term position. Deadline 18th Nov.) recruitment.crg.eu/content/job

    If you have any questions don’t hesitate to reach out.

    Boosts appreciated.

    #getfedihired #fedihire #jobSearch #jobposting #Microscopy #Optics #ImageAnalysis

    Last week to sign up for our FREE webinar series on "Mastering Colocalization Analysis": from raw image to scientific results in minutes. Reserve your seat now!
    svi.nl/webinarinvitation
    #imaging #microscopy #cellbiology #fluorescence #imageanalysis #colocalization

  9. Today our team member Anna Breger tells her story - “Many little twists and turns have brought me to where I am now and I am absolutely thrilled about my interdisciplinary research project working on image analysis and historical music manuscripts.”

    ➡️ Find her full story at hermathsstory.eu/anna-breger/

    #AppliedMathematics #ImageAnalysis #Music #InterdisciplinaryResearch #NonTraditionalPathways #DataScience #HerMathsStory

  10. version 0.7-0 of my R package `bayesImageS` is now available on CRAN for Linux and macOS
    (Windows binaries are still being built and should be available soon)

    The main change is a reduction in the console output for the exchange algorithm. There were also some minor changes to fix WARN and NOTE due to compatibility issues with the latest RcppArmadillo, which now uses the Armadillo 15 linear algebra library by default.

    cran.r-project.org/package=bay

    #rstats #bayesian #ImageAnalysis

  11. Transformer-Ensemble-Based Implicit Spectral–Spatial Functions for Arbitrary-Resolution Hyperspectral Pansharpening.
    IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-19, 2025, Art no. 5519519
    doi.org/10.1109/TGRS.2025.3589
    #ai #transformers #imageanalysis
    bsky bsky.app/profile/clirspec.org

  12. Last week to apply to the Light-Sheet Image Analysis Workshop.

    A five-day practical course on the processing and analysis of light-sheet microscopy imaging data. It will take place in Santiago, Chile, from January 5–9, 2026.

    Deadline: August 8.

    Learn more and apply here: lightsheetchile.cl/light-sheet

    #Microscopy #Lightsheet #ImageProcessing #ImageAnalysis #LatinAmerica #GlobalSouth

  13. AI: Explainable Enough

    They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

    Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.

    Coming from a microscopy and bio background with a strong inclination towards image analysis I had picked up deep learning as a way to be lazy in lab. Why bother figuring out features of interest when you can have a computer do it for you, was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis and definitely not if you couldn’t explain the details. 

    What the domain expert user doesn’t want:
    – How a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and also to a doctor.

    What the domain expert desires: 
    – Help at the lowest level of detail that they care about. 
    – That the AI identifies features A, B, and C, and that when you see A, B & C it is likely to be disease X.

    Most users don’t care how deep learning really works. So, if you start giving them details like the IoU score of the object-detection bounding box, or whether it was YOLO or R-CNN that you used, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image, then the AI might be right, but the user does not get to participate in the process. Not to mention that regulatory risk goes way up.

    This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It’s because the programmer wants to ensure that the code “works”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake-mix kind of way, let the user add the egg.

    Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give every detail. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an AI-human causal prediction machine, where the causes, i.e. the intermediate-level features, inform the user and build confidence toward the final outcome. The deep learning part is still a black box, but the user doesn’t mind, because you aid their thinking.

    I’m excited by some new developments like REX, which retrofit causality onto standard deep learning models. With improvements in performance, user preferences for detail may change, but I suspect that the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.

    #AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI

  14. To wrap this up: Both tools are easy to test. I highly recommend trying them on your own data to see what works best for your use case.

    I’ll include #CellSeg3D in our next #Napari #bioimage analysis course (fabriziomusacchio.com/teaching). Curious what impressions and feedback the students will share. 🧪🔍

    What I really like about @napari is how well it integrates modern #Python tools. Great to have such a flexible, evolving #opensource platform for (bio) #imageanalysis! 👌

  15. 👏 Big congrats to Annika Reinke for winning the Hector Foundation Prize 2025 for Metrics Reloaded, setting new standards for AI in image analysis.

    Learn more, explore the tool & meet all awardees in a video 👉 helmholtz-imaging.de/news/hect

    #helmholtz #helmholtzimaging #imaging #metrics #metricsreloaded #AI #imageanalysis

    @association @DKFZ

  16. Day 3 at #HIconference2025 wrapped with exciting talks on #AI for #imageanalysis, data integration & moonshot projects.

    A big thank you to all speakers, chairs & participants!

    See you next year!

    #HelmholtzImaging #imaging #Helmholtz

  17. Image analysts: What is your go-to method for batch converting image files from a proprietary format to a TIFF file? If this should be done at all...

    I’ve been using FIJI to open and re-save mine. Although this gives me another look at the image as a QC step, I am using Napari more and more and have found TIFFs easier to load into Python.

    #Microscopy #ImageAnalysis #Napari #FIJI
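For the scripted end of this, a minimal batch-conversion sketch in Python, assuming `tifffile` for writing and leaving the vendor-specific reading to a pluggable `read_fn` (e.g. AICSImageIO, or Bio-Formats via pyimagej); `batch_convert` and `dest_path` are illustrative names, not an existing API:

```python
from pathlib import Path
import numpy as np
import tifffile

def dest_path(src, dst_dir):
    """Map e.g. scans/slide1.nd2 -> dst_dir/slide1.ome.tif."""
    return Path(dst_dir) / (Path(src).stem + ".ome.tif")

def batch_convert(src_dir, dst_dir, pattern, read_fn):
    """Convert every file matching `pattern` to OME-TIFF.

    `read_fn(path) -> np.ndarray` is the vendor-specific reader,
    e.g. `lambda p: AICSImage(p).data` with aicsimageio installed.
    """
    dst_dir = Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(Path(src_dir).glob(pattern)):
        data = np.asarray(read_fn(src))
        # BigTIFF for anything that would overflow classic TIFF (>2 GB).
        tifffile.imwrite(dest_path(src, dst_dir), data,
                         bigtiff=data.nbytes > 2**31)

# Usage sketch:
# batch_convert("scans", "tiffs", "*.nd2",
#               lambda p: AICSImage(p).get_image_data("CZYX"))
```

FIJI can do the same headlessly with a Bio-Formats macro, but a Python loop like this slots directly into a napari-based workflow.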

  18. TechCrunch: The latest viral ChatGPT trend is doing ‘reverse location search’ from photos . “This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely ‘reason’ through uploaded images. … These image-analyzing capabilities, paired with the models’ ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, […]

    https://rbfirehose.com/2025/04/21/techcrunch-the-latest-viral-chatgpt-trend-is-doing-reverse-location-search-from-photos/