home.social

#bayes — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #bayes, aggregated by home.social.

  1. I like nonconformists, except in my chains. WTF is happening to the gold one?! #bayes #MCMC
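
The joke lands because one chain wandering off on its own is exactly what convergence diagnostics flag. A minimal sketch (my own illustration, not from the post) of the classic Gelman-Rubin R-hat, which stays near 1 when chains agree and blows up when one chain explores a different region:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m, n) array of
    m chains with n draws each. Values near 1 indicate good mixing."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(4, 1000))  # four conforming chains
bad = good.copy()
bad[3] += 10.0                               # the "gold" chain drifted off
```

With the shifted chain, the between-chain variance dominates and R-hat jumps well above 1.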

  6. Hehehehe, we got another reviewer confused by our use of an 89% credible interval.
    Cue the beauty of prime numbers! And it is my co-author's birth year, I am so happy that I can put this in the answer 😅!

    #bayesian #academicchatter #bayes @rlmcelreath
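
For the confused reviewer: an equal-tailed 89% credible interval is just the 5.5% and 94.5% posterior quantiles; nothing is special about 95%. A minimal sketch (hypothetical draws, not the poster's analysis):

```python
import numpy as np

def credible_interval(draws, prob=0.89):
    """Equal-tailed credible interval from posterior draws."""
    tail = (1.0 - prob) / 2.0
    return np.quantile(draws, [tail, 1.0 - tail])

rng = np.random.default_rng(1)
draws = rng.normal(0.0, 1.0, size=100_000)  # stand-in posterior samples
lo, hi = credible_interval(draws)           # roughly (-1.60, 1.60)
```

Any `prob` works the same way, which is rather the point of choosing an arbitrary-looking one.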

  11. Returning to Bayesian computation now after some time away, I was delighted to see active work on JAGS 5.0!
    sourceforge.net/projects/mcmc-
    #Bayes #JAGS

  15. 🤔 Ah, yet another "innovative" tool promising to fix your #non-deterministic #bugs by throwing #Bayes at #Git like it's some kind of magic wand. 🔮 Because clearly, what we all need in our #debugging toolbox is more statistical hand-waving and fewer #practical #solutions. 😂
    github.com/hauntsaninja/git_ba #innovative #tools #HackerNews #ngated

  16. 🖤💙 Oh, how nice! The International Labour Organization provided the recording of yesterday's seminar on my new forecasting system for labour market outcomes and the R package bpvars I developed for them! It's all very good 🤍

    youtube.com/watch?v=ef3eXbqNbr8

  17. When I was a child, I thought the world had things that were true and things that were false, i.e., things were "black and white".

    Things happened to me, including reading "Gödel, Escher, Bach: an Eternal Golden Braid" #Godel #GodelEscherBach, and I realized "Oh! There’s a gray area! (And not only that, the very edges of the gray area are fuzzy!)"

    And then I learned about #Bayes (and #Laplace) and realized: "Oh shit! It’s **all** gray!"

    It feels like you **know** some things to be true because you have assigned them such high probabilities. So high, they seem certain. Sorry. It’s not actually 1. And always remember: probability is what you **know**; reality is outside of that (just like "is your blue the same as my blue?"). Yes! Your model is good enough to navigate the world and make good decisions; but absolutely don’t confuse that with having no room left to learn.

    I know I said this in a weird way, but keep growing.

  21. There’s a strong urge to believe what you wish instead of what you can prove. Computer rumors are a great example. Many rumors have no basis other than being a feature someone wants. They call it "wish casting".

    We want the world to be black and white. Some given statement is either true or false. But it’s not. Gödel #Godel describes at least three states: true, false, and unprovable (e.g., the statement "This statement is false". Can’t be true or false; it’s unprovable. Maybe there’s a better name.)

    But it’s worse than that.

    In science, a theory isn’t true … it’s just the best explanation we have so far. The whole endeavor of science is to keep finding better explanations. To make good decisions you don’t need the absolute best explanation, just one good enough to guide you to beneficial choices. (I said "prove" before, but to be more accurate I should be talking not about what you can prove, but about what you can’t disprove.)

    #Bayes (really #Laplace) says a given notion isn’t true, it’s actually true-with-some-probability. Each new thing you observe impacts that #Probability. This is the actual math behind the #ScientificMethod. And it’s the truth of the world. Your beliefs must adapt to your observations, constantly, forever.

    If you have unshakable faith in some set of "facts", you’re probably doing it wrong. Even when you’re right, you could be righter.

    Of course, if you don’t adjust your beliefs with new input, if you don’t test, if you have "facts" instead of "very probable theories". If you believe things because of how strongly the person who convinced you believed instead of what they could actually show you. If you believe simply because that’s what your parents taught you. Then, well, you **might** be right (even a stopped clock is right twice a day). But at best you’re not going to make good decisions for yourself, and at worst you’re going to try to tell others what to do based on an inaccurate understanding.

    It’s messy; and that’s just how it is.
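
The "true-with-some-probability, updated by each observation" idea the post describes is one line of arithmetic. A minimal sketch (entirely my own toy numbers) of repeated Bayesian updating, showing a strongly held belief eroding under contrary evidence without ever reaching 0 or 1:

```python
def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """One step of Bayes' rule: P(hypothesis | observation)."""
    evidence = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    return prior * p_obs_if_true / evidence

# Start fairly sure (0.9), then repeatedly observe something twice as
# likely if the hypothesis is false (0.6) as if it is true (0.3).
belief = 0.9
for _ in range(5):
    belief = bayes_update(belief, 0.3, 0.6)
# belief has dropped to about 0.22; more evidence would push it lower,
# but never to exactly zero.
```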

  25. Busy week, I have not posted much from #RSS2025. More will follow in the next few days.

    For now, I enjoyed chatting with Robert Grant and am reading his co-authored book "Bayesian #MetaAnalysis" which he kindly gave me a copy of 🙇
    robertgrantstats.co.uk/bma-boo

    I am always looking for combinations of unusual and cross-cutting themes for teaching/training, so I'll do a parallel read with #MixtureModels in this area: link.springer.com/chapter/10.1

    #SysReview #SystematicReview #Bayes

  26. Dear colleagues working with Markov-chain Monte Carlo: could you share any works that explore Markov-chain "convergence", and precision of mean estimates, with methods that use *quantiles* (or interquartile range, or median absolute deviation, or similar), rather than standard deviation and similar quantities?

    Just to be clear, I don't mean estimation of quantiles, but estimation *by means of* quantiles.

    Thank you!

    #statistics #stats #probability #markovchain #mcmc #bayesian #bayes
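
I don't know the literature the question asks for, but "estimation *by means of* quantiles" can at least be illustrated. A hypothetical sketch (my own construction, not from any published diagnostic) of an R-hat-style mixing check built from interquartile ranges instead of variances:

```python
import numpy as np

def iqr_mixing_ratio(chains):
    """Hypothetical quantile-based mixing check: pooled IQR divided by
    the mean within-chain IQR. Near 1 when chains overlap; larger when
    some chain explores a different region."""
    within = np.mean([np.subtract(*np.percentile(c, [75, 25])) for c in chains])
    pooled = np.subtract(*np.percentile(np.concatenate(chains), [75, 25]))
    return pooled / within

rng = np.random.default_rng(2)
mixed = rng.normal(size=(4, 2000))  # four overlapping chains
stuck = mixed.copy()
stuck[0] += 8.0                     # one chain stuck elsewhere
```

The same swap (quantile spread for variance) could in principle be applied to precision-of-mean arguments, which is presumably what the cited works would formalize.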

  27. Substances that slow down migrating cancer cells could be anti-metastatic drug candidates. In this preprint we describe the first computational model for quantitative analysis of cell migration assays for substance screening. #bayes
    #cellmigration #metastasis
    biorxiv.org/content/10.1101/20

  28. #Bayesian analysis simplified (#BAYAS): our paper is out! We hope that many biologists & other users will find BAYAS helpful for experimental planning & data analysis. No programming or installation required. Will hopefully lead to reduction of lab animal numbers. #3R #bayes doi.org/10.1093/bioinformatics

  32. Detective work with #genomes: some invasive species have been introduced deliberately, others inadvertently. A new #statistical framework helps to track what probably happened. Demonstration object is the Pacific oyster. #bayes pnas.org/doi/abs/10.1073/pnas.

  33. We'd like to benchmark our #rust DPMM sampler against some alternatives. What's the fastest you know of? #nonparametric #bayes #MCMC

  34. Immune escape of Hepatitis B virus (HBV): based on >500 HBV genomes and clinical data we could discover many mutations by which the virus can escape immune recognition. We also show how the adaptation of HBV to the immune system changes over the course of the infection. The key tool in this collaboration of virologists and bioinformaticians was our #Bayesian HAMdetector method.

    doi.org/10.1016/j.jhep.2024.10

    github.com/HAMdetector/Escape.

    #infections #HBV #immunity #medicine #Bayes #HAMdetector

  39. The Naive Bayes Classifier: Core Idea, Modifications, and a From-Scratch Implementation in Python

    The naive Bayes classifier is a probabilistic classifier based on Bayes' theorem with a strong (naive) assumption that the features are mutually independent given the class, which greatly simplifies classification: one-dimensional probability densities are estimated instead of a single multidimensional one. Alongside the theory and a from-scratch Python implementation, the article also works through a small example of using naive Bayes for spam filtering, with all calculations done by hand.

    habr.com/ru/articles/802435/

    #наивный_байесовский_классификатор #naive_bayes #принцип_работы #реализация_с_нуля #python #data_science #машинное_обучение #байес #bayes
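
The spam-filtering example the article describes can be sketched compactly. This is my own minimal multinomial naive Bayes with Laplace smoothing (toy data, not the article's code):

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, label) pairs. Returns per-class log-priors
    and Laplace-smoothed per-word log-likelihoods."""
    labels = [y for _, y in docs]
    vocab = {w for toks, _ in docs for w in toks}
    model = {}
    for c in set(labels):
        toks = [w for t, y in docs if y == c for w in t]
        counts, total = Counter(toks), len(toks)
        model[c] = (
            math.log(labels.count(c) / len(labels)),
            {w: math.log((counts[w] + 1) / (total + len(vocab))) for w in vocab},
        )
    return model

def predict(model, tokens):
    def score(c):
        log_prior, log_lik = model[c]
        return log_prior + sum(log_lik[w] for w in tokens if w in log_lik)
    return max(model, key=score)

docs = [("free cash prize".split(), "spam"),
        ("cash prize now".split(), "spam"),
        ("meeting agenda today".split(), "ham"),
        ("lunch meeting today".split(), "ham")]
model = train(docs)
```

Working in log space avoids underflow; the `+1` smoothing is what keeps an unseen word from zeroing out a class.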

  40. Typical Data Analyst Tasks, Part 1: A Metric Dropped

    In the previous article we looked at non-obvious problems of A/B testing and how to deal with them [ link ]. But it often happens that A/B testing cannot be run when rolling out new functionality. For example, this is typical for marketing campaigns aimed at a mass audience. In such a situation there is a risk that users in the control group, who cannot access the advertised functionality, will start re-registering en masse. Another possible scenario is a significant amount of negative feedback due to perceived discrimination. Yet evaluating such rollouts is one of the most common tasks analysts face. If the metrics only improve, that is usually easy to attribute to good work; but if a metric gets worse, a task immediately lands on the analyst. In this note we consider the first part of the problem: did the metric actually drop, and if so, is it worth digging further?

    habr.com/ru/articles/787098/

    #big_data #ttest #mannwhitney #bayes #bootstrap #permutations #аналитика #analytics #analysis #data_science
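
The "did the metric really drop" question, with the permutation approach the tags mention, can be sketched on hypothetical data (my own toy example, not the article's):

```python
import numpy as np

def permutation_pvalue(before, after, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means: how often
    does a random relabeling produce a gap at least as large?"""
    rng = np.random.default_rng(seed)
    observed = abs(after.mean() - before.mean())
    pooled = np.concatenate([before, after])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(before)], pooled[len(before):]
        hits += abs(b.mean() - a.mean()) >= observed
    return hits / n_perm

rng = np.random.default_rng(3)
before = rng.normal(10.0, 2.0, 500)  # metric last week
after = rng.normal(9.0, 2.0, 500)    # metric this week: a real drop
p = permutation_pvalue(before, after)
```

A small p-value says the drop is unlikely to be noise, i.e. it is worth digging further.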

  42. Is there a good reason to report 94% HDI (highest density intervals) for regression coefficients instead of 95% HDI? #Bayes #Bayesian #BayesianStatistics
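
Whatever coverage level one picks, an HDI can be computed from posterior draws as the narrowest window containing that fraction of samples. A minimal sketch for unimodal posteriors (illustrative draws of my own choosing):

```python
import numpy as np

def hdi(draws, prob=0.95):
    """Narrowest interval containing `prob` of the sorted draws.
    Valid for unimodal posteriors."""
    x = np.sort(draws)
    k = int(np.ceil(prob * len(x)))           # samples inside the window
    widths = x[k - 1:] - x[: len(x) - k + 1]  # every candidate window
    i = np.argmin(widths)
    return x[i], x[i + k - 1]

rng = np.random.default_rng(4)
draws = rng.gamma(2.0, 1.0, size=50_000)  # skewed stand-in posterior
lo95, hi95 = hdi(draws, 0.95)
lo94, hi94 = hdi(draws, 0.94)
```

On a skewed posterior the HDI sits tighter around the mode than the equal-tailed interval does, which is the usual argument for reporting it; the 94-vs-95 choice itself only changes the width slightly.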

  43. Just written a hopefully accessible introduction to the replication crisis in biology with focus on problems in statistics, and how Bayesian modeling could help. Based on a talk for a cohort of mostly experimental graduate students.
    manywordsandnumbers.org/2023/0

    #replicationcrisis #reproducibility #biology #bayes

  44. Who'd like to work out the "likelihood" that all this is simply coincidence? 😛

    William #Blake died in 1827, the same year as Pierre-Simon #LaPlace, and is buried in the same cemetery as Thomas #Bayes. In 1788 Blake wrote this:

    "...the ratio of all we have already known, is not the same that it shall be when we know more."

    #BayesianInference
    #InverseProbability
    #WilliamBlake
    #PierreSimonLaPlace
    #ThomasBayes
    #Statistics

  45. The 5th Symposium on Advances in Approximate Bayesian Inference welcomes your accepted ML journal & conf papers via fast-track! Come present your work at the poster session.

    OpenReview: openreview.net/group?id=approx

    CfP: approximateinference.org/call/
    #MachineLearning #Bayes #ICML2023 #AABI

  46. Advances on Approximate Bayesian Inference (AABI 2023) is accepting nominations for reviewers, invited speakers, panelists, and future organizing committee members. Let us know who you'd like to hear from! Self-nominations accepted.

    forms.gle/gBZUQsmXgNFmLCm2A

    More information about the symposium can be found on our homepage: approximateinference.org/

    #aabi #bayes #approximateinference #machinelearning #icml2023

  47. On Thursday afternoon (15:45) I'll host a panel discussion on probabilistic programming (and what it takes to get a new algorithm added to a PPL package) with panelists
    - Mitzi Morris, #Stan / Columbia University
    - Junpeng Lao @junpenglao, TFP / #PyMC / Google
    - Tor Fjelde, #TuringLang / University of Cambridge
    - Henri Pesonen @henri_pesonen, #ELFI / Oslo University Hospital

    #BayesComp2023 #Bayes #MCMC

  48. I'm very excited to announce that everyone's favourite Bayesian symposium is back for 2023! 🚀🚀
    The 5th Symposium on Advances in Approximate Bayesian Inference (AABI) will take place in 🏖️Honolulu Hawaii🌴, Sunday July 23rd, Co-Located with ICML!

    Website: approximateinference.org
    #aabi #machinelearning #bayes #icml