home.social

#linearregression — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #linearregression, aggregated by home.social.

  1. About metrics for measuring the agreement between observed and predicted values in regression on continuous outcomes:
    Reasons to avoid R² and use RMSE instead: feat.engineering/03-Review_of_

    From Max Kuhn @topepo, Kjell Johnson (2019), "Feature Engineering and Selection: A Practical Approach for Predictive Models"

    #prediction #dataDev #modelEvaluation #regression #modelling #linearRegression #modeling #probability #probabilities #statistics #stats #gotcha
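
A minimal numpy sketch of the gotcha behind that recommendation, on synthetic data: R² is measured relative to the spread of the outcome, so identical prediction errors can produce a near-perfect or a very poor R², while RMSE (reported in the outcome's own units) stays the same.

```python
import numpy as np

def rmse(y, pred):
    """Root mean squared error: same units as the outcome."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(pred)) ** 2)))

def r2(y, pred):
    """R^2 = 1 - SSE/SST: depends on the spread of y, not only on the errors."""
    y, pred = np.asarray(y), np.asarray(pred)
    sse = np.sum((y - pred) ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    return float(1.0 - sse / sst)

rng = np.random.default_rng(0)
errors = rng.normal(0.0, 1.0, 200)        # the *same* errors in both scenarios

y_wide = np.linspace(0.0, 100.0, 200)     # outcome with a wide range
y_narrow = np.linspace(0.0, 3.0, 200)     # outcome with a narrow range

rmse_wide, r2_wide = rmse(y_wide, y_wide + errors), r2(y_wide, y_wide + errors)
rmse_narrow, r2_narrow = rmse(y_narrow, y_narrow + errors), r2(y_narrow, y_narrow + errors)

print(f"wide:   RMSE={rmse_wide:.2f}  R^2={r2_wide:.3f}")
print(f"narrow: RMSE={rmse_narrow:.2f}  R^2={r2_narrow:.3f}")
```

The two RMSE values match; the two R² values do not, which is why R² can flatter a model that happens to be evaluated on a high-variance test set.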

  6. Before diving into deep learning hype, remember the power of classic algorithms. Linear regression, decision trees, and thoughtful feature engineering still drive real‑world analytics and revenue. Master these fundamentals and your neural nets will perform better, faster, and cheaper. Curious how the basics outpace the buzz? Read on. #NeuralNetworks #LinearRegression #DecisionTrees #FeatureEngineering

    🔗 aidailypost.com/news/master-fu

  8. @data @datadon 🧵

    How to assess a statistical model?
    How to choose between variables?

    Pearson's #correlation is irrelevant if you suspect that the relationship is not a straight line.

    If monotonic relationship:
    "#Spearman’s rho is particularly useful for small samples where weak correlations are expected, as it can detect subtle monotonic trends." It is "widespread across disciplines where the measurement precision is not guaranteed".
    "#Kendall’s Tau-b is less affected [than Spearman’s rho] by outliers in the data, making it a robust option for datasets with extreme values."
    Ref: statisticseasily.com/kendall-t

    #normality #normalDistribution #modeling #dataDev #AIDev #ML #modelEvaluation #regression #modelling #dataLearning #featureEngineering #linearRegression #probability #probabilities #statistics #stats #correctionRatio #Pearson #bias #regressionRedress #distributions
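
The contrast can be sketched with scipy.stats on synthetic data (the numbers below are illustrative, not from the linked article): a strictly monotonic but non-linear relationship, plus one extreme outlier to compare the robustness of the two rank measures.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

x = np.linspace(0.0, 5.0, 30)
y = np.exp(x)                     # monotonic but strongly non-linear

r_pearson, _ = pearsonr(x, y)     # understates the association
rho, _ = spearmanr(x, y)          # rank-based: detects the monotonic trend fully
tau, _ = kendalltau(x, y)         # tau-b (scipy's default variant)

# One extreme value that scrambles the ranks:
y_out = y.copy()
y_out[0] = 1e6                    # the smallest x now has the largest y
rho_out, _ = spearmanr(x, y_out)
tau_out, _ = kendalltau(x, y_out)

print(f"Pearson={r_pearson:.3f}  Spearman={rho:.3f}  Kendall={tau:.3f}")
print(f"with outlier: Spearman={rho_out:.3f}  Kendall={tau_out:.3f}")
```

On the clean data both rank coefficients reach 1 while Pearson does not; after the outlier, Kendall's tau-b drops less than Spearman's rho, matching the quoted robustness claim.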

  13. "In real life, we weigh the anticipated consequences of the decisions that we are about to make. That approach is much more rational than limiting the percentage of making the error of one kind in an artificial (null hypothesis) setting or using a measure of evidence for each model as the weight."
    Longford (2005) stat.columbia.edu/~gelman/stuf

    #modeling #nullHypothesis #probability #probabilities #pValues #statistics #stats #statisticalLiteracy #bias #inference #modelling #regression #linearRegression
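
A toy numeric sketch of Longford's point, with entirely made-up probabilities and losses: instead of gating the choice between two models on a significance threshold, weigh each decision by its anticipated consequences.

```python
# Hypothetical setup: model A is more likely to be the better one,
# but wrongly deploying A is costlier than wrongly staying with B.
p_a_better = 0.60                 # assumed probability that A truly outperforms B
loss = {                          # assumed cost of each (decision, true state) pair
    ("choose_A", "A_better"): 0.0,
    ("choose_A", "B_better"): 5.0,
    ("choose_B", "A_better"): 1.0,   # B is cheap to run, so the miss is mild
    ("choose_B", "B_better"): 0.0,
}

def expected_loss(decision):
    return (p_a_better * loss[(decision, "A_better")]
            + (1 - p_a_better) * loss[(decision, "B_better")])

best = min(("choose_A", "choose_B"), key=expected_loss)
print(best, expected_loss("choose_A"), expected_loss("choose_B"))
```

With these numbers the expected-loss rule picks B even though A is more likely to be better, because the consequences, not an error-rate cutoff, drive the decision.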

  14. @datadon

    #Lasso #LinearRegression "is useful in some contexts due to its tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features upon which the given solution is dependent"

    scikit-learn.org/stable/module 🧵

    #dataDev #AIDev #ML #sklearn #python #interpretability
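
A small illustration of that sparsity using scikit-learn's Lasso on synthetic data (the feature indices, coefficients, and alpha below are arbitrary choices for the sketch):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
# Hypothetical ground truth: only features 0 and 3 actually matter.
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)
kept = np.flatnonzero(lasso.coef_)        # indices of non-zero coefficients
print("features kept:", kept)
```

The L1 penalty drives the coefficients of the eight irrelevant features to exactly zero, which is what makes the resulting model easier to interpret.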

  19. For the next few months, Dr. Andrej-Nikolai Spiess (openalex.org/works?page=1&filt) will be a guest in my working group.

    We are working on a paper where we show that 29 % of papers in top journals like Science, Nature & PNAS were skewed by a single influential data point! Time to rethink our reliance on p-values and explore alternative measures like #dfstat. #reproducibilitycrisis #linearregression #rstats

    Moreover, we will work on #qPCR related software like PCRedux (joss.theoj.org/papers/10.21105)

    #JOSS
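
How a single influential observation can flip a fit, sketched with numpy on synthetic data (the 29 % finding and #dfstat are the authors' own work; this is only a generic illustration of the mechanism):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(scale=0.1, size=20)

slope_clean = np.polyfit(x, y, 1)[0]      # close to the true slope of 2

# Add one high-leverage point far outside the x range of the rest of the data.
x_full = np.append(x, 10.0)
y_full = np.append(y, 0.0)
slope_skewed = np.polyfit(x_full, y_full, 1)[0]

print(f"slope without the point: {slope_clean:.2f}")
print(f"slope with the point:    {slope_skewed:.2f}")
```

One observation out of twenty-one is enough to drag the estimated slope from roughly 2 to near zero, which a p-value on the skewed fit would never reveal on its own.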

  24. @datadon

    "The following sections discuss several state-of-the-art interpretable and explainable #ML methods. The selection of works does not comprise an exhaustive survey of the literature. Instead, it is meant to illustrate the commonest properties and inductive biases behind interpretable models and [black-box] explanation methods using concrete instances."
    wires.onlinelibrary.wiley.com/ 🧵

    #interpretability #explainability #aiethics #compliance #taxonomy #ethicalai #aievaluation #linearRegression