home.social

#global-temperature — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #global-temperature, aggregated by home.social.

  1. #Berkeley #BerkeleyEarth #GlobalTemperature #RobertRohde #ZekeHausfather

    February 2026 was nominally the 2nd-warmest February on record.
    Average anomalies above the 1850-1900 average:
    - global temperature: 1.55 ± 0.12 °C (2.78 ± 0.22 °F)
    - land temperatures: 2.55 ± 0.30 °C (4.59 ± 0.55 °F)
    - ocean temperatures: 1.04 ± 0.17 °C (1.87 ± 0.30 °F)

    berkeleyearth.org/february-202

    #climate #ClimateScience #climatechange #ClimateCrisis #ClimateBreakdown #ClimateDisruption #globalWarming #globalHeating #ExtremeWeather #polycrisis

  2. And now, the moment you’ve all been waiting for…

    Previously on “merging global temperatures”, we looked at different ways of hierarchically grouping global temperature datasets to get a reasonable best estimate and uncertainty range. There were three principal ways to do that grouping: by SST dataset, by LSAT dataset, and by INERTPolation method (the errant capitalisation will have a purpose later).

    I put these different groupings through my code for calculating an ensemble of ensembles and got the following summary statistics. I show here the annual means and a lowess smoothed series to highlight any differences in shorter and longer term behaviour.
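    The calculation described above might look roughly like this: a minimal, self-contained sketch with entirely synthetic data (the groupings, member counts and trend are all invented), using a hand-rolled LOWESS smoother in place of whatever library routine the real code relies on.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the three groupings (by SST, by LSAT, by
# interpolation method): each is an ensemble of annual global-mean
# anomaly series. All numbers here are invented for illustration.
years = np.arange(1850, 2025)
n_members = 200
groupings = {
    name: 0.01 * (years - 1850) + rng.normal(0.0, 0.1, (n_members, years.size))
    for name in ("SST", "LSAT", "interpolation")
}

def simple_lowess(x, y, frac=0.3):
    """Minimal LOWESS: local linear fit with tricube weights at each point."""
    n = len(x)
    k = max(2, int(frac * n))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]          # k nearest neighbours
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3
        A = np.vstack([np.ones(k), x[idx]]).T
        coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y[idx]))
        fitted[i] = coef[0] + coef[1] * x[i]
    return fitted

# Summary statistics per grouping: ensemble mean, spread, and a smooth
# series to separate shorter- and longer-term behaviour.
for name, ens in groupings.items():
    mean = ens.mean(axis=0)
    std = ens.std(axis=0, ddof=1)
    smooth = simple_lowess(years.astype(float), mean)
```

    With real data, `statsmodels.nonparametric.smoothers_lowess.lowess` would be the obvious drop-in for the smoother.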

    The first plot shows the means of the three ensembles and, as you can see, there is very little difference between them, so I won’t analyse this in detail.

    Next up, the standard deviation of each ensemble. There are some small differences here which are interesting.

    The SST ensemble has a higher standard deviation in the 1880-1915 window, possibly reflecting differences between marine datasets around this period. The LSAT ensemble has a larger standard deviation in the post-1930 period. Even though the differences are visible, they are still relatively small. The rule of thumb is that uncertainties in uncertainties are usually worse than 10%, so we’re in that fuzzy zone where we can maybe explain why we see differences but, at the same time, maybe don’t need to worry about them too much.

    So, after all that, not much difference. This is a good thing though. It suggests that we’re not overly sensitive to reasonable choices about how to split up the ensemble.

    We can also compare to what would happen if we just treated each dataset equally, as if they were all independent.

    It doesn’t make much difference to the mean, but the uncertainty…

    There’s a big difference there, with equal weighting generally coming in with a lower estimate of the uncertainty vs all the other combinations. This partly comes from burying DCENT in a mound of datasets that, for all their differences, are quite similar particularly in the early 20th century. I think this is a vote in favour of a more complex weighting.
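    The burying effect can be illustrated with a toy calculation (all numbers invented): pooling members from one outlier dataset together with five near-identical ones understates the spread, compared with sampling the hierarchy group-first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented illustration: five near-identical datasets clustered around
# -0.30 °C in the early 20th century, plus one DCENT-like outlier near
# -0.15 °C. Each dataset is a 1000-member ensemble.
similar = rng.normal(-0.30, 0.02, (5, 1000))
outlier = rng.normal(-0.15, 0.02, (1, 1000))

# Equal weighting: pool every member, so the outlier is 1 vote in 6.
equal_std = np.vstack([similar, outlier]).std()

# Two-level hierarchy: pick a group first (50:50), then a member, so the
# outlier's structural difference is not buried by its similar siblings.
n = 20000
use_outlier = rng.random(n) < 0.5
samples = np.where(use_outlier,
                   rng.choice(outlier.ravel(), n),
                   rng.choice(similar.ravel(), n))
hier_std = samples.std()

# hier_std comes out noticeably larger than equal_std.
```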

    -fin-

    #climate #climateChange #globalTemperature

  3. Bear with

    It started with a trifling dissatisfaction with how the IPCC arrived at their composite global temperature series which then developed as new datasets came out. Or perhaps even before then, with a similarly trifling dissatisfaction on the very same topic. My blog doesn’t get a lot of comments, but the two more recent posts have had a lot of very interesting and technical comments from Bruce Calvert (Thanks Bruce) on how to formalise some of the ideas. My latest post on the topic largely ignored the formalisms because I have a preference for simple methods (and a small brain).

    What both are trying to do is satisfy a bunch of criteria. We have a set of different global temperature datasets, but what we want is:

    1. A single dataset…
    2. that integrates all of the information that the individual datasets provide,
    3. and all the knowledge we have that isn’t necessarily tied up in those datasets,
    4. with a reasonable central estimate,
    5. and an uncertainty range that represents our uncertainty,
    6. which can be used to generate samples that are representative of uncertainty at all time scales
    7. and are representative of actual global temperature variability.

    These criteria would make a useful dataset with broad utility.

    My method (as it has developed) provides 1, 4, 5, and 6, but falls short on 2, 3 and 7 by throwing out some information and mixing together datasets that represent somewhat different things. One could quibble about 4, 5, and 6 of course.

    The Guttorp and Craigmile method (see also) provides 1, 4, 6, and 7, but does less well (in my assessment, see the links above) on 2, 3 and 5. In places their central estimate is likely compromised by poor dataset choices and they ignore information that is available in the datasets. These issues could be remedied.

    Is it reasonable? Well, it includes some older datasets (e.g. GETQUOCS) that have old bias adjustments because they have a nice uncertainty analysis. One might even argue that with the publication of DCENT, all other datasets are questionable. I would counter that by noting that the major compelling improvements from DCENT really affect the early 20th century warming, but prior to that it just widens the uncertainty range.

    Does it really represent our uncertainty? Again, it’s hard to say. We have an ensemble of opportunity, and rather a poor one at that. The hierarchical grouping I suggested is healthier than it was when I first suggested it. We now have DCENT and COBE-STEMP3, which broaden the range of estimates, but we are still trying to estimate a broad distribution with a handful of samples. My method is only as broad as the range of the datasets we have, but this is partly by design. Another thing missing: we know that mixing and matching the land and ocean components of NOAAGlobalTemp and HadCRUT would widen the spread.

    Does it use all the information? No. The hierarchy tries to encode the major covariances that define the structural uncertainties, assuming these come from the choice of SST (or marine temperature) dataset. We know that datasets use similar land temperature datasets and largely the same sea ice datasets. I also don’t use uncertainty ranges if they’re not represented by an ensemble. This is partly in order to avoid having to make assumptions about the correlation structures of the errors and partly because I don’t know what those structures are. I’m also missing information from the NOAAGlobalTemp ensemble. That would be a very useful addition. The Vaccaro dataset also has an ensemble and an interestingly different interpolation approach. And now there is a new dataset in preprint, GloSAT, which combines marine air temperatures with land air temperatures to give a completely new beast.

    How to do better?

    One obvious way is to get those missing ensembles.

    Another is to employ the more formal statistical approach.

    Sticking with my simplistic approach, Bruce came up with an interestingly objective way to weight datasets using the estimated covariances between them. Estimating those covariances, however, would rely on expert judgement, and that seems like a difficult issue. There isn’t a single covariance between two datasets. Say two datasets use the same SST dataset, but different interpolation methods and land temperatures. At any time step, the two datasets will effectively give the SST dataset different weights, and those weights will change over time. That means the covariance will change over time too. The temporal structure will also vary with time. It’s complex, but we could come up with reasonable approximations. We could weight land and ocean 30:70, representing their area ratio, or use some simple smoothed representation. We could develop a hierarchy of hierarchies. We could survey experts, asking them for their covariance estimates. And so on.
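    As a sketch of what covariance-based weighting could look like, here is a toy with BLUE-style weights (w proportional to the inverse covariance applied to a vector of ones) on synthetic data, where the true errors, and hence the covariance matrix, are known by construction. In practice that matrix is exactly the thing that would need expert judgement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic demo: four datasets tracking the same "true" series. Datasets
# 0 and 1 share an SST-like error, so their errors covary; 2 and 3 are
# independent. All numbers are invented.
truth = np.linspace(0.0, 1.2, 100)
shared = rng.normal(0, 0.05, 100)
X = np.stack([
    truth + shared + rng.normal(0, 0.03, 100),
    truth + shared + rng.normal(0, 0.03, 100),
    truth + rng.normal(0, 0.05, 100),
    truth + rng.normal(0, 0.05, 100),
])

# With synthetic data we know the errors exactly; estimating this matrix
# for real datasets is where the expert judgement would come in.
sigma = np.cov(X - truth)        # 4x4 error covariance between datasets

# BLUE-style weights: solve sigma @ w = 1, then normalise to sum to one.
# Correlated datasets end up sharing weight rather than each counting fully.
w = np.linalg.solve(sigma, np.ones(4))
w /= w.sum()

combined = w @ X                 # covariance-weighted central estimate
```

    With this construction, the two datasets that share an error each get less weight than the two independent ones.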

    So, a first minimal extension is to include GloSAT and Vaccaro ensembles, because the data are just there begging to be used. I rearranged the hierarchy to put Vaccaro and GETQUOCS in the same category and separated them from the HadCRUT5 datasets. I also jacked the ensemble up to 50,000 members because I can and I want to make matplotlib explode.

    The shape of the uncertainty curve might look odd, but it’s just a consequence of using 1850-1900 as a baseline. Uncertainty is generally smaller during the baseline period because each ensemble member is forced to average to zero during that period. It increases afterwards because there is a lot of uncertainty in the early 20th century.
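    The baseline effect is easy to reproduce with a toy ensemble (numbers invented): re-baselining forces each member to average zero over 1850-1900, so the spread is pinned small inside that window and grows outside it.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy ensemble: random-walk errors give each member a spread that
# grows over time (all numbers invented for illustration).
years = np.arange(1850, 2025)
in_base = (years >= 1850) & (years <= 1900)
ens = np.cumsum(rng.normal(0, 0.02, (500, years.size)), axis=1)

# Re-baseline: force every member to average zero over 1850-1900.
ens_rb = ens - ens[:, in_base].mean(axis=1, keepdims=True)

# Spread per year: small in the baseline window, growing after it.
spread = ens_rb.std(axis=0)
```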

    Till next time…

    #climate #climateChange #globalTemperature #python

  4. Sunday, Monday and Tuesday this week all exceeded the global daily average temperature record set in July 2023. Welcome to the Anthropocene.

    “Monday 22 July revised to 17.16C, as Tuesday comes in at 17.15C. Both break the Sunday global temperature record and all break the global temperature record set last year
    17.15C 23 July 2024
    17.16C 22 July 2024
    17.09C 21 July 2024
    17.08C 6 July 2023” - 🇦🇺 climatologist Andrew Watkins
    #GlobalTemperature #climatecrisis
    pulse.climate.copernicus.eu/

  5. #GlobalTemperature Anomalies from 1880 to 2023
    From the #NASA Scientific Visualization Studio

    From blue, to yellow, to burnt orange in 140 years
    svs.gsfc.nasa.gov/5207/

    courtesy Gerald Kutney

  6. Global 2m surface temperatures spiked to 1.98°C above the 1850-1900 IPCC baseline on Nov 17 according to Prof Eliot Jacobson at the birdsite.

    Only one day since 1940 has been more extreme: Feb. 28, 2016, with an anomaly of 1.99°C.

    Update: The global 2m temperature on Nov. 18 was 2.01°C.

    Based on the first 17 days of November, this month is heading towards a new global heat record.

    #ClimateCrisis #GlobalTemperature

    twitter.com/EliotJacobson/stat

  7. The era of global boiling has arrived, says UN Secretary General, as July is the hottest month in recorded history. Cartoon for Trouw: trouw.nl/opinie/spotprent~bbc7

    I'm taking a break for summer. I'll be back with new cartoons at the end of August. See you then!

    #climate #extremeheat #heatwaves #globaltemperature

  8. Climate scientists need to convey daily information to the public in the same way that news organisations supply the business world with the FTSE index, stocks and shares, etc. So on news bulletins, we need regular updates on CO2 ppm, global temperature, and statistics on sea ice, sea surface temperatures and so on. Is there an easy way to find this data and convey it to the general public? #climatescience #news #globaltemperature #seaice #statistics #ipcc #unitednations

  9. sciencealert.com/its-official-

    Monday, July 3rd was the hottest day since records began: an average of 17.01 degrees Celsius (62.6 °F). The previous record, set last year, was 16.92 °C.

    The average global temperature typically rises until the end of July or beginning of August.

    #ClimateChange #GlobalWarming #Science #Temperature #GlobalTemperature #NOAA

  10. 2021 obeyed physics, was one of the warmest years on record (image credit: NOAA)

    We are still in the midst of running a ... - arstechnica.com/?p=1825722 #globaltemperature #climatechange #science

  11. Can temperature patterns predict next year’s global average? [image: One interesting way to look at the world: the darker the red, the closer the correlation ...] more: arstechnica.com/?p=1685153 #globaltemperature #climatechange #science

  12. 2019 was likely Earth’s second-hottest year on record [image: Temperature above or below the 1950-1981 average, in kelvins (equivalent to degrees C). (...] more: arstechnica.com/?p=1644167 #globaltemperature #climatechange #science

  13. Natural cycles had little to do with 20th-century temperature trends (image credit: UpNorthMemories)
    Reconstructing crime scenes is more or less what most geoscient... more: arstechnica.com/?p=1508283 #globaltemperature #climatechange #science