home.social

#annualreport — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #annualreport, aggregated by home.social.

  1. ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report – AI (artificial intelligence) – The Guardian

    AI (artificial intelligence), Explainer

    Annual review highlights growing capabilities of AI models, while examining issues from cyber-attacks to job disruption

    By Dan Milmo, Global technology editor, Tue 3 Feb 2026 00.00 EST

    The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

    Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

    Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month.

    Anthropic has released models with heightened safety measures. Photograph: Dado Ruvić/Reuters
    1. The capabilities of AI models are improving

    A host of new AI models – the technology that underpins tools like chatbots – were released last year, including OpenAI’s GPT-5, Anthropic’s Claude Opus 4.5 and Google’s Gemini 3. The report points to new “reasoning systems” – which solve problems by breaking them down into smaller steps – showing improved performance in maths, coding and science. Bengio said there has been a “very significant jump” in AI reasoning. Last year, systems developed by Google and OpenAI achieved a gold-level performance in the International Mathematical Olympiad – a first for AI.

    However, the report says AI capabilities remain “jagged”, referring to systems displaying astonishing prowess in some areas but not in others. While advanced AI systems are impressive at maths, science, coding and creating images, they remain prone to making false statements, or “hallucinations”, and cannot carry out lengthy projects autonomously.

    Nonetheless, the report cites a study showing that AI systems are rapidly improving their ability to carry out certain software engineering tasks – with the duration of those tasks doubling every seven months. If that rate of progress continues, AI systems could complete tasks lasting several hours by 2027 and several days by 2030. This is the scenario under which AI becomes a real threat to jobs.

    But for now, says the report, “reliable automation of long or complex tasks remains infeasible”.

    Read the original article: ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report | AI (artificial intelligence) | The Guardian

    Tags: AI, Annual Report, Artificial Intelligence, Dan Milmo, International AI Safety Report, Risks, Safety, Safety Report, Senior Advisors, Summit, Survey, Tech Progress, The Guardian
    #AI #AnnualReport #ArtificialIntelligence #DanMilmo #InternationalAISafetyReport #Risks #Safety #SafetyReport #SeniorAdvisors #Summit #Survey #TechProgress #TheGuardian
  6. 🎉 The 2025 OpenSSF Annual Report has officially arrived!!!

    We invite you to celebrate another year of progress, creativity, and collaboration shaping a safer, more resilient open source community.

    Download the report: openssf.org/download-the-2025-

    #AnnualReport #OSSSecurity

  7. "We launched in Sep/2024 with a bold mission: to foster an #open, #decentralized, and #usercentric #socialweb. In just the few short months that remained in 2024, we made meaningful progress. Our participation at #W3C #TPAC and collaborations with major stakeholders like #Mastodon, #Ghost, and #Automattic have helped spark momentum for a healthier, more resilient online ecosystem.
    Today we are proud to publish our 2024 #AnnualReport – the first of many."
    #SWF
    socialwebfoundation.org/2025/0

  8. Today, we published our ☀️2024 Annual Report🌕!!

    To learn more about our achievements from last year, as well as our renewed commitment to work in community to shape a more equitable, open, and transparent scholarly evaluation landscape, check out our blog here: content.prereview.org/2024-a-y, and read our full report here: zenodo.org/records/14675020.

    We thank everyone in our community, our collaborators, and funders! We could not do this work without you!

    #prereview #annualreport #expansion #reflection

  13. Our Work in 2023 – Annual Report

    2023 was a pivotal year for TEDIC, marked by our continued dedication to defending human rights in digital environments. In a world undergoing unprecedented digital transformation, where digital spaces are crucial for expression, interaction, and development, we believe our mission is critically relevant.

    From conducting research a

    tedic.org/our-work-in-2023-ann

    #Blog #Community #2023 #advocacy #AnnualReport #report #tedic

  14. Last year I realised my sprawling web of #projects and #involvements resembles an evil megacorp, so I felt it necessary to produce some #corporate #bullshit. Enjoy:

    THE FLORI CONGLOMERATE ANNUAL REPORT 2023

    #AnnualReport
    #ArtData
    #ArtsInsights

    1/10

  19. @ULLDN
    Our 2022–2023 Annual Report is now available!🎉

    Read up on all the exciting things we were up to from June 2022–May 2023, including an All-Candidate Meeting recap, our yearly stats, funding recipients, and much more!

    Read now 👉UrbanLeague.CA/annual-reports #LdnOnt #ReadNow #AnnualReport