home.social

#scenarios — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #scenarios, aggregated by home.social.

  1. EU Warns Airlines, Member States to Prepare for All Scenarios As Jet Fuel Crisis Remains Uncertain

    Islam Times – The European Commission warned Monday that airlines and member states should pr…
    #Europe #EU #Airlines #EuropeanCommission #EuropeanUnion #JetFuel #Crisis #Scenarios #IslamTimes
    europesays.com/europe/31671/

  2. “The present is pregnant with the future”*…

    The estimable Tim O’Reilly uses scenario planning to create an insightful look at AI, our futures, and the choices that will define them…

    We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.

    But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?

    Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.

    This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys is a debate that will only be resolved by time. Both sides have evidence. Both are probably right at some level. And both framings are not terribly helpful for anyone trying to figure out what to do next…
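    The core move described above (scoring each candidate strategy against every imagined future, then preferring the one that holds up across all of them, rather than betting on a single prediction) can be sketched in a few lines of Python. The strategy names, scenario names, and payoff numbers below are purely illustrative assumptions, not drawn from O'Reilly's essay:

```python
# A minimal sketch of "robust strategy" selection in scenario planning:
# score each candidate strategy under every scenario, then pick the one
# with the best worst-case outcome (a maximin rule), instead of
# optimizing for a single predicted future.

# Illustrative payoffs (all names and numbers invented for this sketch):
# keys = strategies, inner keys = scenarios.
payoffs = {
    "bet_on_agi_boom":          {"augmentation_economy": 9, "displacement_crisis": 2},
    "cut_costs_with_ai":        {"augmentation_economy": 4, "displacement_crisis": 3},
    "expand_into_unmet_demand": {"augmentation_economy": 8, "displacement_crisis": 6},
}

def robust_choice(payoffs):
    """Return the strategy whose worst scenario outcome is highest."""
    return max(payoffs, key=lambda s: min(payoffs[s].values()))

print(robust_choice(payoffs))  # -> expand_into_unmet_demand
```

Note that the maximin rule is only one way to operationalize "robustness"; the point of the sketch is simply that the strategy is judged across every scenario at once, not against a forecast.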

    [O’Reilly explains the scenario approach, then applies it to our future with AI (see the image above), astutely assessing the conflicting signals that we’re experiencing; he explores the “robust strategy” for our uncertain future (strategic choices that make sense regardless of which future unfolds); then he concludes…]

    … I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.

    Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out.

    Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.

    Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs…

    Eminently worth reading in full. Indeed, speaking as a long-time scenario planner, your correspondent can only wish that everyone who wields “scenarios” applies the approach as appropriately, adroitly, and acutely as Tim has: “Scenario Planning for AI and the ‘Jobless Future‘,” from @timoreilly.bsky.social.

    * Voltaire

    ###

    As we take the long view, we might send formative birthday greetings to Mark Pinsker; he was born on this date in 1923. A mathematician, he made important contributions to the fields of information theory, probability theory, coding theory, ergodic theory, mathematical statistics, and communication networks. This work, which helped lay the foundation for AI-as-we-know-it, earned him the IEEE Claude E. Shannon Award in 1978, and the IEEE Richard W. Hamming Medal in 1996, among other honors.

    source

    #AI #artificialIntelligence #business #culture #economics #employment #future #history #informationTheory #jobs #MarkPinsker #Mathematics #politics #scenarioPlanning #scenarios #Science #society #Technology #TimOReilly
  3. Experts weigh potential scenarios for oil if Strait of Hormuz closes

    misryoum.com/us/markets/expert

    Tankers are seen at the Khor Fakkan Container Terminal, the only natural deep-sea port in the region and one of the major container ports in the Sharjah Emirate, along the Strait of Hormuz, a waterway through which one-fifth of global...

    #Experts #scenarios #oil #Strait #Hormuz #US_News_Hub #misryoum_com

  4. “The best way to predict the future is to invent it”*…

    Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000 word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive purposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically-concentrated wealth, disruption in capital flows)? Will AI’s indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

    The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

    … in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

    Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

    – source

    In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

    The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

    At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

    For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

    [Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, homing in on “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

    The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

    As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

    Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

    [Image above: source]

    Alan Kay

    ###

    As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

    https://youtu.be/B6rKUf9DWRI?si=nL09hD5GQD670AQO

    #AI #AIRisk #artificialIntelligence #computerMouse #culture #DarioAmodei #DougEngelbart #graphicalUserInterfaces #history #hypertext #MikeLoukides #mouse #networkedComputers #scenarioPlanning #scenarios #Singularity #Technology #TimOReilly
  5. “The best way to predict the future is to invent it”*…

    Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000 word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive pursposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically-concentrated wealth, disruption in capital flows)? Will AI indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

    The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

    … in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

    Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

    – source

    In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

    The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

    At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

    For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

    [Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, honing in one “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

    The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

    As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

    Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

    [Image above: source]

    Alan Kay

    ###

    As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

    https://youtu.be/B6rKUf9DWRI?si=nL09hD5GQD670AQO

    #AI #AIRisk #artificalIntelligence #computerMouse #culture #DarioAmodei #DougEngelbart #graphicalUserInterfaces #history #hypertext #MikeLoukides #mouse #networkedComputers #scenarioPlanning #scenarios #Singularity #Technology #TimOReilly
  6. “The best way to predict the future is to invent it”*…

    Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000 word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive pursposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically-concentrated wealth, disruption in capital flows)? Will AI indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

    The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

    … in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

    Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

    – source

    In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

    The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

    At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

    For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

    [Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, honing in one “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

    The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

    As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

    Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

    [Image above: source]

    Alan Kay

    ###

    As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

    https://youtu.be/B6rKUf9DWRI?si=nL09hD5GQD670AQO

    #AI #AIRisk #artificalIntelligence #computerMouse #culture #DarioAmodei #DougEngelbart #graphicalUserInterfaces #history #hypertext #MikeLoukides #mouse #networkedComputers #scenarioPlanning #scenarios #Singularity #Technology #TimOReilly
  7. “The best way to predict the future is to invent it”*…

    Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000 word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive pursposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically-concentrated wealth, disruption in capital flows)? Will AI indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

    The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

    … in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

    Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

    – source

    In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

    The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

    At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

    For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

    [Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, honing in one “robust” implications for each– answers that are smart whichever way the future breaks. They conclude…]

    The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

    As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

    Read in full– the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

    [Image above: source]

    Alan Kay

    ###

    As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925.  An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

    https://youtu.be/B6rKUf9DWRI?si=nL09hD5GQD670AQO

    #AI #AIRisk #artificalIntelligence #computerMouse #culture #DarioAmodei #DougEngelbart #graphicalUserInterfaces #history #hypertext #MikeLoukides #mouse #networkedComputers #scenarioPlanning #scenarios #Singularity #Technology #TimOReilly
  8. “The best way to predict the future is to invent it”*…

    Dario Amodei, the CEO of AI purveyor Anthropic, has recently published a long (nearly 20,000 word) essay on the risks of artificial intelligence that he fears: Will AI become autonomous (and if so, to what ends)? Will AI be used for destructive pursposes (e.g., war or terrorism)? Will AI allow one or a small number of “actors” (corporations or states) to seize power? Will AI cause economic disruption (mass unemployment, radically-concentrated wealth, disruption in capital flows)? Will AI indirect effects (on our societies and individual lives) be destabilizing? (Perhaps tellingly, he doesn’t explore the prospect of an economic crash on the back of an AI bubble, should one burst– but that might be considered an “indirect effect,” as AI development would likely continue, but in fewer hands [consolidation] and on the heels of destabilizing financial turbulence.)

    The essay is worth reading. At the same time, as Matt Levine suggests, we might wonder why pieces like this come not from AI nay-sayers, but from those rushing to build it…

    … in fact there seems to be a surprisingly strong positive correlation between noisily worrying about AI and being good at building AI. Probably the three most famous AI worriers in the world are Sam Altman, Dario Amodei, and Elon Musk, who are also the chief executive officers of three of the biggest AI labs; they take time out from their busy schedules of warning about the risks of AI to raise money to build AI faster. And they seem to hire a lot of their best researchers from, you know, worrying-about-AI forums on the internet. You could have different models here too. “Worrying about AI demonstrates the curiosity and epistemic humility and care that make a good AI researcher,” maybe. Or “performatively worrying about AI is actually a perverse form of optimism about the power and imminence of AI, and we want those sorts of optimists.” I don’t know. It’s just a strange little empirical fact about modern workplace culture that I find delightful, though I suppose I’ll regret saying this when the robots enslave us.

    Anyway if you run an AI lab and are trying to recruit the best researchers, you might promise them obvious perks like “the smartest colleagues” and “the most access to chips” and “$50 million,” but if you are creative you might promise the less obvious perks like “the most opportunities to raise red flags.” They love that…

    – source

    In any case, precaution and prudence in the pursuit of AI advances seems wise. But perhaps even more, Tim O’Reilly and Mike Loukides suggest, we’d profit from some disciplined foresight:

    The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

    At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape…

    For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct…

    [Tim and Mike explore an “AGI is an economic singularity” scenario (see also here, here, and Amodei’s essay, linked above), then an “AI is a normal technology” future (see also here); they enumerate signs and indicators to track; then consider 10 “what if” questions in order to explore the implications of the scenarios, homing in on “robust” implications for each – answers that are smart whichever way the future breaks. They conclude…]

    The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

    As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in…

    Read in full – the essay is filled with deep insight. Taking the long view: “What If? AI in 2026 and Beyond,” from @timoreilly.bsky.social and @mikeloukides.hachyderm.io.ap.brid.gy.

    [Image above: source]

    Alan Kay

    ###

    As we pave our own paths, we might send world-changing birthday greetings to a man who personified Alan’s injunction, Doug Engelbart; he was born on this date in 1925. An engineer and inventor who was a computing and internet pioneer, Doug is best remembered for his seminal work on human-computer interface issues, and for “the Mother of All Demos” in 1968, at which he demonstrated for the first time the computer mouse, hypertext, networked computers, and the earliest versions of graphical user interfaces… that’s to say, computing as we know it, and all that computing enables.

    https://youtu.be/B6rKUf9DWRI?si=nL09hD5GQD670AQO

    #AI #AIRisk #artificalIntelligence #computerMouse #culture #DarioAmodei #DougEngelbart #graphicalUserInterfaces #history #hypertext #MikeLoukides #mouse #networkedComputers #scenarioPlanning #scenarios #Singularity #Technology #TimOReilly
  9. Dumb Guys & Big Oxy

    I’ve finished up my new adventure for Cepheus Engine. As I mentioned in a previous post, I’m planning on taking some of the more self-contained adventures I’ve created for my current Traveller campaign and releasing them as individual zines.

    It’s called Dumb Guys & Big Oxy, and it’s available for sale at DriveThru – https://www.drivethrurpg.com/en/product/552504/dumb-guys-big-oxy and Itch – https://ng76.itch.io/dumb-guys-big-oxy

    I designed it to be a quick, slightly silly action scenario. Here are some design notes:

    Art

    There isn’t much art in this one. I wanted to keep it cheap, so the cover image is a Creative Commons image from Strega Wolf van den Berg – https://www.stregawolf.art/

    The interior has a 3D view of the facility that I designed in TinkerCAD. For the tower floor plans, I used Affinity Designer.

    I wanted to use some of the publicly available Traveller geomorphs for the floor plans, but most of them carry Creative Commons Non-Commercial licenses, so I ended up creating my own.

    The text is mostly set in Oregon LDO, a free font that’s a close match for Optima, the font used in the original Traveller books. The titles are in Bungee, one of Google’s free fonts.

    Format

    I made it in 8.5″ × 5.5″, basically half US letter size. I made three versions: a regular one with just the pages, a wide version that’s best for screen display, and a booklet version for printing at home.

    I’m eventually going to make a print version available on DriveThru.

    Handouts

    I like games that include handouts for players, so I included a few: a map of the adventure location, pregen character sheets, and a list of the equipment the players have access to.

    I also have a combat sheet for the GM, with all of the stats for NPCs, including check boxes for their health.

    Pregens

    While this scenario was designed to be plugged into anybody’s campaign, I did want to make it easier for GMs who wanted to run this as a one-shot. So I included 5 pregen characters, supposedly the crew of the starship Loki’s Folly. I also included the contents of their ship’s locker, so the players start out with at least some gear.

    What’s Next?

    I really enjoyed creating this, and I plan to continue. Here’s a quick preview of the next one:

    #CepheusEngine #RPG #Scenarios #SciFi #Traveller #ttrpg

  10. “In their worst case, both #scenarios would necessitate protective actions, such as #evacuations and #sheltering of the #population or the need to take stable #iodine, with the reach extending to distances from a few to several hundred kilometres.”

  11. What is the value of imagining the worst possible outcomes in the current crisis? I think it can be the psychological equivalent of lancing a boil: enabling us to set aside what we can’t control and focus on what we can do. #foresight #scenarios #museums aam-us.org/2025/03/04/living-i

  12. Nice coverage of our recent article tracing transformational change across the last decade of #IPCC mitigation #scenarios. It found that reference scenarios without additional #climate protection measures consistently show lower emissions in more recent IPCC reports cordis.europa.eu/article/id/45

  13. Next stretch goal: a #foundryvtt module. I only play Rêve de Dragon under Foundry, as a player, and it’s really great.
    As a GM, paper is my thing and, above all, much less work, so I haven’t made the switch.

    Les Miroirs des Terres Médianes pour Rêve de Dragon (Scriptarium) • Game On Table Top
    gameontabletop.com/cf3971/les-

    #jdr #jeux #foundryvtt #Revededragon #GameOnTabletop #scenarios #campagne

  16. Shouldn’t Western powers mark some kind of red line regarding the Russian advance into western Ukraine, on the assumption that Russia will not risk World War 3 to conquer that part? Or will the EU and NATO in the end just watch while Kyiv and Lviv are overrun by Russian forces?
    The risks are enormous, I am aware, but that’s also true of the scenarios where we just let Russia conquer Ukraine.
    #Ukraine #Russia #geopolitics @geopolitics #scenarios

  17. RT by @ecb: 📗 The #NGFS has today published the fourth phase of its long-term macro-financial #climate #scenarios.
    The new vintage of the NGFS scenarios will be presented and discussed during a launch #event on 9th November.

    Register to the event: ngfs.net/en/ngfs-climate-scena

    Read more⤵️

    🐦🔗: nitter.cz/NGFS_/status/1721841

    [2023-11-07 10:45 UTC]

  18. Many widely used scenarios (including the NGFS ones) significantly underestimate climate risk. Carbon budgets are probably smaller than we thought, so climate risks will develop more quickly than anticipated. This is very important because of the groupthink problem: if all the financial institutions and corporates reporting through TCFD are using the same (misleading) scenarios, the financial system, and indeed the whole economy, is extremely vulnerable.

    #ClimateCrisis #scenarios #actuaries

    2/

  19. The Emperor’s New Climate Scenarios, or what’s wrong with the climate change financial scenarios currently in use – an excellent report just out from the Institute and Faculty of Actuaries and the University of Exeter. Spoiler alert: there’s a lot wrong with them, and a lot of reasons why they are so wrong. But there’s also reason for optimism: we can choose which scenarios we use! actuaries.org.uk/news-and-medi

    #ClimateCrisis #scenarios #actuaries

    1/