home.social

Search

114 results for “Aschenbrenner”

  1. As AI bubble warnings mount, a 23-year-old’s $1.5 billion hedge fund shows how prophecy turns into profits

    Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…the rise of Leopold Aschenbrenner, a…
    #NewsBeep #News #US #USA #UnitedStates #UnitedStatesOfAmerica #Artificialintelligence #AI #ArtificialIntelligence #EyeonAI #hedgefunds #OpenAI #Technology
    newsbeep.com/us/213912/

  3. #kt22: What exactly is an #Algorithmus (algorithm) and how does it work? The workshop by Prof. @andreasbuesch and Prof. Dr. Doris Aschenbrenner,
    today at 2 p.m. in the Liederhalle #Stuttgart, covers the basics of digitalization. We look forward to seeing you! #lebenteilen - Webcode YG84

  4. Anxiety and Depression

    The suffering of the young generation – the COPSY study — Anxiety and depression are widespread among children and adolescents. But there are too few psychological support services.

    More and more children and adolescents suffer from anxiety, refuse to get out of bed, and do not want to leave the house. They avoid contact and feel overwhelmed at the thought of speaking in front of their class. Some cry frequently; others suffer panic attacks, racing hearts, and sweating, convinced they will never meet the demands of school. Many boys sink into their computers and play online games excessively. Girls, too, lose themselves on the internet, begin starving themselves, or force themselves to vomit. Shutting down emotionally, sudden outbursts of rage, and aggression count among the growing behavioral problems, as does self-harm. Quite a few young people even express suicidal thoughts.

    Ulrike Ravens-Sieberer has been researching the mental health of children and adolescents in Germany for more than 20 years. Since the pandemic began in early 2020, her team at the University Medical Center Hamburg-Eppendorf has regularly surveyed parents and adolescents from all segments of the population about the mental state of 7- to 17-year-olds. For younger children, parents fill out the online questionnaires; from age eleven, the children respond themselves. “We record signs of distress, not diagnoses,” the scientist stresses. Because the children grow older and the goal is to track their development, the group under study now also includes 23-year-olds.

    More than 3,000 families take part in the COPSY study, the only nationwide long-term monitoring effort on this topic. At the end of last year, results were published for the eighth time. According to them, 22 percent of children and adolescents show signs of mental distress. That is a markedly lower figure than the researchers measured during the hard COVID lockdowns. But compared with pre-pandemic times, the stability of the coming generation does not look good.

    War, Climate, Jobs

    No wonder – many crises are coming to a head. “The feeling of powerlessness is widespread – the world is a dangerous place,” summarizes Bernd Aschenbrenner of the Bundesverband der Vertragspsychotherapeuten (Federal Association of Contracted Psychotherapists). Well over half of the younger generation fears war, and a large majority also perceives advancing climate change as threatening. The economic situation is another source of worry. Many doubt whether they will be able to enter their desired profession and later find and afford an apartment. As a result, they often put themselves under enormous pressure to perform at school. While stress reactions among boys have plateaued, depression and anxiety among girls increased markedly again last year.

    “A great deal can be done with health promotion and prevention; not all of these children need treatment,” Ravens-Sieberer emphasizes. In terms of social policy, however, it is extremely important to strengthen children’s and adolescents’ protective factors. These include the experience of being able to shape and influence things. Family cohesion and a stable social environment at school and among friends are also key to coping with these justified fears. Children with strong protective factors have a tenfold lower risk of developing mental health problems, the COPSY research group has found.

    Young people can call the “Nummer gegen Kummer” helpline anonymously. At 76 locations across Germany, volunteers staff the phones Monday through Saturday from 2 to 8 p.m. They listen, ask questions, and try to ease the burden. Neither side knows where the other lives, and the contact remains a one-off. Precisely this helps many overcome their shame. The option to chat has become increasingly popular; the barrier to reaching out is lower.

    “It used to be mostly about love and sexuality; today, anxiety, stress, and loneliness come up just as often,” reports social worker Birte Freudenberg, who leads the support team in Potsdam. Young people’s outlook on life has changed, she says. “The lightness is gone.”

    For many young people, the experiences of the pandemic still linger. At a stage of life when the first tentative flirtations usually happen, they had to walk around in masks and stay cooped up at home. “A piece of development was abruptly halted back then; they could not gather certain age-appropriate experiences,” says Freudenberg. That leaves them insecure now.

    A Lack of Help

    The demand for therapy has grown; the supply has not. The number of slots covered by statutory health insurance is capped. “It is not unusual for me to get five new inquiries a day,” reports a Berlin child and adolescent psychologist, and she is no exception. Even girls and boys with severe disorders often have to wait months for a hospital bed. According to the Federal Statistical Office, mental illnesses are the most common reason for hospital stays among children and adolescents – and the numbers are rising rapidly. Between 2022 and 2024, the number of admissions rose by 40 percent.

    Last year, the Bundesschülerkonferenz (Federal Student Conference) drew attention to the mental state of 7.5 million young people in Germany with the campaign “Uns gehts gut?” (“We’re doing fine?”) – and supplied the answer itself: “The only honest answer is: no.” One goal was to destigmatize the topic. At the same time, the student representatives made concrete political demands: more psychologists and social workers at schools, low-threshold therapy services, prevention programs, and the legal anchoring of mental health as part of the educational mandate.

    In the coalition agreement, the federal government committed to strengthening young people’s mental health. Nevertheless, it has halted the “Mental Health Coaches” project launched in 2023, in which psychologically trained staff ran workshops and discussion groups at schools. The University of Leipzig had attested to the program’s strong impact, and 90 percent of participants wanted it to continue. Instead, the government is now working on a new strategy, a spokesperson for the Federal Ministry of Education, Family, Senior Citizens, Women and Youth says: “The goal is to achieve the first concrete steps and measures in 2026.” A needs-based planning framework for psychotherapists is also said to be in the pipeline. Given the growing mental distress of many children and adolescents, the government’s approach seems decidedly sluggish.

    Nummer gegen Kummer: 116 111

    This article is reprinted from ver.di publik, with the kind permission of the editors. Links were added subsequently.

  5. “Don’t eat your seed corn”*…

    AI doesn’t really “think.” Rather, it remembers how we thought together. Are we about to stop giving it anything worth remembering? Bright Simons with a provocative analysis…

    We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing its workforce to under 2,000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.

    The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.

    But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.

    So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R. Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.

    Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course…

    [Simons sketches the history of humans’ intertwined development of both social/organizational and utile technologies, concluding…]

    … What the chain reveals is a dependency the AI industry has largely declined to examine. The underlying intelligence of a large language model isn’t a function of its architecture, its parameter count, or the volume of compute thrown at its training. It is not even about the training data. It is a function of the social complexity of the civilization whose language it digested.

    Each epoch advanced the cognitive frontier through something far richer and more complex than the isolated genius of an individual guru or machine. It did so through new forms of collective problem-solving. Think new institutions (the Greek agora, the Roman lex, the medieval university, the scientific society, the modern corporation, and the social internet) that demanded and rewarded ever more sophisticated uses of language.

    The cognitive anthropologist Edwin Hutchins studied how Navy navigation teams actually think. In his 1995 book Cognition in the Wild, he wrote something that reads today like an accidental prophecy. The physical symbol system, he observed, is “a model of the operation of the sociocultural system from which the human actor has been removed.”

    That is, with eerie precision, a description of what a large language model (LLM) really is, stripped of all the unapproachable jargon and mathematical wizardry. An LLM like ChatGPT is a model of human social reasoning with the human wrangled out. And the question nobody in Silicon Valley is asking with sufficient urgency is: What happens to the model when the social reasoning that produced its training data begins to thin?…

    [Simons explores evidence that this may already be materially underway, then explores what that “atrophy” might mean …]

    … If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.

    This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges…

    [Simons unpacks that heritage, and puts it into dialogues with recent thoughts from Dario Amodei, Leopold Aschenbrenner, and Sam Altman, concluding…]

    … The Social Edge Framework outlined here is a direct counterpoint to Amodei, Aschenbrenner, and Altman. It is a program of action to counter the human-redundancy fantasy. It challenges the self-fulfilling doom spirals created by the premature reallocation of material resources to a vision of AI. I speak of the philosophy that underestimates the sheer amount of human priming needed to support the Great Recode of legacy infrastructure before our current civilization can even benefit substantially from AI advances.

    By “Great Recode,” I am paying homage to the simple but widely ignored fact that the overwhelming majority of the tools and services that advanced AI models still need in order to produce useful outputs for users are not themselves AI-like, and most were built before the high-intensity computing era that began with AI. In the unsexy but critical field of PDF parsing—one of the ways in which AI consumes large amounts of historical data to get smart—studies show that only a very small proportion of tools were created using techniques, like deep learning, that characterize the AI age. And in some important cases, the older tools remain indispensable. Vast investments are thus required to upgrade all or most of these tools—from PDF parsers to database schemas—to align with the pace of high-intensity computing driven by the power-thirst of AI. Yet we are not at the point where AI can simply create its own dependencies.

    Indeed, the so-called “legacy tech debt” supposedly hampering the faster adoption of AI has in many instances been revealed as a problem of mediation and translation. AI companies are learning that they need to hire people who deeply understand legacy systems to guide this Recoding effort. A whole new “digital archaeology” field is emerging where cutting-edge tools like ArgonSense are deployed to try to excavate the latent intelligence in legacy systems and code often after rushed modernization efforts have failed. In many cases, swashbuckling new-age AI adventurers have found that mainframe specialists of a bygone age remain critical, and multidisciplinary dialogues and contentions are essential to progress on the frontier. Hence the strange phenomenon of the COBOL hiring boom. New knowledge must keep feeding on old.

    The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible…

    … The Social Edge prescription is that organizations that hire more people into AI-enriched, high-interaction, and transmediary roles—where AI scaffolds learning rather than substituting for it—will derive greater long-term advantage than those that treat the technology as a headcount-reduction device. In a world where raw cognitive throughput has been commodified, the value arc shifts to something considerably harder to replicate: the capacity to coordinate human intent with precision, speed, and genuine depth. That edge lies in trans-mediation and high human interactionism.

    The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

    The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

    Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

    None of these individual acts is catastrophic. However, their compound effect may be.

    The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.

    Making the right strategic choices about AI is going to become a defining trait of leadership. Cross-country research by Bloom et al. has long established that management quality explains a substantial share of the variance in productivity between teams, organizations, and even countries.

    In the AI age, small differences in leadership quality can generate large differences in outcomes—a non-linear payoff I call convex leadership. The term is borrowed from options mathematics, where a convex payoff is one whose upside accelerates faster than the downside decelerates. Convex leaders convert cognitive abundance into structural ambition and thus avoid turning their creative and discovery pipelines into stagnant pools of polished busywork. Conversely, in organizations led by what we might call concave leaders—cautious, procedurally anchored, optimizing for error-avoidance—AI would tend to produce more noise than signal. Because leadership is such a major shaper of all our lives, it is in our interest to pay serious attention to its evolution in this new age.

    The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from…

    Eminently worth reading in full: “The Social Edge of Intelligence.”

    Consider also the complementary perspectives in “What will be scarce?,” from Alex Imas (via Tim O’Reilly/ @timoreilly.bsky.social)… and in the second piece featured last Monday: “Curiosity Is No Solo Act.”

    Apposite: “Some Unintended Consequences Of AI,” from Quentin Hardy.

    And finally, from the estimable Nathan Gardels, a suggestion that OpenAI’s recent paper on industrial policy for the age of AI fills a vacuum left by an unimaginative political class and should be taken seriously, at least as a conversation starter: “OpenAI Proposes A ‘Social Contract’ For The Intelligence Age.”

    * Old agricultural proverb

    ###

    As we take the long view, we might recall that today is the anniversary of a technological advance that both fed the social edge and encouraged the build-out of the technostructure from which today’s AI hatched: on this date in 1993, Version 1.0 of the web browser Mosaic was released by the National Center for Supercomputing Applications. It was the first software to provide a graphical user interface for the emerging World Wide Web, including the ability to display inline graphics.

    The lead Mosaic developer was Marc Andreessen, one of the future founders of Netscape and now a principal at the venture capital firm Andreessen Horowitz (AKA “a16z”)… where he has become a major investor in, promoter of, and political champion of the current crop of AI firms.

    source

    #AI #artificialIntelligence #browser #culture #history #humans #learning #MarcAndreessen #NationalCenterForSupercomputingApplications #politics #progress #Technology #web #webBrowser
  6. “Don’t eat your seed corn”*…

    AI doesn’t really “think.” Rather, it remembers how we thought together. Are we’re about to stop giving it anything worth remembering? Bright Simons with a provocative analysis…

    We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing the size of its workforce to under 2000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.

    The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.

    But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.

    So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R  Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.

    Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course…

    [Simons sketches the history of humans’ intertwined development of both social/organizational and utile technologies, concluding…]

    … What the chain reveals is a dependency the AI industry has largely declined to examine. The underlying intelligence of a large language model isn’t a function of its architecture, its parameter count, or the volume of compute thrown at its training. It is not even about the training data. It is a function of the social complexity of the civilization whose language it digested.

    Each epoch advanced the cognitive frontier through something far richer and more complex than the isolated genius of an individual guru or machine. It did so through new forms of collective problem-solving. Think new institutions (the Greek agora, the Roman lex, the medieval university, the scientific society, the modern corporation, and the social internet) that demanded and rewarded ever more sophisticated uses of language.

    The cognitive anthropologist Edwin Hutchins studied how Navy navigation teams actually think. In his 1995 book Cognition in the Wild, he wrote something that reads today like an accidental prophecy. The physical symbol system, he observed, is “a model of the operation of the sociocultural system from which the human actor has been removed.”

    That is, with eerie precision, a description of what a large language model (LLM) really is, stripped of all the unapproachable jargon and mathematical wizardry. An LLM like ChatGPT is a model of human social reasoning with the human wrangled out. And the question nobody in Silicon Valley is asking with sufficient urgency is: What happens to the model when the social reasoning that produced its training data begins to thin?…

    [Simons explores evidence that this may already be materially underway, then explores what that “atrophy” might mean …]

    … If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.

    This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges…

    [Simons unpacks that heritage, and puts it into dialogues with recent thoughts from Dario Amodei, Leopold Aschenbrenner, and Sam Altman, concluding…]

    … The Social Edge Framework outlined here is a direct counterpoint to Amodei, Aschenbrenner, and Altman. It is a program of action to counter the human redundancy fantasy.  It challenges the self-fulfilling doom-spirals created by the premature reallocation of material resources to a vision of AI. I speak of the philosophy that underestimates the sheer amount of human priming needed to support the Great Recode of legacy infrastructure before our current civilization can even benefit substantially from AI advances.

    By “Great Recode,” I am paying homage to the simple but widely ignored fact that the overwhelming number of tools and services that advanced AI models still need to produce useful outputs for users are not themselves AI-like and most were built before the high-intensity computing era began with AI. In the unsexy but critical field of PDF parsing—one of the ways in which AI consumes large amounts of historical data to get smart—studies show that only a very small proportion of tools were created using techniques like deep learning that characterize the AI age. And in some important cases, the older tools remain indispensable. Vast investments are thus required to upgrade all or most of these tools—from PDF parsers to database schemas—to align with the pace of high-intensity computing driven by the power-thirst of AI. Yet, we are not at the point where AI can simply create its own dependencies.

    Indeed, the so-called “legacy tech debt” supposedly hampering the faster adoption of AI has in many instances been revealed as a problem of mediation and translation. AI companies are learning that they need to hire people who deeply understand legacy systems to guide this Recoding effort. A whole new “digital archaeology” field is emerging where cutting-edge tools like ArgonSense are deployed to try to excavate the latent intelligence in legacy systems and code often after rushed modernization efforts have failed. In many cases, swashbuckling new-age AI adventurers have found that mainframe specialists of a bygone age remain critical, and multidisciplinary dialogues and contentions are essential to progress on the frontier. Hence the strange phenomenon of the COBOL hiring boom. New knowledge must keep feeding on old.

    The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible…

    … The Social Edge prescription is that organizations that hire more people to work in AI-enriched, high-interaction, and transmediary roles—where AI scaffolds learning rather than substituting it—will derive greater long-term advantage than those that treat the technology as a headcount-reduction device. In a world where raw cognitive throughput has been commodified, the value arc shifts to something considerably harder to replicate: the capacity to coordinate human intent with precision, speed, and genuine depth. That edge lies in trans-mediation and high human interactionism.

    The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

    The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

    Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

    None of these individual acts is catastrophic. However, their compound effect may be.

    The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.

    Making the right strategic choices about AI is going to become a defining trait in leadership. Bloom et al. cross-country research has long established that management quality explains a substantial share of productivity variance between teams and organizations, and even countries.

    In the AI age, small differences in leadership quality can generate large differences in outcomes—a non-linear payoff I call convex leadership. The term is borrowed from options mathematics, where a convex payoff is one whose upside accelerates faster than the downside decelerates. Convex leaders convert cognitive abundance into structural ambition and thus avoid turning their creative and discovery pipelines into stagnant pools of polished busywork. Conversely, in organizations led by what we might call concave leaders—cautious, procedurally anchored, optimizing for error-avoidance—AI would tend to produce more noise than signal. Because leadership is such a major shaper of all our lives, it is in our interest to pay serious attention to its evolution in this new age.

    The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from…

    Eminently worth reading in full: “The Social Edge of Intelligence.”

    Consider also the complementary perspectives in “What will be scarce?,” from Alex Imas (via Tim O’Reilly/ @timoreilly.bsky.social)… and in the second piece featured last Monday: “Curiosity Is No Solo Act.”

    Apposite: “Some Unintended Consequences Of AI,” from Quentin Hardy.

    And finally, from the estimable Nathan Gardels, a suggestion that OpenAI’s recent paper on industrial policy for the Age of AI fills a vacuum left by an unimaginative political class and should be taken seriously, at least as a conversation starter: “OpenAI Proposes A ‘Social Contract’ For The Intelligence Age.”

    * Old agricultural proverb

    ###

    As we take the long view, we might recall that today is the anniversary of a technological advance that both fed the social edge and encouraged the build-out of the technostructure from which today’s AI hatched: on this date in 1993, Version 1.0 of the web browser Mosaic was released by the National Center for Supercomputing Applications. It was one of the first pieces of software to provide a graphical user interface for the emerging World Wide Web, and the first to display inline graphics.

    The lead Mosaic developer was Marc Andreessen, a future co-founder of Netscape and now a principal at the venture capital firm Andreessen Horowitz (AKA “a16z”)… where he has become a major investor in, promoter of, and political champion of the current crop of AI firms.

    source

    #AI #artificialIntelligence #browser #culture #history #humans #learning #MarcAndreessen #NationalCenterForSupercomputingApplications #politics #progress #Technology #web #webBrowser
  9. “Don’t eat your seed corn”*…

    AI doesn’t really “think.” Rather, it remembers how we thought together. Are we about to stop giving it anything worth remembering? Bright Simons with a provocative analysis…

    We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing the size of its workforce to under 2,000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.

    The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.

    But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.

    So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R. Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.
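
    [The study’s headline pattern — every writer better, all writers more alike — can be mimicked with toy numbers. The ratings below are invented for illustration, not the study’s data:]

    ```python
    from statistics import mean, pstdev

    # Hypothetical creativity ratings: AI assistance lifts every individual
    # score while collapsing the spread between writers.
    solo     = [4.0, 5.5, 7.0, 3.5, 6.0]
    assisted = [6.4, 6.6, 6.8, 6.3, 6.7]

    print(round(mean(solo), 2), round(mean(assisted), 2))      # 5.2 6.56
    print(round(pstdev(solo), 2), round(pstdev(assisted), 2))  # 1.29 0.19

    # Individual gain, collective loss: higher average, lower diversity.
    assert mean(assisted) > mean(solo) and pstdev(assisted) < pstdev(solo)
    ```

    [The mean rises, but the standard deviation — a crude proxy for the diversity the commons depends on — collapses.]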

    Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course…

    [Simons sketches the history of humans’ intertwined development of both social/organizational and utile technologies, concluding…]

    … What the chain reveals is a dependency the AI industry has largely declined to examine. The underlying intelligence of a large language model isn’t a function of its architecture, its parameter count, or the volume of compute thrown at its training. It is not even about the training data. It is a function of the social complexity of the civilization whose language it digested.

    Each epoch advanced the cognitive frontier through something far richer and more complex than the isolated genius of an individual guru or machine. It did so through new forms of collective problem-solving. Think new institutions (the Greek agora, the Roman lex, the medieval university, the scientific society, the modern corporation, and the social internet) that demanded and rewarded ever more sophisticated uses of language.

    The cognitive anthropologist Edwin Hutchins studied how Navy navigation teams actually think. In his 1995 book Cognition in the Wild, he wrote something that reads today like an accidental prophecy. The physical symbol system, he observed, is “a model of the operation of the sociocultural system from which the human actor has been removed.”

    That is, with eerie precision, a description of what a large language model (LLM) really is, stripped of all the unapproachable jargon and mathematical wizardry. An LLM like ChatGPT is a model of human social reasoning with the human wrangled out. And the question nobody in Silicon Valley is asking with sufficient urgency is: What happens to the model when the social reasoning that produced its training data begins to thin?…

    [Simons explores evidence that this may already be materially underway, then explores what that “atrophy” might mean …]

    … If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than its failures, create the spiral: a slow, self-defeating attenuation of the very substrate it feeds on.
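The claimed dynamic can be sketched as a toy feedback loop: capability tracks the richness of the social substrate, while each deployment cycle erodes that substrate slightly. The functional forms and rates below are invented assumptions purely to illustrate the shape of the spiral, not a model of anything measured:

```python
# Toy dynamical sketch of the self-undermining loop (illustrative only):
# capability draws on the substrate; deployment thins the substrate.

substrate = 1.0        # richness of human-generated language (arbitrary units)
capability = []

for step in range(10):
    cap = substrate ** 0.5   # capability rises with substrate richness
    capability.append(cap)
    substrate *= 0.95        # assumed 5% thinning per deployment cycle

print([round(c, 3) for c in capability])  # a slow, monotone decline
```

Under these assumptions the decline is gentle at every step—no single cycle is catastrophic—yet the trajectory only points one way, which is the compounding logic the paradox turns on.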

    This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges…

    [Simons unpacks that heritage, and puts it into dialogues with recent thoughts from Dario Amodei, Leopold Aschenbrenner, and Sam Altman, concluding…]

    … The Social Edge Framework outlined here is a direct counterpoint to Amodei, Aschenbrenner, and Altman. It is a program of action to counter the human redundancy fantasy. It challenges the self-fulfilling doom-spirals created by the premature reallocation of material resources to a vision of AI—a philosophy that underestimates the sheer amount of human priming needed to support the Great Recode of legacy infrastructure before our current civilization can benefit substantially from AI advances.

    By “Great Recode,” I am paying homage to the simple but widely ignored fact that the overwhelming majority of tools and services that advanced AI models still depend on to produce useful outputs are not themselves AI-like, and most were built before AI ushered in the high-intensity computing era. In the unsexy but critical field of PDF parsing—one of the ways AI ingests large amounts of historical data to get smart—studies show that only a small proportion of tools were created using techniques, like deep learning, that characterize the AI age. And in some important cases, the older tools remain indispensable. Vast investments are thus required to upgrade most of these tools—from PDF parsers to database schemas—to keep pace with the high-intensity computing driven by AI’s thirst for power. We are not yet at the point where AI can simply create its own dependencies.

    Indeed, the so-called “legacy tech debt” supposedly hampering the faster adoption of AI has in many instances been revealed as a problem of mediation and translation. AI companies are learning that they need to hire people who deeply understand legacy systems to guide this Recoding effort. A whole new “digital archaeology” field is emerging where cutting-edge tools like ArgonSense are deployed to try to excavate the latent intelligence in legacy systems and code often after rushed modernization efforts have failed. In many cases, swashbuckling new-age AI adventurers have found that mainframe specialists of a bygone age remain critical, and multidisciplinary dialogues and contentions are essential to progress on the frontier. Hence the strange phenomenon of the COBOL hiring boom. New knowledge must keep feeding on old.

    The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible…

    … The Social Edge prescription is that organizations that hire more people to work in AI-enriched, high-interaction, and transmediary roles—where AI scaffolds learning rather than substituting for it—will derive greater long-term advantage than those that treat the technology as a headcount-reduction device. In a world where raw cognitive throughput has been commodified, the value arc shifts to something considerably harder to replicate: the capacity to coordinate human intent with precision, speed, and genuine depth. That edge lies in transmediation and high human interaction.

    The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

    The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

    Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

    None of these individual acts is catastrophic. However, their compound effect may be.

    The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized to organizational cultures.

    Making the right strategic choices about AI is going to become a defining trait of leadership. Bloom et al.’s cross-country research has long established that management quality explains a substantial share of productivity variance between teams and organizations, and even countries.

    In the AI age, small differences in leadership quality can generate large differences in outcomes—a non-linear payoff I call convex leadership. The term is borrowed from options mathematics, where a convex payoff is one whose upside accelerates faster than the downside decelerates. Convex leaders convert cognitive abundance into structural ambition and thus avoid turning their creative and discovery pipelines into stagnant pools of polished busywork. Conversely, in organizations led by what we might call concave leaders—cautious, procedurally anchored, optimizing for error-avoidance—AI would tend to produce more noise than signal. Because leadership is such a major shaper of all our lives, it is in our interest to pay serious attention to its evolution in this new age.
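The options-mathematics borrowing can be made concrete with a minimal sketch. The payoff functions below are the textbook convex (call-option-like) and concave shapes, used here only to illustrate the asymmetry the author is gesturing at; the "leadership" mapping is metaphorical, not a model:

```python
# Convexity in the options sense: for equal-sized shocks around a
# reference point, the convex payoff's upside gain exceeds its
# downside loss. (Toy illustration of the "convex leadership" metaphor.)

def convex_payoff(x: float) -> float:
    """Call-option-like payoff: downside capped at zero, upside open."""
    return max(x - 1.0, 0.0)

def concave_payoff(x: float) -> float:
    """Mirror image: upside capped at zero, downside open."""
    return min(x - 1.0, 0.0)

shock = 0.5
# Symmetric shocks around x = 1.0:
up_gain = convex_payoff(1.0 + shock) - convex_payoff(1.0)    # 0.5
down_loss = convex_payoff(1.0) - convex_payoff(1.0 - shock)  # 0.0
print(up_gain, down_loss)  # asymmetry favors the upside
```

A concave payoff shows the reverse asymmetry: losses from bad shocks outrun gains from equally good ones, which is the sense in which cautious, error-avoidance-optimized leadership turns abundance into noise.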

    The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from…

    Eminently worth reading in full: “The Social Edge of Intelligence.”

    Consider also the complementary perspectives in “What will be scarce?,” from Alex Imas (via Tim O’Reilly/@timoreilly.bsky.social)… and in the second piece featured last Monday: “Curiosity Is No Solo Act.”

    Apposite: “Some Unintended Consequences Of AI,” from Quentin Hardy.

    And finally, from the estimable Nathan Gardels, a suggestion that OpenAI’s recent paper on industrial policy for the Age of AI fills a vacuum left by an unimaginative political class and should be taken seriously, at least as a conversation starter: “OpenAI Proposes A ‘Social Contract’ For The Intelligence Age.”

    * Old agricultural proverb

    ###

    As we take the long view, we might recall that today is the anniversary of a technological advance that both fed the social edge and encouraged the build-out of the technostructure from which today’s AI hatched: on this date in 1993, Version 1.0 of the web browser Mosaic was released by the National Center for Supercomputing Applications. It was the first software to provide a graphical user interface for the emerging World Wide Web, including the ability to display inline graphics.

    The lead Mosaic developer was Marc Andreessen, one of the future founders of Netscape, and now a principal at the venture capital firm Andreessen Horowitz (AKA “a16z”)… where he has become a major investor in, promoter of, and political champion of the current crop of AI firms.

    source

    #AI #artificialIntelligence #browser #culture #history #humans #learning #MarcAndreessen #NationalCenterForSupercomputingApplications #politics #progress #Technology #web #webBrowser