home.social

#publicbenefitcorporation — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #publicbenefitcorporation, aggregated by home.social.

  1. There's a pretty neat app called #Finch that does #gamification for #taskManagement / #organizing / #selfcare stuff.

    It helps me keep track of what I should be posting when and where and reminds me of some of the things I need to do. All while sharing #todo items with people.

    Finch Cares is a #PublicBenefitCorporation and the application is gratis. I've paid for it because it's useful and I wanted more vanity items. lol

    PM your friend code or ask for mine.

    finchcare.com
    finch.fandom.com/wiki/Finch:_S

  6. CW: super-technical discussion on Bluesky moderation

    @michael

    "The only way out of this box is if independent/OSS developers build on top of the underlying AT protocol to create a new AppView, Relay, etc., that can federate with the BlueSky PBC."

    But would Bluesky corporate even allow "federation" with the mothership Bluesky DID:PLC through a non-corporate PDS?

That would seem to be the question of the hour.

    #Bluesky #ATProtocol #PDS #PLC #DID #PublicBenefitCorporation

  7. @fixatedpersonsunit

    "Keeping reading 'decentralised' in relation to Bluesky, which I assume refers to its infrastructure in some way..."

    *sigh*

    This "decentralized" / "federated" myth was a smokescreen developed around the time that Bluesky began to allow individuals to host their own PDS

    "Self-hosting a Bluesky PDS means running your own Personal Data Server that is capable of federating with the wider Bluesky social network."

    Except there never was any "wider Bluesky social network" because every PDS had to communicate --> back <-- to the single, one-and-only Bluesky DID

    Of course, this minor detail /sarcasm is almost undocumented

    See here for gory details: web.plc.staging.bsky.dev/

    #Bluesky #ATProtocol #PDS #PLC #DID #PublicBenefitCorporation

  8. @Leeisme

    "More importantly though this user was supposedly on a separate instance. She was still able to lock him out of his account and perma ban him because #BlueSky isn't decentralized."

    Let me take a wild shot at translation:

    "this user" was running / was on his own #PDS but any PDS is isolated and alone unless it communicates back to Bluesky itself through the single Bluesky #DID

    You can see a suggestion of this in any Bluesky URL:

    bsky . app / profile / did:plc: ...

    Also see on Github, PDS Support, here: discord.com/channels/120702437

    #Bluesky #ATProtocol #PDS #PLC #DID #PublicBenefitCorporation
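    The centralization point made in the posts above can be sketched in code. When a client resolves a `did:plc` identity, it fetches a DID document (in practice from the single, centrally operated `plc.directory` registry) and reads the PDS service endpoint out of it. The DID document below is a hypothetical example for illustration, not real data; the service type string follows the AT Protocol convention as I understand it.

    ```python
    # Sketch: extracting the PDS host from a did:plc DID document.
    # In the live network, the document itself is fetched from
    # https://plc.directory/<did> -- one central registry, which is
    # exactly the dependency the posts above describe.

    def pds_endpoint(did_doc: dict) -> str | None:
        """Return the PDS service endpoint from a DID document, if any."""
        for svc in did_doc.get("service", []):
            if svc.get("type") == "AtprotoPersonalDataServer":
                return svc.get("serviceEndpoint")
        return None

    # Hypothetical DID document (identifier and host are made up).
    example_doc = {
        "id": "did:plc:example123",
        "service": [
            {
                "id": "#atproto_pds",
                "type": "AtprotoPersonalDataServer",
                "serviceEndpoint": "https://pds.example.org",
            }
        ],
    }

    print(pds_endpoint(example_doc))  # https://pds.example.org
    ```

    However many independent PDS hosts exist, every resolution path runs through the one directory that serves these documents, which is the sense in which "self-hosting" still communicates back to a single registry.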

  9. OpenAI & Microsoft Sign MOU for PBC Structure

    OpenAI nonprofit retains control and gains PBC equity exceeding US$100 billion through new MOU with Microsoft.

    olamnews.com/technology/ai/171

  10. Why public benefit corporations won’t fix the ethics of platform capitalism

    I wrote a couple of months ago about my scepticism that Bluesky will retain its ethical stances in the face of investor pressure. There’s no path to federation they’ve committed to, at a point where they’d be relatively free to do so, making it seem unlikely they’ll gut the commercialisation model at a future point when investors could push back. The obvious retort to this is that Bluesky is a public benefit corporation but, as Catherine Bracy points out in the (excellent) World Eaters, from pg 189:

    While PBCs are a positive development in corporate governance, moving away from the misguided concept of shareholder supremacy that has dominated capitalism for the last century, they still have significant shortcomings. The biggest is that they don’t require companies to behave a certain way. They just provide protection for those executives who choose to put mission over profit. The companies that want to enact stricter protocols that mandate certain behavior no matter who is in charge are mostly left to create their own governance structures.

    In other words it provides internal cover for sustaining commitment to a mission but it’s still dependent on motivated actors, who are operating within a system of incentives which makes it difficult to sustain a mission beyond growth and profitability. It doesn’t ‘lock in’ the mission, only ensures that it remains formally on the agenda in a discursive sense. Consider OpenAI’s hybrid structure which is arguably closer to a ‘lock in’ than being a public benefit corporation. From pg 189 of the same book:

    There are a few notable examples of these bespoke structures in tech, most famously the one employed by OpenAI, which puts the for-profit entity that develops and markets ChatGPT under the control of a nonprofit whose mission is to “ensure that artificial general intelligence benefits all of humanity.” The company also places a cap on the amount of returns that investors in the for-profit entity can make, an interesting indicator that it understands just how much investor returns can influence product and business model decisions.

    And Anthropic’s even more onerous hybrid structure, from pg 190:

    One of OpenAI’s main competitors, Anthropic AI (which was founded by a breakaway faction of OpenAI employees who were even more concerned about AI safety risks), also has constructed a bespoke governance model with the intention of protecting the company’s mission from the vagaries of investor demands. Anthropic’s model is a hybrid. They are incorporated as a Public Benefit Corporation in Delaware, but they have also created what they call a Long-Term Benefit Trust (LTBT) that, by 2027, will have the authority to select a majority of the company’s board members. The trustees who oversee the LTBT are selected based on their commitment to and expertise around the safe deployment of artificial intelligence and will have no financial stake in the company. The terms of the trust arrangement also require the company to report to the trustees “actions that could significantly alter the corporation or its business.”

We’ve already seen Altman begin to dismantle OpenAI’s governance structure, supported by a workforce who, Bracy suggests, rallied around him after the sacking due to concerns about the value of their stock options. I think Altman’s motives have as much to do with power, particularly vis-à-vis the board, as profit in driving this dismantling of governance structures he played a significant role in designing. But fundraising will generically play a role in driving resistance to these governance structures, as Bracy notes on pg 192:

    The ability to raise money while adopting an alternative structure also reflects an enormous amount of privilege on the part of these companies’ founders. The vast majority of entrepreneurs are not able to drive the kind of bargain Altman and the Anthropic team did with their investors, even in times when VCs have more money to invest than they know what to do with. Even Altman found it difficult, telling me, “It was very hard to raise under this structure. Most investors looked at it and said ‘absolutely not, I’m not capping my profits.’ ” Creating a system in which any founder can do what Altman and his cofounders did will require much deeper structural change.

While I hope Anthropic’s governance structure remains intact, not least of all because I think a reactionary Claude would be the most dangerous of the frontier models, the idea that public benefit corporations and complex governance mechanisms (consider Meta’s oversight board as well) will be sufficient to produce ethical outcomes is self-evidently implausible. The problem, as Bracy argues in a really incisive book, arises from the incentive structure of the innovation ecosystem itself. From pg 169:

    That process, of continuously raising more venture capital in order to demonstrate value to future-round funders rather than focusing on building a solid business with strong fundamentals, is what creates bubbles. It is, more than any inherent risk associated with investing in startups, why Silicon Valley is such a boom-bust sector. Given what’s at stake for venture capitalists, it is extremely difficult for founders to find off-ramps that might allow them to retain control of their companies and operate in accordance with what’s best for customers, employees, and the long-term sustainability of the business instead of what will create the highest valuation in the venture capital marketplace.

What she’s talking about here could be framed in terms of the interplay of the micro-social (founders, VC partners and key staff seeking fame and fortune) and the meso-social (the organisational dynamics of growing a firm under these conditions) within a very specific structure of incentives provided by the innovation ecosystem and the political, legal and economic climate of late neoliberalism. The turn towards public benefit corporations and ethical governance is a welcome shift but it does nothing to change the overarching context, nor does it produce fundamentally different types of firms.

    #AI #anthropic #artificialIntelligence #BlueSKy #business #CatherineBracy #finance #investment #investors #openAI #platformCapitalism #politicalEconomy #publicBenefitCorporation #samAltman

  14. Why public benefit corporations won’t fix the ethics of platform capitalism

    I wrote a couple of months ago about my scepticism that Bluesky will retain its ethical stances in the face of investor pressure. There’s no path to federation they’ve committed to, at a point where they’d be relatively free to do so, making it seem unlikely they’ll gut the commercialisation model at a future point when investors could push back. The obvious retort to this is that Bluesky is a public benefit corporation but, as Catherine Bracy points out in the (excellent) World Eaters, from pg 189:

    While PBCs are a positive development in corporate governance, moving away from the misguided concept of shareholder supremacy that has dominated capitalism for the last century, they still have significant shortcomings. The biggest is that they don’t require companies to behave a certain way. They just provide protection for those executives who choose to put mission over profit. The companies that want to enact stricter protocols that mandate certain behavior no matter who is in charge are mostly left to create their own governance structures.

    In other words it provides internal cover for sustaining commitment to a mission but it’s still dependent on motivated actors, who are operating within a system of incentives which makes it difficult to sustain a mission beyond growth and profitability. It doesn’t ‘lock in’ the mission, only ensures that it remains formally on the agenda in a discursive sense. Consider OpenAI’s hybrid structure which is arguably closer to a ‘lock in’ than being a public benefit corporation. From pg 189 of the same book:

    There are a few notable examples of these bespoke structures in tech, most famously the one employed by OpenAI, which puts the for-profit entity that develops and markets ChatGPT under the control of a nonprofit whose mission is to “ensure that artificial general intelligence benefits all of humanity.” The company also places a cap on the amount of returns that investors in the for-profit entity can make, an interesting indicator that it understands just how much investor returns can influence product and business model decisions.

    And Anthropic’s even more onerous hybrid structure, from pg 190:

    One of OpenAI’s main competitors, Anthropic AI (which was founded by a breakaway faction of OpenAI employees who were even more concerned about AI safety risks), also has constructed a bespoke governance model with the intention of protecting the company’s mission from the vagaries of investor demands. Anthropic’s model is a hybrid. They are incorporated as a Public Benefit Corporation in Delaware, but they have also created what they call a Long-Term Benefit Trust (LTBT) that, by 2027, will have the authority to select a majority of the company’s board members. The trustees who oversee the LTBT are selected based on their commitment to and expertise around the safe deployment of artificial intelligence and will have no financial stake in the company. The terms of the trust arrangement also require the company to report to the trustees “actions that could significantly alter the corporation or its business.”

    We’ve already seen Altman begin to dismantle OpenAI’s governance structure, supported by a workforce who, Bracy suggests, rallied around him after the sacking due to concerns about the value of their stock options. I think Altman’s motives have as much to do with power, particularly vis-à-vis the board, as with profit in driving this dismantling of governance structures he played a significant role in designing. But fundraising will more generally play a role in driving resistance to these governance structures, as Bracy notes on pg 192:

    The ability to raise money while adopting an alternative structure also reflects an enormous amount of privilege on the part of these companies’ founders. The vast majority of entrepreneurs are not able to drive the kind of bargain Altman and the Anthropic team did with their investors, even in times when VCs have more money to invest than they know what to do with. Even Altman found it difficult, telling me, “It was very hard to raise under this structure. Most investors looked at it and said ‘absolutely not, I’m not capping my profits.’ ” Creating a system in which any founder can do what Altman and his cofounders did will require much deeper structural change.

    While I hope Anthropic’s governance structure remains intact, not least because I think a reactionary Claude would be the most dangerous of the frontier models, the idea that public benefit corporations and complex governance mechanisms (consider Meta’s oversight board as well) will be sufficient to produce ethical outcomes is self-evidently implausible. The problem, as Bracy argues in a really incisive book, arises from the incentive structure of the innovation ecosystem itself. From pg 169:

    That process, of continuously raising more venture capital in order to demonstrate value to future-round funders rather than focusing on building a solid business with strong fundamentals, is what creates bubbles. It is, more than any inherent risk associated with investing in startups, why Silicon Valley is such a boom-bust sector. Given what’s at stake for venture capitalists, it is extremely difficult for founders to find off-ramps that might allow them to retain control of their companies and operate in accordance with what’s best for customers, employees, and the long-term sustainability of the business instead of what will create the highest valuation in the venture capital marketplace.

    What she’s talking about here could be framed in terms of the interplay of the micro-social (founders, VC partners and key staff seeking fame and fortune) and the meso-social (the organisational dynamics of growing a firm under these conditions) within a very specific structure of incentives provided by the innovation ecosystem and the political, legal and economic climate of late neoliberalism. The turn towards public benefit corporations and ethical governance is a welcome shift, but it does nothing to change the overarching context, nor does it produce fundamentally different types of firms.

    #anthropic #BlueSKy #CatherineBracy #investment #investors #platformCapitalism #politicalEconomy #publicBenefitCorporation #samAltman

  15. OpenAI returns (partially) to its roots: instead of going fully for-profit, control remains with the non-profit organisation. A signal of greater responsibility – or just image management? You can read what this means for the future of AI here: #OpenAI #PublicBenefitCorporation #KI 👇
    all-ai.de/news/top-news24/open

  16. @osma

    "it's a #publicBenefitCorporation"

    that has no legal bite

    it means a company *may* pursue an ill-defined "public benefit" instead of profit

    it doesn't *have* to

    shallow feel good marketing

    "it's #decentralized"

    in theory

    like #cryptocurrency hype, #bluesky promises a lot and delivers little, and #FOMO dum dums buy into the promise rather than the reality

    it's only a matter of when, not if, the #crypto bro owners of bluesky do to it what #musk did to #twitter

    bsky.social/about/blog/10-24-2

  17. @kithrup @Dianora @JonChevreau @mardigroan

    false

    a #publicbenefitcorporation, as a legal definition, simply says a company *may* (it doesn't have to) consider public benefit instead of #profit

    a #nonprofit like #mastodon can indeed be corrupted, but that's a far better position (the board may be corrupted someday) than #bluesky (which is already at the whims of whoever moves in and buys shares)

  18. CW: Long thread/12

    They raised more capital, and used that to create a nice place for independent artists, who piled into the platform and provided millions of unpaid hours of creative labor to help the founders increase its value. The founders and their investors turned the company into a #PublicBenefitCorporation which meant they had an obligation to serve the public benefit.

    12/

  19. "In June 2023, the servers just started returning errors, making nine years of member contributions inaccessible, apparently forever — every post, artwork, song, portfolio, and the community built there was gone in an instant." waxy.org/2024/01/the-quiet-dea h/t @ntnsndr re: downfall of a #PublicBenefitCorporation et al.

  20. @ChrisMayLA6 @MattMastodon
    So what is the 'asset lock' system in place? I've looked up en.wikipedia.org/wiki/Communit and the linked article on US #publicBenefitCorporation and haven't seen any specifics about their governance? Can they be controlled e.g. by an investment fund?

    It looks like it's all up to the regulating authority. For its part, the French system is much more self-regulated (though there is also regulation in France, of course, when it comes to determining which #taxation regime applies)