home.social

#platformgovernance — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #platformgovernance, aggregated by home.social.

  1. When Rules Mean Whatever They Want

    By Cliff Potts, CSO and Editor-in-Chief of WPS News

    Baybay City, Leyte, Philippines — May 7, 2026

    Governance by Temper Tantrum

    At a certain point, the behavior of a system becomes so erratic that technical explanations stop being useful.

    The only analogy that fits TikTok’s management style at scale is this: an ill-behaved fourteen-year-old who just had his Xbox taken away, locked in a room with the one thing he still controls — the platform — and determined to use it to punish, mock, and toy with everyone else.

    That may sound flippant. It isn’t.

    Because when governance becomes reactive, punitive, and arbitrary, the problem is no longer incompetence. It is immaturity.

    Acting Out as a Control Strategy

    Mature systems behave predictably. Immature ones act out.

    On TikTok, enforcement does not feel reasoned or corrective. It feels emotional. Sudden. Spiteful. As if the platform itself is responding to perceived slights rather than applying policy.

    Creators wake up throttled. Sellers lose visibility without warning. Content is removed with boilerplate explanations that explain nothing. Appeals are ignored or answered by automation that clearly does not understand the question being asked.

    This is not discipline. It is lashing out.

    Punishment as Entertainment

    There is an unmistakable undertone to how penalties are applied: not merely corrective, but performative.

    People are not just penalized. They are humiliated through silence. Through disappearance. Through unexplained loss of reach. Through the quiet implication that you must have done something wrong, even when no one can say what that was.

    That dynamic mirrors troll culture precisely.

    Confusion is the joke. Scrambling is the joke. Watching people guess at invisible rules is the joke.

    A Sanitized Troll Board With Ad Revenue

    Viewed through this lens, TikTok starts to resemble something uncomfortably familiar: a cleaned-up, advertiser-friendly version of an old troll forum.

    Not as overt. Not as explicit. But driven by the same underlying pleasure in disruption.

    The system rewards chaos. It punishes stability. It amplifies nonsense while smothering consistency. It treats seriousness as a liability and volatility as fuel.

    It is what happens when troll logic is given a revenue model and a global audience.

    Why This Matters for Commerce

    Troll systems are incompatible with commerce.

    Serious businesses cannot operate on a platform where enforcement feels like mood swings. Sellers cannot invest time, inventory, or reputation into an ecosystem that behaves as though it enjoys pulling the rug out from under participants.

    Commerce requires adulthood:

    • Clear rules
    • Consistent enforcement
    • Transparent correction
    • Predictable outcomes

    What TikTok offers instead is impulse and spectacle.

    The Problem Is Not Tone — It’s Power

    This is not about being offended by style. It is about recognizing risk.

    When a platform with massive influence behaves like an adolescent with unchecked authority, the danger is not embarrassment. It is harm.

    Users adapt by self-censoring, fragmenting, or leaving quietly. Sellers absorb losses without recourse. Consumers lose trust without ever being told why.

    And TikTok continues forward as if this is all normal.

    Calling It What It Is

    Maturity in governance is not optional once power reaches a certain scale.

    When rules mean whatever the platform feels like enforcing that day, governance has failed. When punishment feels mocking rather than corrective, legitimacy is already gone.

    This essay does not accuse TikTok of malice. It accuses it of childishness — and of wielding enormous power without the restraint that power requires.

    That may be worse.

    For more social commentary and excellent fiction, see Occupy 2.5 at https://Occupy25.com

    This essay will be archived to the WPS News Monthly Brief available through Amazon.

    #contentModeration #digitalEthics #platformGovernance #socialMediaRisk #TikTok #TikTokShop
  2. Trust and safety used to be seen as the internet’s cleanup crew.

    After my conversation with Yoel Roth, SVP & Head of Trust & Safety at Match Group, I think that framing is outdated.

    Watch the full YouTube video: youtu.be/cj577gj8mzg

    The better framing: trust and safety is now platform governance.

    Full conversation with Yoel Roth on Analyse Podcast.

    #TrustAndSafety #AIGovernance #PlatformGovernance #DigitalTrust #AnalysePodcast

  3. Fediverse explained – powerful shift in control over social media and data

    Fediverse explained reveals how decentralized social media gives users control over data, privacy, and speech beyond corporate platforms.

    thedemocracyadvocate.com/news-

  4. And what about X (formerly Twitter)?

    Since Musk took over, studies show X now has a higher concentration of misinformation than Facebook, though a lower total volume due to its smaller user base.

    Facebook still "wins" for overall harm, but TikTok is the most concerning rising threat.

    Moderation matters. Algorithms amplify. And platform design isn't neutral.

    #Misinformation #SocialMedia #TechPolicy #PlatformGovernance #digizenLK

  5. 🗣️⚖️ Who gets heard in digital governance?
    Rachel Griffin & Mateus Correia de Carvalho explore how civil society shapes EU platform rules 🇪🇺 – and why some voices are left out.
    Their work highlights inequalities in resources 💸, access 🏛️, and recognition 👁️ that shape how risks are defined and governed.
    🔗 dsa-observatory.eu/2026/02/17/
    #EURegulation #DigitalJustice #PlatformGovernance #ResponsibleAI #RCTrust

    For the first time, the EU is forcing X to make changes under the Digital Services Act, including a €120M fine and adjustments to the "verification" system. An important precedent.
    But as long as platforms only correct verification, ad transparency, and data access under pressure, this also shows how weak governance remains without enforcement. Regulation works, but only when it is actually applied.

    sciencemediacenter.de/angebote

    #DSA #PlatformGovernance #DigitalPolicy

  7. 🤣 Microsoft is going all-in with its automated moderation tools, blocking Discord messages just for saying "MicroSlop." The meme, popular among annoyed devs to poke fun at buggy or bloated software, is now being flagged by Microsoft’s AI as "hate speech."

    Honestly, this is what happens when a company lets algorithms handle "cultural sensitivity" on autopilot. Sure, technically "slop" can be an insult, but the AI totally misses the inside joke that devs are making. Instead of listening to what people are actually saying about their products, Microsoft just turbocharged the meme, and now it’s guaranteed to go viral even faster. Classic Streisand Effect.

    🧠 Automated moderation keeps tripping over inside jokes and creative digs.
    ⚡ The AI just can’t tell the difference between an honest rant and real harassment.
    🎓 Now everyone’s getting clever with new euphemisms to dodge the ban.
    🔍 This move accidentally brought gamers and devs together; everyone’s roasting Microsoft now.

    fastcompany.com/91501766/micro
    #AIModeration #PlatformGovernance #TechCulture #Microsoft #MicroSlop #Censorship #Freedom #Software #Discord

  8. "While the DSA has created an obligation for platforms to identify and mitigate systemic risks in Europe, the first two years of risk assessments rely heavily on high-level company descriptions of policies, tools, and user controls. Assessments provide extremely limited detail into whether any of these interventions meaningfully reduce harm, particularly for minors. By contrast, US litigation is surfacing previously unreleased internal platform data, experiments, and deliberations that reveal how platforms internally measure risk and define acceptable trade-offs related to risk, engagement, and revenue. But US litigation is largely reactive and limited to the facts of each specific case.

    For example, internal company data released in US litigation shows that key safety mitigations – including screentime management tools, take a break reminders, parental controls, among others – suffer from extremely low adoption rates, often below 2% of minor users. Internal documents also suggest the design of these features may undermine effectiveness: TikTok leadership initially imposed “guardrail” metrics requiring that new screentime tools reduce usage by no more than 5%, while Meta’s internal projections accurately predicted that 99% of teens would not use optional opt-in take a break features.

    The evidence emerging from DSA systemic risk assessments and US platform litigation underscores a central gap in current approaches to platform governance: risks are increasingly well-described, but mitigations are rarely communicated using rigorous, outcome-oriented data and evidence."

    kgi.georgetown.edu/research-an

    #SocialMedia #EU #USA #DSA #TikTok #Instagram #Algorithms #Meta #Facebook #PlatformGovernance #MentalHealth

  9. It’s out! 🎉 My new paper is published.

    I examine how Instagram users practice digital vigilantism to fight botting & porn bots — taking authenticity governance into their own hands. The study highlights user-driven surveillance and platform power asymmetries.

    Part of the special issue “Digital Platforms and Agency” in Lateral (CSA), edited by Reed van Schenck & Elaine Venter.

    Read it here:
    csalateral.org/section/digital

    #PlatformGovernance #DigitalCulture #InternetStudies #culturalstudies

  10. Our recent blog examines how these measures operated in practice, looking at the legal framework under the IT Act, the role of platform geo-blocking, and the use of executive advisories and criminal law during crisis situations.

    Read here: sflc.in/content-blocking-and-c #ContentBlocking #DigitalGovernance #InternetRegulation #Section69A #PlatformGovernance #FreedomOfExpression #MediaFreedom

  11. ⚖️🇪🇺 New at RC Trust since January 2026: Rachel Griffin!

    As a Postdoctoral Researcher, Rachel works on EU platform regulation and questions of structural injustice – from online violence and algorithmic bias to the political power of large tech platforms. Her research also examines how “risk” is defined and governed in digital regulation.

    At RC Trust, she’s expanding this work to AI regulation, collaborating across disciplines.

    #EURegulation #PlatformGovernance #DigitalJustice #ResponsibleAI

  12. One year ago Meta reversed its political content reduction policy. New research on 2.5M Facebook posts from Italian MPs:

    72% reach reduction for parliamentarians
    Effects detected 10 months BEFORE Meta's "global rollout"
    Extremists gained reach (+14%) while elected officials lost it

    🔗 osf.io/preprints/socarxiv/8dqa
    #Meta #Facebook #DSA #PlatformGovernance #Transparency

    Adam Mosseri (Instagram) argues that AI labels will lose effectiveness over time, because AI-generated fakes keep getting more convincing. It would be more practical, he suggests, to cryptographically fingerprint real media, for example directly at capture, than to keep chasing synthetic content. A notable shift in perspective: from detecting the fake to verifying the real.

    #AIandMedia #ContentAuthenticity #DigitalTrust #PlatformGovernance
    engadget.com/social-media/inst
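    The capture-time fingerprinting described above can be sketched in a few lines. This is an illustrative toy only, not Instagram's actual mechanism: `DEVICE_KEY`, `fingerprint_at_capture`, and `verify` are hypothetical names, and a real deployment would use asymmetric signatures held in secure hardware (e.g., C2PA-style signed provenance manifests) rather than a shared HMAC key.

```python
import hashlib
import hmac

# Hypothetical device key; a real system would keep a private key in secure hardware.
DEVICE_KEY = b"example-device-key"

def fingerprint_at_capture(media_bytes: bytes) -> dict:
    """Hash the raw media at capture time and sign the digest,
    so authenticity can later be verified instead of chasing fakes."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify(media_bytes: bytes, record: dict) -> bool:
    """Recompute the digest and check it against the signed record."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])
```

    Any edit to the media bytes changes the digest, so tampered or wholly synthetic content simply fails verification; nothing about the fake itself needs to be detected.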

  14. “Two decades of #BigTech funding for safety science has taught us that the #grantwashing playbook works every time. Internally, corporate leaders pacify passionate employees with token actions that seem consequential. External scientists take the money, get inconclusive results, and lose public trust. Policymakers see what looks like responsible self regulation from a powerful industry and backpedal calls for change.”
    techpolicy.press/beware-of-ope
    #research #PlatformGovernance

  15. The EU has fined X €120M under the Digital Services Act for transparency-related violations, including gaps in political ad repositories and restrictions on researcher access. X has stated it disagrees with the decision.

    For the security community, this raises important questions about:
    • the role of data access in identifying influence operations
    • how platforms can support threat research at scale
    • how regulatory frameworks may evolve across regions

    Thoughts on how transparency and researcher access should be structured for large platforms?

    Source: therecord.media/eu-fines-x-und

    💬 Join the conversation
    🔁 Boost & Follow for more neutral cybersecurity insights

    #Infosec #CyberSecurity #DSA #Transparency #PlatformGovernance #ThreatResearch #DigitalPolicy #OnlineSafety #Disinformation #TechRegulation

  16. 🤝🌎 Excited to kick off the third installment of #Digimeet today, bringing together around 80 digital researchers from 16 countries across the EU, UK, South America, the US & India!
    Our focus this year is a deep dive into #PlatformGovernance & #Power. We'll be exploring the latest global developments, zeroing in on the underlying power dynamics, societal implications, and technological advancements shaping policy discourse today.
    #DigitalResearch #PlatformRegulation #TechPolicy
    @bidt @CAISnrw

  17. YouTube's playing parole officer, allowing "some" banned creators to request new channels. It's a "pilot program" with criteria fuzzier than CSS on IE6. They won't get their old channels back, so it's a fresh start, from scratch.

    If you're getting a "second chance," who decides if you're qualified? What's your take on digital rehabilitation?

    engadget.com/big-tech/youtube-
    #YouTube #ContentPolicy #TechNews #CreatorEconomy #PlatformGovernance