home.social

#data-minimization — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #data-minimization, aggregated by home.social.

  1. Do not collect the data that you do not need.

    Do not collect the data
    that you do not need.

    Do not collect
    the data
    that you
    do not
    need
    .

    #DataMinimization #Privacy #DigitalRights

  6. GM Faces $12.75M Penalty for Illicit Driver Data Sales

    General Motors has been hit with a record $12.75 million penalty for selling California drivers' data without their consent, despite promising to protect their privacy. This landmark case marks a major victory for data protection, with California's Attorney General Rob Bonta leading the charge.

    osintsights.com/gm-faces-1275m

    #CaliforniaConsumerPrivacyAct #DataMinimization #GeneralMotors #Onstar #SmartDriver

  7. Data minimization is not just a privacy principle.

    It is a trust control.

    If an app’s mission is to deliver official updates and public information, every nonessential data touch becomes a governance question.

    Not “can we collect this?”
    but “why do we need it at all?”

    #CyberSecurity #Privacy #Governance #DataMinimization

  11. Loblaw breach: phone numbers exposed. Unlike email addresses, phone numbers are permanent identifiers that enable cross-referencing attacks.

    WIGGWIGG's data minimization: temp numbers that forward then delete. Zero permanent records.

    Source: bnnbloomberg.ca/business/compa

    Early Access Q2 2026 → wiggwigg.ca

    #DataBreach #WIGGWIGG #DataMinimization

  15. No funnels. No paywalls. No permission required. wzrdgang.com #wzrdgang #dataminimization

  16. Policy shift with technical implications.
    The European Parliament endorsed an opinion proposing:
    • Social media ban under 13
    • Parental consent under 16
    • Privacy-preserving age assurance mechanisms
    • Expanded regulation under the Digital Fairness Act

    Security and engineering considerations:
    Zero-knowledge proof-based age verification?
    On-device age estimation vs centralized ID checks?

    Data minimization vs compliance logging requirements?

    AI-driven manipulation detection standards?
    Age verification at EU scale introduces non-trivial architectural challenges, particularly around privacy-by-design and cross-border enforcement.

    From a security architecture perspective:
    Can platforms implement robust age controls without increasing identity exposure risks?
    Engage below.

    Source: therecord.media/eu-lawmakers-p

    Follow @technadu for cybersecurity, AI governance, and digital compliance analysis.
    Repost to inform the security community.

    #Infosec #AgeVerification #PrivacyEngineering #DigitalPolicy #EURegulation #AIgovernance #PlatformSecurity #DataMinimization #CyberCompliance #OnlineSafety
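
The "privacy-preserving age assurance" point in the post above can be made concrete with a minimal-disclosure sketch: the platform receives only a signed boolean claim and never sees the birthdate. Everything here (the issuer key, function and field names) is a hypothetical illustration; production systems would use verifiable credentials or zero-knowledge proofs rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json
from datetime import date

# Placeholder secret: a real issuer (e.g. an identity provider) holds this key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_claim(birthdate: date, threshold_years: int, today: date) -> dict:
    """Runs on the user's device / at the issuer: derives a boolean claim."""
    cutoff = date(today.year - threshold_years, today.month, today.day)
    claim = {"over_threshold": birthdate <= cutoff, "threshold": threshold_years}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim  # contains no birthdate at all

def verify_age_claim(claim: dict) -> bool:
    """Runs at the platform: checks the signature, learns only the boolean."""
    payload = json.dumps(
        {k: claim[k] for k in ("over_threshold", "threshold")}, sort_keys=True
    ).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["sig"]) and claim["over_threshold"]

claim = issue_age_claim(date(2005, 6, 1), 13, date(2026, 1, 1))
assert verify_age_claim(claim)   # platform accepts the claim
assert "birthdate" not in claim  # and never received the date of birth
```

The data-minimization property is structural: the verifier cannot log what it was never sent, which is exactly the tension with "compliance logging requirements" the post raises.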

  20. Collect what you need. Delete what you don't. Respect privacy by default.
    That's how you build trust that lasts.
    #DataMinimization #PrivacyByDesign #BusinessEthics

  21. My advice to everyone: always retain as little data as possible, never ever overshare, and always compartmentalize¹. 🔒🧩 #PrivacyTips #DontOvershare #Compartmentalize #DataMinimization

    ¹ In information security, compartmentalization limits access to sensitive data to only those with a "need to know," dividing information into isolated segments to minimize breach impact. If one compartment is compromised—say, by an insider or cyberattack—attackers can't easily access others, reducing overall risk.

  26. 📝 WhatsApp, metadata and privacy: when the problem is not the content but the context

    Two studies reveal WhatsApp metadata vulnerabilities: 3.5 billion accounts enumerated and device fingerprinting. Analysis of risks and open source alternatives such as XMPP and Matrix.

    🔗 nicfab.eu/en/posts/whatsapp-me

    #PrivacyByDesign #DataMinimization #DataProtection #DigitalRights #BugBounty

  31. Privacy First, Security Always: The Only Sane Default

    “Privacy first, security always” is either a real principle or marketing wallpaper.

    People can smell the difference now. Not because everyone became a cryptography nerd overnight, but because the consequences turned personal. Accounts get drained. Identities get cloned. A harmless preference turns into a predictive profile. Then a company calls it “personalization” and expects gratitude.

    I keep coming back to a simple line: if a system cannot respect boundaries, it does not deserve trust.

    The quiet theft is not the breach. It is the business model

    Security failures arrive with sirens. Privacy failures arrive with a checkbox.

    Teams hide the most invasive defaults behind consent banners, vague policies, and settings buried three menus deep. That is why privacy first has to be architectural. If your product needs intimate data to function, the relationship starts compromised and every debate becomes about permission instead of necessity.

    A practical test helps.

    Picture your product landing on the desk of a skeptical customer who has already been burned. They ask one question: “Why do you need this data?”

    A hand-wavy answer like “we might use it later” reveals the truth. You are not building a service. You are building a warehouse.

    Privacy first means you design so the system does not need to know everything about someone in order to work.

    Security always is not paranoia. It is respect for entropy

    Security is not a feature you bolt on. Security is the discipline you practice.

    Most compromises are not clever or dramatic. Routine mistakes create them: misconfigurations, over-permissioned accounts, leaked secrets, and unpatched dependencies.

    Permissions sprawl until nobody can map them. Teams ship misconfigurations. Secrets leak because nobody rotates them. Dependencies drag risk into your product like barnacles. Backups fail the one day you need them. Logs exist but never tell a story.

    Security always means you assume failure will happen and you engineer the impact down to something survivable.

    That mindset can sound pessimistic. In reality, it respects entropy. Systems decay, incentives shift, and people make mistakes. Entropy does not care about your roadmap.

    The practical blueprint: collect less, separate, prove

    I like frameworks when they sharpen thinking and do not become religious scrolls. The simplest operating model I trust looks like this.

    1) Collect less

    Collect only what you can defend in one sentence to a skeptical user. Not to your lawyer. To your user.

    Reduce identity where you can. Prefer short-lived identifiers over permanent ones. Process locally whenever it makes sense.

    A privacy-first system does not brag about protecting your data. It quietly replies, “we never stored it.”
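
    The "short-lived identifiers" advice above can be sketched as a rotating pseudonym: an ID that is stable within one time window but unlinkable across windows once the old salt is deleted. The rotation scheme and names below are illustrative assumptions, not a prescribed design.

```python
import hashlib
import secrets

# One random salt per rotation period; deleting a period's salt makes its
# pseudonyms permanently unlinkable to the underlying account.
ROTATION_SALTS: dict[str, bytes] = {}

def short_lived_id(user_id: str, period: str) -> str:
    """Pseudonymous ID that changes every period (e.g. per day or session)."""
    salt = ROTATION_SALTS.setdefault(period, secrets.token_bytes(16))
    return hashlib.sha256(salt + user_id.encode()).hexdigest()[:16]

a = short_lived_id("alice@example.com", "2026-02-01")
b = short_lived_id("alice@example.com", "2026-02-01")
c = short_lived_id("alice@example.com", "2026-02-02")
assert a == b  # stable within the window: rate limiting and analytics still work
assert a != c  # unlinkable across windows: no long-term profile accumulates
```

    The design choice is the point: analytics that only need "same user within a session" never had to see a permanent identifier in the first place.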

    2) Separate what you must store

    Treat data like it can explode, because it can.

    Separate identifiers, content, metadata, and billing. Force access through clear boundaries. Encrypt sensitive fields at rest. Keep administrative power narrow and observable.

    Isolation is also cultural. Engineers should not casually browse production data. A company that must “look inside” to operate has built a fragile machine.
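
    A minimal sketch of the separation idea, with hypothetical in-memory stores standing in for separate databases: identifiers live in one store, sensitive content in another, joined only by a random token. A breach of either store alone reveals far less than one flat table would. A real deployment would also encrypt the content store at rest.

```python
import secrets

identity_store: dict[str, str] = {}  # token -> account identifier
content_store: dict[str, str] = {}   # token -> sensitive payload

def store_record(account: str, payload: str) -> str:
    token = secrets.token_hex(16)  # random join key, meaningless on its own
    identity_store[token] = account
    content_store[token] = payload
    return token

def read_record(token: str) -> tuple[str, str]:
    # Re-joining the halves crosses an explicit boundary: a natural place
    # to enforce authorization and write an audit entry.
    return identity_store[token], content_store[token]

t = store_record("alice@example.com", "blood type: O-")
assert read_record(t) == ("alice@example.com", "blood type: O-")
assert "alice@example.com" not in content_store.values()  # content store holds no identity
```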

    3) Prove what you did

    Logging is not glamorous. Auditability is not optional.

    Teams earn trust when they can show what happened, who accessed what, and why. If you cannot prove access, you do not control access.

    This is where “security always” stops being a vibe and becomes engineering.
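
    The auditability step can be sketched with a common tamper-evidence technique: a hash-chained access log, where each entry commits to the previous one, so silent edits to history break verification. Field names and the chaining scheme here are illustrative.

```python
import hashlib
import json
import time

log: list[dict] = []

def record_access(actor: str, resource: str, reason: str) -> None:
    """Append a who/what/why entry that commits to the previous entry."""
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"actor": actor, "resource": resource, "reason": reason,
             "ts": time.time(), "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify_chain() -> bool:
    """Recompute every hash; any edit or deletion breaks the chain."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

record_access("support-bot", "user:42/email", "ticket lookup")
record_access("admin:kay", "user:42/billing", "refund approval")
assert verify_chain()
log[0]["reason"] = "edited later"  # tampering with history...
assert not verify_chain()          # ...is detected
```

    "If you cannot prove access, you do not control access" becomes checkable: the chain either verifies or it does not.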

    Where AI changes the stakes

    AI increases the temptation to repurpose data. More data looks like more capability.

    That logic has a shadow.

    Once the data exists, incentives attack it from every angle. Governments demand it. Attackers leak it. Brokers sell it. Lawyers subpoena it. Insiders misuse it. Product teams pull it into models because it feels convenient.

    The old scandal playbook that turned personal information into political influence taught a brutal lesson. People do not hate being measured. People hate being manipulated.

    Privacy first, security always refuses to build manipulation pipelines by accident.

    The surveillance trade is a false bargain

    Leaders keep offering societies the same deal: give up a little privacy for a little security.

    The pitch sounds reasonable until you watch the pattern. Privacy leaves first. The promised security rarely arrives.

    Real security looks boring in practice. Patching, least privilege, planning for failure, and building systems that do not collapse when one component breaks define it.

    Mass surveillance does not deliver security. It delivers power.

    That matters if you care about liberal values, because agency needs a private interior. People who feel watched do not explore ideas. They perform. When performance replaces honesty, innovation dies quietly.

    What “privacy first, security always” looks like in real products

    It looks like choices that feel slightly harder in the short term and far cheaper in the long term.

    • End-to-end encryption where it actually matters, especially for private content.
    • Local-first or edge-first intelligence where feasible, so insights do not require central hoarding.
    • Clear data lifecycles: expiration by default, deletion that is real, retention that is justified.
    • User agency that is not performative: export, revoke, rotate, and leave.
    • Transparency that is specific: what is collected, why, where it goes, and how long it stays.

    Open source helps here, not as ideology, but as visibility. Opaque systems force trust to become faith. Visible systems let trust return to engineering.

    Shift: trust is becoming a business strategy again

    For years, growth came easiest to the companies that treated people as data sources. That era is wearing out, because distrust is becoming expensive.

    Customers ask better questions now. Teams tire of cleaning up preventable incidents. Regulators tighten expectations around data usage, especially when AI enters the picture. Investors learn that “move fast” turns expensive when you pay for the mess.

    The economics stay simple: trust costs less to build early than to buy back later.

    A line I like has stuck with me.

    “You don’t need to drive the car to influence the journey. Speak clearly, and the driver might begin to listen. Place a sign on the roadside, and someone behind you will see it. Offer a compass, and you guide even without steering.”

    Privacy first, security always is one of those signposts.

    A society that shrugs at surveillance becomes a society that cannot breathe. A company that shrugs at security becomes a company that cannot be trusted. The two failures reinforce each other.

    “Privacy first, security always” is the design stance that says: we do not need to own people to serve them.

    Build systems that deserve users.

    Call to action

    If you build products, pick one system this week and run a simple trust audit.

    Ask:

    • What personal data do we collect that we could remove?
    • What do we keep longer than we can justify?
    • Who can access sensitive data today, and how do we prove it?
    • Which dependency or vendor would hurt us most if it failed?
    • What would we tell users within 24 hours of a breach?

    If you find a gap, fix one thing. Small repairs compound.

    If this resonates, share the post with someone who ships software, and leave a comment with the hardest privacy or security tradeoff you are facing right now. I read them and I will reply.

    Key Takeaways

    • Privacy first means designing systems that respect user boundaries and don’t require excessive data.
    • Security always involves assuming failures will happen and engineering to minimize their impact.
    • The practical blueprint consists of collecting less data, separating necessary data, and proving access to it.
    • Privacy first, security always discourages manipulation and builds trust between users and companies.
    • Companies that prioritize trust will thrive as users demand better data practices and transparency.

    #AIGovernance #dataMinimization #digitalRights #encryption #Privacy #PrivacyByDesign #secureByDefault #security #trust #zeroTrust
  32. Privacy First, Security Always: The Only Sane Default

    Privacy first, security always” is either a real principle or it is marketing wallpaper.

    People can smell the difference now. Not because everyone became a cryptography nerd overnight, but because the consequences turned personal. Accounts get drained. Identities get cloned. A harmless preference turns into a predictive profile. Then a company calls it “personalization” and expects gratitude.

    I keep coming back to a simple line: if a system cannot respect boundaries, it does not deserve trust.

    The quiet theft is not the breach. It is the business model

    Security failures arrive with sirens. Privacy failures arrive with a checkbox.

    Teams hide the most invasive defaults behind consent banners, vague policies, and settings buried three menus deep. That is why privacy first has to be architectural. If your product needs intimate data to function, the relationship starts compromised and every debate becomes about permission instead of necessity.

    A practical test helps.

    Picture your product landing on the desk of a skeptical customer who has already been burned. They ask one question: “Why do you need this data?

    A hand-wavy answer like “we might use it later” reveals the truth. You are not building a service. You are building a warehouse.

    Privacy first means you design so the system does not need to know everything about someone in order to work.

    Security always is not paranoia. It is respect for entropy

    Security is not a feature you bolt on. Security is the discipline you practice.

    Most compromises are not clever or dramatic. Routine mistakes create them: misconfigurations, over-permissioned accounts, leaked secrets, and unpatched dependencies.

    Permissions sprawl until nobody can map them. Teams ship misconfigurations. Secrets leak because nobody rotates them. Dependencies drag risk into your product like barnacles. Backups fail the one day you need them. Logs exist but never tell a story.

    Security always means you assume failure will happen and you engineer the impact down to something survivable.

    That mindset can sound pessimistic. In reality, it respects entropy. Systems decay, incentives shift, and people make mistakes. Entropy does not care about your roadmap.

    The practical blueprint: collect less, separate, prove

    I like frameworks when they sharpen thinking and do not become religious scrolls. The simplest operating model I trust looks like this.

    1) Collect less

    Collect only what you can defend in one sentence to a skeptical user. Not to your lawyer. To your user.

    Reduce identity where you can. Prefer short-lived identifiers over permanent ones. Process locally whenever it makes sense.

    A privacy-first system does not brag about protecting your data. It quietly replies, “we never stored it.”

    2) Separate what you must store

    Treat data like it can explode, because it can.

    Separate identifiers, content, metadata, and billing. Force access through clear boundaries. Encrypt sensitive fields at rest. Keep administrative power narrow and observable.

    Isolation is also cultural. Engineers should not casually browse production data. A company that must “look inside” to operate has built a fragile machine.

    3) Prove what you did

    Logging is not glamorous. Auditability is not optional.

    Teams earn trust when they can show what happened, who accessed what, and why. If you cannot prove access, you do not control access.

    This is where “security always” stops being a vibe and becomes engineering.

    Where AI changes the stakes

    AI increases the temptation to repurpose data. More data looks like more capability.

    That logic has a shadow.

    Once the data exists, incentives attack it from every angle. Governments demand it. Attackers leak it. Brokers sell it. Lawyers subpoena it. Insiders misuse it. Product teams pull it into models because it feels convenient.

    The old scandal playbook that turned personal information into political influence taught a brutal lesson. People do not hate being measured. People hate being manipulated.

    Privacy first, security always refuses to build manipulation pipelines by accident.

    The surveillance trade is a false bargain

    Leaders keep offering societies the same deal: give up a little privacy for a little security.

    The pitch sounds reasonable until you watch the pattern. Privacy leaves first. The promised security rarely arrives.

    Real security looks boring in practice. Patching, least privilege, planning for failure, and building systems that do not collapse when one component breaks define it.

    Mass surveillance does not deliver security. It delivers power.

    That matters if you care about liberal values, because agency needs a private interior. People who feel watched do not explore ideas. They perform. When performance replaces honesty, innovation dies quietly.

    What “privacy first, security always” looks like in real products

    It looks like choices that feel slightly harder in the short term and far cheaper in the long term.

    • End-to-end encryption where it actually matters, especially for private content.
    • Local-first or edge-first intelligence where feasible, so insights do not require central hoarding.
    • Clear data lifecycles: expiration by default, deletion that is real, retention that is justified.
    • User agency that is not performative: export, revoke, rotate, and leave.
    • Transparency that is specific: what is collected, why, where it goes, and how long it stays.

    Open source helps here, not as ideology, but as visibility. Opaque systems force trust to become faith. Visible systems let trust return to engineering.

    Shift: trust is becoming a business strategy again

    For years, growth came easiest to the companies that treated people as data sources. That era is wearing out, because distrust is becoming expensive.

    Customers ask better questions now. Teams tire of cleaning up preventable incidents. Regulators tighten expectations around data usage, especially when AI enters the picture. Investors learn that “move fast” turns expensive when you pay for the mess.

    The economics stay simple: trust costs less to build early than to buy back later.

    A line I like has stuck with me.

    “You don’t need to drive the car to influence the journey. Speak clearly, and the driver might begin to listen. Place a sign on the roadside, and someone behind you will see it. Offer a compass, and you guide even without steering.”

    Privacy first, security always is one of those signposts.

    A society that shrugs at surveillance becomes a society that cannot breathe. A company that shrugs at security becomes a company that cannot be trusted. The two failures reinforce each other.

    “Privacy first, security always” is the design stance that says: we do not need to own people to serve them.

    Build systems that deserve users.

    Call to action

    If you build products, pick one system this week and run a simple trust audit.

    Ask:

    • What personal data do we collect that we could remove?
    • What do we keep longer than we can justify?
    • Who can access sensitive data today, and how do we prove it?
    • Which dependency or vendor would hurt us most if it failed?
    • What would we tell users within 24 hours of a breach?

    If you find a gap, fix one thing. Small repairs compound.

    If this resonates, share the post with someone who ships software, and leave a comment with the hardest privacy or security tradeoff you are facing right now. I read them and I will reply.

    Key Takeaways

    • Privacy first means designing systems that respect user boundaries and don’t require excessive data.
    • Security always involves assuming failures will happen and engineering to minimize their impact.
    • The practical blueprint consists of collecting less data, separating necessary data, and proving access to it.
    • Privacy first, security always discourages manipulation and builds trust between users and companies.
    • Companies that prioritize trust will thrive as users demand better data practices and transparency.
    #AIGovernance #dataMinimization #digitalRights #encryption #Privacy #PrivacyByDesign #secureByDefault #security #trust #zeroTrust
  33. Privacy First, Security Always: The Only Sane Default

    Privacy first, security always” is either a real principle or it is marketing wallpaper.

    People can smell the difference now. Not because everyone became a cryptography nerd overnight, but because the consequences turned personal. Accounts get drained. Identities get cloned. A harmless preference turns into a predictive profile. Then a company calls it “personalization” and expects gratitude.

    I keep coming back to a simple line: if a system cannot respect boundaries, it does not deserve trust.

    The quiet theft is not the breach. It is the business model

    Security failures arrive with sirens. Privacy failures arrive with a checkbox.

    Teams hide the most invasive defaults behind consent banners, vague policies, and settings buried three menus deep. That is why privacy first has to be architectural. If your product needs intimate data to function, the relationship starts compromised and every debate becomes about permission instead of necessity.

    A practical test helps.

    Picture your product landing on the desk of a skeptical customer who has already been burned. They ask one question: “Why do you need this data?”

    A hand-wavy answer like “we might use it later” reveals the truth. You are not building a service. You are building a warehouse.

    Privacy first means you design so the system does not need to know everything about someone in order to work.

    Security always is not paranoia. It is respect for entropy

    Security is not a feature you bolt on. Security is the discipline you practice.

    Most compromises are not clever or dramatic. Routine mistakes create them: misconfigurations, over-permissioned accounts, leaked secrets, and unpatched dependencies.

    Permissions sprawl until nobody can map them. Teams ship misconfigurations. Secrets leak because nobody rotates them. Dependencies drag risk into your product like barnacles. Backups fail the one day you need them. Logs exist but never tell a story.

    Security always means you assume failure will happen and you engineer the impact down to something survivable.

    That mindset can sound pessimistic. In reality, it respects entropy. Systems decay, incentives shift, and people make mistakes. Entropy does not care about your roadmap.

    The practical blueprint: collect less, separate, prove

    I like frameworks when they sharpen thinking and do not become religious scrolls. The simplest operating model I trust looks like this.

    1) Collect less

    Collect only what you can defend in one sentence to a skeptical user. Not to your lawyer. To your user.

    Reduce identity where you can. Prefer short-lived identifiers over permanent ones. Process locally whenever it makes sense.

    A privacy-first system does not brag about protecting your data. It quietly replies, “we never stored it.”
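
    As a minimal sketch of the "short-lived identifiers" idea (the 24-hour window, key handling, and function names are illustrative assumptions, not anything the post prescribes): derive a pseudonymous ID that is stable within a rotation window but cannot be linked across windows without the server-side key.

```python
import hashlib
import hmac
import time

# Illustrative only: rotate pseudonymous IDs daily instead of storing a
# permanent identifier. Within one window the ID is stable; across
# windows the IDs are unlinkable without the server-side key.
ROTATION_SECONDS = 24 * 60 * 60

def short_lived_id(user_id, key, now=None):
    window = int((time.time() if now is None else now) // ROTATION_SECONDS)
    msg = f"{user_id}:{window}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:16]

key = b"server-side secret, rotated on its own schedule"
same_window = short_lived_id("alice", key, now=1_000_000)
assert same_window == short_lived_id("alice", key, now=1_000_100)
assert same_window != short_lived_id("alice", key, now=1_000_000 + ROTATION_SECONDS)
```

    Rotating and discarding the key on its own schedule removes even the operator's ability to re-link old windows, which is the "we never stored it" posture in practice.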

    2) Separate what you must store

    Treat data like it can explode, because it can.

    Separate identifiers, content, metadata, and billing. Force access through clear boundaries. Encrypt sensitive fields at rest. Keep administrative power narrow and observable.

    Isolation is also cultural. Engineers should not casually browse production data. A company that must “look inside” to operate has built a fragile machine.
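
    One way to picture that separation, as a hedged sketch (the in-memory dicts and method names are invented for illustration; a real system would put identity, content, and billing in separately access-controlled stores):

```python
import secrets

# Illustrative sketch: identifiers, content, and billing live in
# separate stores, joined only by an opaque token. A content query
# never touches the identity or billing store.
class SeparatedStore:
    def __init__(self):
        self._identity = {}   # token -> identifying fields
        self._content = {}    # token -> list of user content
        self._billing = {}    # token -> billing fields

    def create_user(self, email):
        token = secrets.token_hex(8)          # opaque join key
        self._identity[token] = {"email": email}
        return token

    def add_content(self, token, text):
        self._content.setdefault(token, []).append(text)

    def content_view(self, token):
        # Boundary: only the content store is consulted here.
        return list(self._content.get(token, []))

store = SeparatedStore()
t = store.create_user("alice@example.com")
store.add_content(t, "first note")
assert store.content_view(t) == ["first note"]
```

    The point is the boundary, not the container: compromise of the content store alone yields text keyed by opaque tokens, not identities.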

    3) Prove what you did

    Logging is not glamorous. Auditability is not optional.

    Teams earn trust when they can show what happened, who accessed what, and why. If you cannot prove access, you do not control access.

    This is where “security always” stops being a vibe and becomes engineering.
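
    A toy illustration of "if you cannot prove access, you do not control access", assuming a hash-chained audit log (the field names are invented; real deployments would use an append-only store or signed logs): each entry commits to the previous one, so a silent retroactive edit breaks the chain.

```python
import hashlib
import json

# Illustrative tamper-evident audit trail: each entry's hash covers its
# fields plus the previous entry's hash, so edits are detectable.
class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, actor, action, resource):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action,
                "resource": resource, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "resource", "prev")}
            ok = (e["prev"] == prev and e["hash"] == hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest())
            if not ok:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("svc-billing", "read", "invoice/42")
log.record("admin-jo", "export", "users/7")
assert log.verify()
log.entries[0]["actor"] = "someone-else"   # tampering
assert not log.verify()
```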

    Where AI changes the stakes

    AI increases the temptation to repurpose data. More data looks like more capability.

    That logic has a shadow.

    Once the data exists, incentives attack it from every angle. Governments demand it. Attackers leak it. Brokers sell it. Lawyers subpoena it. Insiders misuse it. Product teams pull it into models because it feels convenient.

    The old scandal playbook that turned personal information into political influence taught a brutal lesson. People do not hate being measured. People hate being manipulated.

    Privacy first, security always refuses to build manipulation pipelines by accident.

    The surveillance trade is a false bargain

    Leaders keep offering societies the same deal: give up a little privacy for a little security.

    The pitch sounds reasonable until you watch the pattern. Privacy leaves first. The promised security rarely arrives.

    Real security looks boring in practice: patching, least privilege, planning for failure, and building systems that do not collapse when one component breaks.

    Mass surveillance does not deliver security. It delivers power.

    That matters if you care about liberal values, because agency needs a private interior. People who feel watched do not explore ideas. They perform. When performance replaces honesty, innovation dies quietly.

    What “privacy first, security always” looks like in real products

    It looks like choices that feel slightly harder in the short term and far cheaper in the long term.

    • End-to-end encryption where it actually matters, especially for private content.
    • Local-first or edge-first intelligence where feasible, so insights do not require central hoarding.
    • Clear data lifecycles: expiration by default, deletion that is real, retention that is justified.
    • User agency that is not performative: export, revoke, rotate, and leave.
    • Transparency that is specific: what is collected, why, where it goes, and how long it stays.
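
    The "expiration by default" bullet can be sketched as a store where every write carries a TTL and a sweep deletes anything past it (the 30-day default below is an invented placeholder, not a recommendation):

```python
import time

# Illustrative expiring store: records carry an expiry chosen at write
# time, and a periodic sweep deletes anything past it by default.
DEFAULT_TTL = 30 * 24 * 3600   # placeholder default; justify yours

class ExpiringStore:
    def __init__(self):
        self._rows = {}        # key -> (value, expires_at)

    def put(self, key, value, ttl=DEFAULT_TTL, now=None):
        now = time.time() if now is None else now
        self._rows[key] = (value, now + ttl)

    def sweep(self, now=None):
        now = time.time() if now is None else now
        for key in [k for k, (_, exp) in self._rows.items() if exp <= now]:
            del self._rows[key]

    def get(self, key):
        return self._rows.get(key, (None, None))[0]

s = ExpiringStore()
s.put("session", "abc", ttl=60, now=0)
s.sweep(now=30)
assert s.get("session") == "abc"   # still within TTL
s.sweep(now=61)
assert s.get("session") is None    # expired and removed
```

    The design choice is that deletion is the default path, and keeping data is the case that needs an argument.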

    Open source helps here, not as ideology, but as visibility. Opaque systems force trust to become faith. Visible systems let trust return to engineering.

    Shift: trust is becoming a business strategy again

    For years, growth came easiest to the companies that treated people as data sources. That era is wearing out, because distrust is becoming expensive.

    Customers ask better questions now. Teams tire of cleaning up preventable incidents. Regulators tighten expectations around data usage, especially when AI enters the picture. Investors learn that “move fast” turns expensive when you pay for the mess.

    The economics stay simple: trust costs less to build early than to buy back later.

    A line I like has stuck with me.

    “You don’t need to drive the car to influence the journey. Speak clearly, and the driver might begin to listen. Place a sign on the roadside, and someone behind you will see it. Offer a compass, and you guide even without steering.”

    Privacy first, security always is one of those signposts.

    A society that shrugs at surveillance becomes a society that cannot breathe. A company that shrugs at security becomes a company that cannot be trusted. The two failures reinforce each other.

    “Privacy first, security always” is the design stance that says: we do not need to own people to serve them.

    Build systems that deserve users.

    Call to action

    If you build products, pick one system this week and run a simple trust audit.

    Ask:

    • What personal data do we collect that we could remove?
    • What do we keep longer than we can justify?
    • Who can access sensitive data today, and how do we prove it?
    • Which dependency or vendor would hurt us most if it failed?
    • What would we tell users within 24 hours of a breach?

    If you find a gap, fix one thing. Small repairs compound.

    If this resonates, share the post with someone who ships software, and leave a comment with the hardest privacy or security tradeoff you are facing right now. I read them and I will reply.

    Key Takeaways

    • Privacy first means designing systems that respect user boundaries and don’t require excessive data.
    • Security always involves assuming failures will happen and engineering to minimize their impact.
    • The practical blueprint consists of collecting less data, separating necessary data, and proving access to it.
    • Privacy first, security always discourages manipulation and builds trust between users and companies.
    • Companies that prioritize trust will thrive as users demand better data practices and transparency.
    #AIGovernance #dataMinimization #digitalRights #encryption #Privacy #PrivacyByDesign #secureByDefault #security #trust #zeroTrust
  36. New Privacy Guides video 🎞️🪪
    by @jw:

    Age Verification represents an incredible threat to our privacy.

    Not only does Age Verification fail to protect children, but it could also mean the end of protective pseudonymity for everyone if implemented widely.

    Watch this excellent video created by Jordan here on PeerTube (based on my article on the same topic): neat.tube/w/aR4toTWJpcBZamUdQQ

    #PrivacyGuides #Privacy #AgeVerification #DataMinimization #Pseudonymity #PeerTube

    In case you feel falsely protected outside of Europe:

    Chat Control doesn't just concern Europeans. It concerns all of us.

    These kinds of regulations will come for all of us, everywhere, if we do not ALL push back against them, everywhere.

    If you do not understand how this is all intertwined, I invite you to read more privacy news and in-depth analysis. Because we must all support each other's privacy fights.

    Privacy is a human right 💚

    Fight for a better world, together ✊🌍

    #ChatControl #AgeVerification #DataMinimization #HumanRights #DigitalRights #Privacy #Encryption #E2EE #RootForE2EE 🎉

  46. If you store the data of others:

    1) you are responsible for protecting it,

    2) you are responsible for determining if harm could be caused if this data leaked,

    3) and you are responsible for deleting it properly once you no longer need to retain it, especially if it could cause harm.

    This is a moral obligation and, in many circumstances, can also be a legal one.

    #Privacy #DataMinimization #DataDeletion
