home.social

#secure-by-default — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #secure-by-default, aggregated by home.social.

  1. 🚀 Ah, the age-old tale of "Allocating on the Stack" with Go, where pressing "Enter" is the most exhilarating feature. 😂 Apparently, Go's biggest claim to fame is that it helps you "stay secure by default"—as long as you remember how to navigate an online dropdown menu. 🙄
    go.dev/blog/allocation-optimiz #AllocatingOnTheStack #GoProgramming #SecureByDefault #DropdownMenu #Humor #HackerNews #ngated

  2. Privacy First, Security Always: The Only Sane Default

    “Privacy first, security always” is either a real principle or it is marketing wallpaper.

    People can smell the difference now. Not because everyone became a cryptography nerd overnight, but because the consequences turned personal. Accounts get drained. Identities get cloned. A harmless preference turns into a predictive profile. Then a company calls it “personalization” and expects gratitude.

    I keep coming back to a simple line: if a system cannot respect boundaries, it does not deserve trust.

    The quiet theft is not the breach. It is the business model

    Security failures arrive with sirens. Privacy failures arrive with a checkbox.

    Teams hide the most invasive defaults behind consent banners, vague policies, and settings buried three menus deep. That is why privacy first has to be architectural. If your product needs intimate data to function, the relationship starts compromised and every debate becomes about permission instead of necessity.

    A practical test helps.

    Picture your product landing on the desk of a skeptical customer who has already been burned. They ask one question: “Why do you need this data?”

    A hand-wavy answer like “we might use it later” reveals the truth. You are not building a service. You are building a warehouse.

    Privacy first means you design so the system does not need to know everything about someone in order to work.

    Security always is not paranoia. It is respect for entropy

    Security is not a feature you bolt on. Security is the discipline you practice.

    Most compromises are not clever or dramatic. Routine mistakes create them: misconfigurations, over-permissioned accounts, leaked secrets, and unpatched dependencies.

    Permissions sprawl until nobody can map them. Teams ship misconfigurations. Secrets leak because nobody rotates them. Dependencies drag risk into your product like barnacles. Backups fail the one day you need them. Logs exist but never tell a story.

    Security always means you assume failure will happen and you engineer the impact down to something survivable.

    That mindset can sound pessimistic. In reality, it respects entropy. Systems decay, incentives shift, and people make mistakes. Entropy does not care about your roadmap.

    The practical blueprint: collect less, separate, prove

    I like frameworks when they sharpen thinking and do not become religious scrolls. The simplest operating model I trust looks like this.

    1) Collect less

    Collect only what you can defend in one sentence to a skeptical user. Not to your lawyer. To your user.

    Reduce identity where you can. Prefer short-lived identifiers over permanent ones. Process locally whenever it makes sense.

    A privacy-first system does not brag about protecting your data. It quietly replies, “we never stored it.”
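    The preference for short-lived identifiers over permanent ones can be sketched in a few lines. This is an illustrative pattern rather than anything from the post; the 15-minute TTL and the token format are arbitrary assumptions:

    ```python
    import secrets
    import time

    def issue_session_id(ttl_seconds: int = 900) -> dict:
        """Issue a random, short-lived identifier instead of reusing a
        permanent account ID in logs, URLs, or analytics."""
        return {
            "id": secrets.token_urlsafe(16),  # unguessable, meaningless on its own
            "expires_at": time.time() + ttl_seconds,
        }

    def is_valid(session: dict) -> bool:
        """A stale identifier is worthless to an attacker or a data broker."""
        return time.time() < session["expires_at"]
    ```

    The point is not the mechanism but the default: nothing downstream of `issue_session_id` ever learns a durable identity unless it genuinely needs one.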

    2) Separate what you must store

    Treat data like it can explode, because it can.

    Separate identifiers, content, metadata, and billing. Force access through clear boundaries. Encrypt sensitive fields at rest. Keep administrative power narrow and observable.

    Isolation is also cultural. Engineers should not casually browse production data. A company that must “look inside” to operate has built a fragile machine.
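    One common way to keep raw identifiers out of the content store is keyed pseudonymization: content rows carry only an HMAC of the user ID, and the key lives with a separate, narrowly scoped service. A minimal stdlib sketch of that idea, not the author's method; the key handling is deliberately simplified and a real deployment would keep the key in a secrets manager:

    ```python
    import hashlib
    import hmac

    # In production this key would live in a KMS or secrets manager, owned by
    # a service separate from the content database.
    PSEUDONYM_KEY = b"example-key-do-not-hardcode"

    def pseudonym(user_id: str) -> str:
        """Derive a stable pseudonym so the content store never sees raw IDs."""
        return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

    # The content table stores (pseudonym, content); only a narrow, audited
    # identity service can map a pseudonym back to a user.
    content_row = {"owner": pseudonym("user-42"), "body": "hello"}
    ```

    Crossing the boundary between the two stores then becomes an explicit, loggable event instead of a casual join.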

    3) Prove what you did

    Logging is not glamorous. Auditability is not optional.

    Teams earn trust when they can show what happened, who accessed what, and why. If you cannot prove access, you do not control access.

    This is where “security always” stops being a vibe and becomes engineering.
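    “If you cannot prove access, you do not control access” implies logs that are tamper-evident, not just present. One standard technique for that (my sketch, under the assumption of a simple in-memory list) is hash-chaining entries so that any retroactive edit breaks the chain:

    ```python
    import hashlib
    import json

    def append_entry(log: list, actor: str, action: str, resource: str) -> None:
        """Append an audit record chained to the previous entry's hash."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        entry = {"actor": actor, "action": action, "resource": resource, "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)

    def verify(log: list) -> bool:
        """Recompute the chain; editing any past entry invalidates it."""
        prev = "0" * 64
        for e in log:
            body = {k: e[k] for k in ("actor", "action", "resource", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
    ```

    Who accessed what, and why, becomes something you can demonstrate rather than assert.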

    Where AI changes the stakes

    AI increases the temptation to repurpose data. More data looks like more capability.

    That logic has a shadow.

    Once the data exists, incentives attack it from every angle. Governments demand it. Attackers leak it. Brokers sell it. Lawyers subpoena it. Insiders misuse it. Product teams pull it into models because it feels convenient.

    The old scandal playbook that turned personal information into political influence taught a brutal lesson. People do not hate being measured. People hate being manipulated.

    Privacy first, security always refuses to build manipulation pipelines by accident.

    The surveillance trade is a false bargain

    Leaders keep offering societies the same deal: give up a little privacy for a little security.

    The pitch sounds reasonable until you watch the pattern. Privacy leaves first. The promised security rarely arrives.

    Real security looks boring in practice. Patching, least privilege, planning for failure, and building systems that do not collapse when one component breaks define it.

    Mass surveillance does not deliver security. It delivers power.

    That matters if you care about liberal values, because agency needs a private interior. People who feel watched do not explore ideas. They perform. When performance replaces honesty, innovation dies quietly.

    What “privacy first, security always” looks like in real products

    It looks like choices that feel slightly harder in the short term and far cheaper in the long term.

    • End-to-end encryption where it actually matters, especially for private content.
    • Local-first or edge-first intelligence where feasible, so insights do not require central hoarding.
    • Clear data lifecycles: expiration by default, deletion that is real, retention that is justified.
    • User agency that is not performative: export, revoke, rotate, and leave.
    • Transparency that is specific: what is collected, why, where it goes, and how long it stays.
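    “Expiration by default” from the list above can be made concrete: every write carries a TTL unless someone explicitly justifies longer retention. A hypothetical sketch over a plain dict store; the field names and the 30-day default are my assumptions, not the post's:

    ```python
    import time

    DEFAULT_TTL = 30 * 24 * 3600  # 30 days unless retention is explicitly justified

    def store(records: dict, key: str, value, ttl: float = DEFAULT_TTL) -> None:
        """Every write carries an expiry; retention is opt-in, not opt-out."""
        records[key] = {"value": value, "expires_at": time.time() + ttl}

    def read(records: dict, key: str):
        """Reads treat expired data as already deleted."""
        rec = records.get(key)
        if rec is None or time.time() >= rec["expires_at"]:
            records.pop(key, None)  # deletion that is real, not just hidden
            return None
        return rec["value"]
    ```

    The design choice is that forgetting is the path of least resistance: keeping data forever requires someone to pass a non-default `ttl` and defend it.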

    Open source helps here, not as ideology, but as visibility. Opaque systems force trust to become faith. Visible systems let trust return to engineering.

    Shift: trust is becoming a business strategy again

    For years, growth came easiest to the companies that treated people as data sources. That era is wearing out, because distrust is becoming expensive.

    Customers ask better questions now. Teams tire of cleaning up preventable incidents. Regulators tighten expectations around data usage, especially when AI enters the picture. Investors learn that “move fast” turns expensive when you pay for the mess.

    The economics stay simple: trust costs less to build early than to buy back later.

    A line I like has stuck with me.

    “You don’t need to drive the car to influence the journey. Speak clearly, and the driver might begin to listen. Place a sign on the roadside, and someone behind you will see it. Offer a compass, and you guide even without steering.”

    Privacy first, security always is one of those signposts.

    A society that shrugs at surveillance becomes a society that cannot breathe. A company that shrugs at security becomes a company that cannot be trusted. The two failures reinforce each other.

    “Privacy first, security always” is the design stance that says: we do not need to own people to serve them.

    Build systems that deserve users.

    Call to action

    If you build products, pick one system this week and run a simple trust audit.

    Ask:

    • What personal data do we collect that we could remove?
    • What do we keep longer than we can justify?
    • Who can access sensitive data today, and how do we prove it?
    • Which dependency or vendor would hurt us most if it failed?
    • What would we tell users within 24 hours of a breach?

    If you find a gap, fix one thing. Small repairs compound.

    If this resonates, share the post with someone who ships software, and leave a comment with the hardest privacy or security tradeoff you are facing right now. I read them and I will reply.

    Key Takeaways

    • Privacy first means designing systems that respect user boundaries and don’t require excessive data.
    • Security always involves assuming failures will happen and engineering to minimize their impact.
    • The practical blueprint consists of collecting less data, separating necessary data, and proving access to it.
    • Privacy first, security always discourages manipulation and builds trust between users and companies.
    • Companies that prioritize trust will thrive as users demand better data practices and transparency.
    #AIGovernance #dataMinimization #digitalRights #encryption #Privacy #PrivacyByDesign #secureByDefault #security #trust #zeroTrust
  3. Microsoft Outlook: Enhanced search experience with Copilot in Classic Outlook

    #MC1176366

    • Users will see an entry point to the Copilot side pane after they execute a search.
    • The feature will be enabled by default; no admin action will be required to turn it on.
    • No admin controls are available to disable the feature; however, users must opt in to AI-generated content.

    Compliance considerations:
    No compliance considerations identified; review as appropriate for your organization.

    #SecureByDefault

  4. Is there at least one default setting anywhere in the #Microsoft universe that is actually #SecureByDefault?

  5. The feature-list of secureblue is a great resource to start your research about security concepts. secureblue.dev/features

    But do we really need a special GNU/Linux distribution that is secure? Let's go #SecureByDefault!

    #secureblue #linux #silverblue #fedora #cybersecurity #privacy

  6. For those looking for OpenBSD advocacy material, I will allow myself to push my own "What every IT person needs to know about OpenBSD", as a three part series on the APNIC blog starting with part one at blog.apnic.net/2021/10/28/open, also available in one big chunk without tracking at my webspace nxdomain.no/~peter/what_every_ or *with* trackers and slightly nicer formatting at bsdly.blogspot.com/2021/09/wha - enjoy!

    (also links therein) #openbsd #securebydefault #qualitysoftware

  7. Now that #OpenBSD 7.6 is out, a reprise of the daily life with our favorite operating system piece "You Have Installed OpenBSD. Now For The Daily Tasks." nxdomain.no/~peter/openbsd_ins (prettified, tracked bsdly.blogspot.com/2024/09/you) may be in order.

    Enjoy the #OpenBSD #freesoftware #securebydefault experience!

  8. "By default Deno allows importing sources from following hosts:

    deno.land
    esm.sh
    jsr.io
    cdn.jsdelivr.net
    raw.githubusercontent.com
    gist.githubusercontent.com"

    I'm loving #deno and it's #secureByDefault, but this particular setting feels weirdly permissive compared to the strictness of the rest of their permissions model.

    I guess the security sandbox is more about limiting what code can do, rather than what code gets executed.

    docs.deno.com/runtime/fundamen

  9. Hey #EntraID are you feelin' good?

    I guess this tenant is #SecureByDefault because I can't sign-in after I configured MFA on the account.

    Good stuff.

  10. Tailscale funnel will tell the whole world about your service through the certificate transparency log.

    I just discovered this after watching someone from a Russian IP identifying as "scanner.ducks.party" crawling my little test.

    I don't think @tailscale makes it clear at all that anything exposed with tailscale funnel is announced to everyone listening thanks to certificate transparency.
    A small warning when running tailscale funnel would be in order, because I very much did not expect anyone to find my little funnel. And I doubt others do either.

    @tannerprynn also noticed this already a while ago and did a bit of scanning to see what people are putting up. And it was mostly Plex and other hobbyist things. But Tailscale has since moved into the enterprise market, so I would guess there are a lot more "interesting" things being exposed nowadays.

    infosec.exchange/@tannerprynn/

    #tailscalefunnel #tailscale #psa #securebydefault

  11. Get onboard with MFA. If you're in the Microsoft ecosystem, every portal & every service leveraging Entra ID will ultimately mandate app-based authentication as the baseline for MFA. Not SMS.

    And certainly not "my complex password is better than MFA".
    #securebydefault #cybersecurity #Microsoft #mfa #msftadvocate

    Auto Rollout of Conditional Access Policies in Microsoft Entra ID - Microsoft Community Hub
    techcommunity.microsoft.com/t5

  12. My blog about what I really want from memory safety seems to be at the top of the front page of lobste.rs!

    #MemorySafety isn't the thing I'm trying to build with #CHERI, it's just something I need on the way to be able to get to the really interesting #SecureByDefault places.

    linkedin.com/pulse/i-dont-care

    LinkedIn pops up an annoying box, you can close it with the X in the top right.

  13. I keep thinking about this blog post, and what it means to compare developer expectations to how customers actually configure the product in the field. Field tests can provide fresh insights into deployment practices that can lead to safer products. #secureByDefault

    blog.thinkst.com/2023/08/defau

  14. My new post maps the new CISA et al guidance on security-by-design and by-default to my new book that is out now (and omg breaking news it's officially out!!!!): kellyshortridge.com/blog/posts

    the tl;dr is that if you want to understand more of the "why" but also learn the "how" to implement #SecureByDesign and #SecureByDefault in practice, read these chapters:
    * Chapter 3: Architecting & Designing
    * Chapter 4: Building & Delivering
    * Chapter 7: Platform #Resilience Engineering

  15. Jen Easterly, director of CISA, in an interview at CES 2023 urges software and product developers to build "secure-by-design" and "secure-by-default" products--because, among other things, "it's the right thing to do," as ultimately safety of the users is at stake. She explains how the current cybersecurity situation is not sustainable.

    "Leaders need to look at cyber risk as core business risk and their own responsibility."

    She also asks end users to "demand radical transparency" and "ask hard questions" of product developers.

    Excellent advice which, if followed, will have a long-lasting impact on cybersecurity.

    bloomberg.com/news/videos/2023

    #securebydesign #secureproducts #securebydefault