home.social

#classicalliberalism — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #classicalliberalism, aggregated by home.social.

  1. Founding Fathers of Liberalism

    John Locke (1632-1704)
    Montesquieu (1689-1755)
    David Hume (1711-1776)
    Adam Smith (1723-1790)
    Benjamin Constant (1767-1830)
    Jean-Baptiste Say (1767-1832)
    Wilhelm von Humboldt (1767-1835)
    Alexis de Tocqueville (1805-1859)
    Herbert Spencer (1820-1903)

    #liberalism #classicalliberalism

  2. AI Short Term Memory: Why Better Models Still Frustrate Us

    AI short term memory is the reason today’s models can feel sharp, helpful, even uncanny, and then suddenly feel inconsistent. Capability has improved fast. The reliability gap remains because continuity is still fragile.

    Anyone who has walked a dog will recognise the pattern. A dog can hold a goal for a moment. Heel. Wait. Cross. Focus stays locked in when the street is quiet and the routine is familiar. Then a new stimulus hits and the whole world resets around it. The plan you thought you were sharing disappears, not because the dog is “stupid,” but because attention is narrow and the present moment takes over.

    Modern AI behaves in a similar way. The model is excellent at what it can see right now. Outside that view, it forgets unless you engineer memory around it. For many people, that makes AI feel both powerful and annoying. The tool can generate a great answer, then lose a constraint you already clarified, or repeat a mistake you already fixed.

    This matters more than it seems. As AI becomes embedded in everyday workflows, AI forgetfulness becomes more than a mild irritation. It becomes a system design problem with social consequences. The future is not just smarter outputs. The future is reliable context, durable state, and accountable decisions.

    AI Memory Limits and the Context Window Problem

    Most frustration starts with a simple technical reality: large language models operate within a context window, which is the information they can use at a given moment. Inside that window, the model can reason, summarize, draft, and plan. Outside it, there is no stable long-term memory unless the product supplies one.

    That is why “it understood me five minutes ago” can be true and still end badly. The earlier information might no longer be present. The model cannot “remember” it unless it is reintroduced or stored in some persistent state.
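    A minimal sketch makes the mechanics concrete. The token budget and word-based counting below are simplifications assumed for illustration; real systems use proper tokenizers and far larger windows:

```python
# Minimal sketch of why early context disappears: a fixed token budget
# forces the oldest messages out of the window. Token cost is
# approximated by word count here; real tokenizers differ.

def fit_to_window(messages, budget=50):
    """Keep the most recent messages whose total cost fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget:
            break                       # older messages are dropped entirely
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

conversation = [
    "Constraint: never change the project deadline.",   # stated early on
    "Long discussion about architecture " * 10,         # filler in between
    "Please draft the status update.",                  # the latest request
]
visible = fit_to_window(conversation, budget=50)
# The early constraint no longer fits the budget, so the model never sees it,
# even though the user "already said it."
```

    The model is not choosing to forget the constraint; the constraint simply never reaches it.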

    People often interpret that as incompetence. The more accurate diagnosis is AI memory limits. Working memory is not the same thing as durable memory. A model can be highly capable while still being unreliable across multi-step tasks, especially when the task is long, complex, or full of constraints.

    This is also why AI can sound confident even when it is missing crucial context. Fluency is not evidence. A model can produce persuasive language while improvising. When the thread drops, the model often does not announce uncertainty. It fills gaps with whatever fits the current prompt and the statistical shape of likely text.

    That creates a specific kind of friction. Users end up acting as the memory layer. They repeat constraints. They restate goals. They paste context again. In practice, that turns “AI assistant” into “AI tool that needs constant reminders.”

    The near-future question is not whether models will improve. They will. The deeper question is whether AI systems will become trustworthy assistants or remain short-term intelligence with long-term consequences.

    AI Reliability Gap: Capability vs Continuity

    The improvement curve is real. Models follow instructions better than they used to. They reason more effectively. They handle nuance with fewer obvious errors. Yet the everyday experience can still feel brittle because the core problem is not raw intelligence. It is continuity.

    This is the AI reliability gap: the mismatch between what the model can do in a single moment and what you need it to do across time.

    Three frustrations tend to show up again and again.

    One is thread loss. The system forgets a boundary or a requirement and continues as if it never existed. That is the classic “you already told it, but it didn’t stick” feeling.

    Another is inconsistency. The system can produce a strong answer, then later contradict itself, not out of malice, but because different prompts pull it into different local interpretations. Without a stable state, the model is easily redirected by whatever is most salient in the current input.

    The third is confidence without accountability. Dogs get instant feedback from the leash. Humans get feedback from consequences. Most AI systems do not. They can be wrong with a steady tone and no immediate correction, which is why AI mistakes feel sharper than normal human error: the system sounds certain even when it is guessing.

    Those frustrations remain even as models improve because better capability does not automatically produce better reliability. Reliability comes from engineering: state management, verification, provenance, and the ability to recover when context shifts.

    Smarter text is not the finish line.

    Long Term Memory for AI and Why It Is Hard

    People talk about “AI memory” as if it is a single feature. In practice, there are multiple kinds of memory, and each one solves a different part of the problem.

    Working memory is what the model holds inside the current context window. This is where most models shine.

    Long-term memory for AI is durable context across sessions: preferences, project constraints, stable decisions, and the history that actually matters. This often needs explicit storage, not just longer chats.

    Provenance is memory with receipts: where claims came from, what sources were used, what the system relied on. Without provenance, it is hard to trust outputs in high stakes settings.

    Normative constraints are the system remembering what it should not do, even when a prompt tries to push it there. This includes safety, but also practical constraints like “do not change the goal” or “do not invent sources” or “do not ignore previously agreed requirements.”
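    These kinds of memory can be pictured as fields in a durable record plus a store the system queries. The schema below is a hypothetical sketch, not any product's actual design:

```python
# Hypothetical sketch of durable memory covering the kinds described above.
# Names and structure are illustrative, not a real product schema.
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    content: str                                      # the fact or decision itself
    sources: list = field(default_factory=list)       # provenance: where it came from
    constraints: list = field(default_factory=list)   # normative: what must not change
    session_id: str = ""                              # ties state across sessions

class MemoryStore:
    """Long-term memory the system can query; working memory is whatever
    the caller places back into the prompt."""
    def __init__(self):
        self.records = []

    def remember(self, record):
        self.records.append(record)

    def recall(self, keyword):
        # naive keyword retrieval for illustration; real systems use embeddings
        return [r for r in self.records if keyword.lower() in r.content.lower()]

store = MemoryStore()
store.remember(MemoryRecord(
    content="Project deadline is fixed at 2025-06-01.",
    sources=["kickoff meeting notes"],
    constraints=["do not change the goal"],
))
hits = store.recall("deadline")   # retrieved with provenance attached
```

    The point of the sketch is that provenance and constraints travel with the memory, so a retrieved fact arrives with its receipts.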

    Many AI products do working memory fairly well. The rest is uneven. Some tools store “memories,” but those memories can be noisy, incomplete, or hard to inspect. Some tools retrieve documents, but do not cite what they used. Some tools keep state, but state becomes a hidden layer the user cannot correct.

    That is why the system can still seem distractible under novelty, especially when new inputs pull attention away from the original goal.

    This is not a reason to give up on AI. It is a reason to stop pretending that intelligence alone solves the problem. The missing component is structured memory, along with the ability to edit, correct, and constrain it.

    Trustworthy AI Systems and the Futuristic Risk

    This is where futurism becomes practical.

    AI is moving from a writing assistant into an intermediary layer. It will book appointments, negotiate schedules, filter information, draft official messages, summarize meetings, recommend actions, and sometimes trigger actions automatically. That is delegated agency. It is the beginning of AI as an operating layer between you and the world.

    If that layer still has short-term-memory behaviour, small errors become structural.

    A missing detail becomes a wrong booking. A misread intent becomes a silent denial. A distorted summary becomes an inaccurate record. A confident hallucination becomes the official explanation that someone later treats as fact. The risk is not only dramatic failure. The risk is quiet normalization of machine-driven misunderstandings.

    A second risk is cultural. People adapt to the tool. They reduce nuance. They repeat themselves. They learn to phrase requests to avoid misfires. They start writing for the machine. Over time, that can flatten human thinking and shift agency away from the user toward the system’s preferred patterns.

    A third risk is soft control. Systems do not need to ban anything to shape behaviour. Defaults, friction, and selective summaries can steer people without visible coercion. A world of AI intermediaries that forget what matters can become a world where citizens are nudged by accident as often as by design.

    Trustworthy AI needs contestability, transparency, and reversibility. Without that, we get smooth tools that quietly degrade autonomy.

    Classical Liberalism, Human Agency, and Contestable Decisions

    Classical liberalism has an unusually practical message for the AI era. Individuals are moral agents. People deserve dignity, due process, and the ability to contest decisions that shape their lives.

    That should remain true even when software, not just a human being, is influencing the outcome.

    A system that mediates your options must support basic rights of the user:

    • Clear reasons, not opaque outcomes.
    • The ability to appeal or override.
    • The right to opt out without being punished.
    • Transparency about what the system knows and what it does not know.
    • Accountability for those who deploy it.

    Convenience is not a sufficient moral argument. Convenience can coexist with freedom, but it can also erode freedom when it replaces explanation with automation.

    This is not anti-technology. It is pro-human. A free society is not one where errors never happen. A free society is one where errors are correctable, power is constrained, and the individual is not treated as a passive input to an optimization engine.

    AI short term memory becomes political when systems act on people at scale. The fix is not panic or worship. The fix is design: make the system legible, make it contestable, make it accountable.

    Practical Design for AI Memory, Provenance, and Accountability

    The next leap is not only a better model. It is a better wrapper around the model.

    Reliable AI needs a few structural components:

    • Explicit project goals, with constraints stored rather than implied.
    • Retrieval of context from durable storage when needed, with the system showing what it retrieved.
    • Audit trails for important actions.
    • Undo and rollback when outcomes matter.
    • Clearly stated uncertainty when evidence is missing.
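    A sketch of such a wrapper, with stored constraints, an audit trail, and rollback. Every name here is an illustrative assumption, not a real API:

```python
# Hypothetical sketch of an accountable wrapper around a model:
# constraints are stored and inspectable, actions leave an audit trail,
# and outcomes that matter are reversible. Illustrative only.
import datetime

class AccountableAssistant:
    def __init__(self, constraints):
        self.constraints = list(constraints)  # explicit, stored, not implied
        self.audit_log = []                   # every important action leaves a trace
        self.actions = []

    def act(self, action, evidence):
        """Record what was done, what it relied on, and what was checked."""
        entry = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": action,
            "evidence": evidence,                       # provenance for the decision
            "constraints_checked": list(self.constraints),
        }
        self.audit_log.append(entry)
        self.actions.append(action)
        return entry

    def undo_last(self):
        """Rollback: the undo itself is also logged."""
        if self.actions:
            undone = self.actions.pop()
            self.audit_log.append({"action": f"undo: {undone}"})
            return undone

assistant = AccountableAssistant(constraints=["do not invent sources"])
assistant.act("draft summary", evidence=["meeting transcript, section 2"])
assistant.undo_last()   # the draft is withdrawn, and the reversal is recorded
```

    Nothing in the sketch makes the model smarter; it makes the system legible, which is the point.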

    This is the difference between vibes and structure.

    A system with accountable memory can say: here is what I used, here is what I assumed, here is what might be wrong, here is how to correct me. That is the foundation of trust.

    It also turns frustration into progress. Instead of repeating yourself, you update a stable set of constraints. Instead of arguing with the model, you correct the state. Instead of hoping the system “remembers,” you can point to what it stored.

    The dog analogy still holds. A good walk is not achieved by pretending squirrels do not exist. A good walk is achieved by cues, boundaries, and a relationship that can recover from distraction. AI will always have edge cases. A good AI system is one that can recover without dragging the user into constant babysitting.

    Tomorrow’s AI should not be a mind that forgets. It should be a tool that keeps receipts.

    Better Models Need Better Memory Design

    Models will continue to improve. That part is almost certain. Yet the most meaningful improvement in how AI feels day to day will come from reliable continuity.

    AI short term memory explains why the tool can feel brilliant and frustrating at the same time. The fix is not only smarter language. The fix is structured memory, provenance, and accountability, plus the right to contest and correct.

    If we build AI that respects human agency, it will expand what individuals can do without turning them into passengers. If we build AI that optimizes convenience while hiding its reasoning, we will end up in a world that feels smart, smooth, and quietly unfree.

    A dog can be distractible and still be a good companion. An AI can be powerful and still be unreliable. The future is not pretending otherwise. The future is designing systems that remember what matters, and that let humans stay in charge.

    AI short term memory has probably bitten you at least once. Drop the most annoying example in the comments. Was it thread loss, inconsistent answers, or confident guessing that cost you time?

    Could you do me a small favour and share this post? A like helps, a follow helps, but a share is what really gets the conversation in front of the right people.


    Key Takeaways

    • AI short term memory causes inconsistency, often leading to frustration and communication breakdowns.
    • Large language models operate within a context window, lacking stable long-term memory without specific engineering.
    • The AI reliability gap highlights the difference between a model’s capabilities at a moment and its continuity over time.
    • AI memory spans several kinds, including working memory, durable long-term memory, provenance, and normative constraints, each solving a different part of the problem.
    • Trustworthy AI requires structured memory and accountability, ensuring users can contest and correct decisions made by the system.

    #accountableAI #AIForgetfulness #AIMemoryLimits #AIReliability #AIShortTermMemory #classicalLiberalism #contextWindow #ethicsOfAI #futurism #humanAgency #philosophyOfTechnology #systemsDesign #trustworthyAI
  3. The video includes a brief explanation of different a priori truths behind true economic claims and proceeds to refute a series of counter-arguments against capitalism using those truths and derivations from those truths.

    youtube.com/watch?v=aKUeDDAN8mY

    #liberalism #classicalliberalism #libertarianism #rightlibertarianism #economics #austrianeconomics #capitalism #anticommunism #antisocialism

  5. A New Blog Direction, One Step at a Time

    Every so often a creator reaches a point where the work begins to shift. This is what has been happening with Vortex of a Digital Kind, and it is time to speak openly about the new blog direction. The journey here has never been linear. It has moved through phases of technical writing, futurism, storytelling, philosophy, privacy, and ethics. Each shift reflected the stage of life I was in, the tools I was building, and the questions I was trying to answer.

    What follows is not a reinvention but a refinement. It is a clearer writing direction, a more reliable publishing rhythm, and a structure that respects how people read. Above all, it keeps honesty at the centre.

    Why the New Blog Direction Matters

    This site began with technical deep dives into architecture, encryption, frameworks, and the early formation of Aethrix. Later the writing expanded into futurism because exploring what might come next helped me understand the present. When the world became louder and self-censorship felt necessary, storytelling offered a safer route for expressing difficult ideas without stepping into unnecessary conflict. In addition, fiction made it possible to speak truths at a slight angle.

    Eventually I shifted back to essays about technology, autonomy, digital dignity, and the responsibilities that come with building new systems. These pieces felt more grounded and more aligned with the world we are navigating. As a result, the time arrived for a stable writing direction that readers can rely on.

    This new content strategy is based on what has worked in practice, not on theories or trends. Because of this, the direction feels steady instead of experimental.

    A Posting Rhythm Shaped by Experience

    For a while my posts landed between Friday and Monday. There was no strategy behind it. That schedule simply matched the pockets of time I had available. Although it worked in the early stages, the analytics revealed a different story.

    Posts published at the end of the week tended to lose momentum. They were read, but they rarely gained traction or brought back returning visitors. Meanwhile, the mid-week posts performed far better. They stayed open longer, travelled further, and sparked more engagement.

    It became clear that the timing shaped outcomes more than expected. This pattern did not mean that weekends are universally poor for publishing. Instead, it showed that my readers were far more active during the week. Therefore the new publishing rhythm will fall between Monday and Thursday. In turn, the weekend becomes a pause for recovery rather than a place where essays disappear.

    This shift is grounded in real behaviour, which makes the new blog direction stronger and more sustainable.

    A Clear Weekly Structure for the New Blog Direction

    To make the writing easier to follow, the blog will adopt a simple weekly pattern that reflects natural attention cycles. This structure supports the new blog direction by giving each day a clear identity.

    Monday focuses on reflection to help reset the week.

    Tuesday explores the future and examines the direction technology might take.

    Wednesday turns to technical writing. It will include architecture decisions, frameworks, AI orchestration tools, peer-to-peer work, and updates on Aethrix.

    Thursday brings the ethical perspective, including privacy, autonomy, classical liberal values, and the human impact of technology.

    This structure gives the site a consistent pace. As a result, readers will know when to return and what type of idea will meet them. Furthermore, the rhythm supports creative balance throughout the week.

    The Return to Technical Writing

    The strongest performing posts on this site have always been the technical ones. They are read deeply, saved often, and revisited over time. Readers appreciate clarity and depth, and they often return to posts that explain the reasoning behind each design choice.

    Because of this, technical writing will become a stable part of the weekly pattern. It will remain conversational and human, with the goal of making complex systems understandable without losing their integrity. In addition, these posts help document the building process in a way that feels open and practical.

    This part of the new blog direction builds trust and supports the more philosophical pieces that sit alongside it.

    Ethics and Digital Dignity Remain the Foundation

    No matter how technical the work becomes, ethics and privacy stay at the centre. Technology shapes human agency and influences the choices people can realistically make. Good design can strengthen autonomy, while careless design can weaken it.

    These essays remain essential because they inform every architectural decision behind Aethrix and the AI tools I build. Digital dignity is not an abstract idea. It affects authentication models, encryption choices, decentralised identity, and the long-term move toward Web3 publishing. Consequently, the ethical perspective becomes the thread that ties everything together.

    The new blog direction simply puts these values into a clearer, more predictable structure.

    The Move Toward Web3 Publishing

    A decentralised version of this blog is already in development. Posts will eventually be published through a Web3 layer with verifiable authorship and decentralised storage. Although this work will take time, it aligns with the belief that ideas deserve independence from corporate platforms.

    This long-term direction will unfold gradually. It represents a commitment to autonomy and permanence, not a rushed experiment. Ultimately, the goal is to create a more resilient space for both writer and reader.

    Coming Up Next Week

    As the new blog direction settles in, it makes sense to share what is coming. A steady rhythm helps readers know when to visit and what to expect. Next week follows the new Monday to Thursday structure and marks the return of a familiar favourite.

    Monday: Reflection and Reset

    A short essay about agency and personal direction. It will explore how people can stay grounded when technology is constantly shaping their decisions and attention.

    Tuesday: Looking Ahead

    A piece on digital autonomy and the near future. This one examines how decentralised identity, secure communications, and federated systems are beginning to reshape everyday life.

    Wednesday: Technical Deep Dive – The PHP Framework Series Returns

    The PHP series is coming back. The new instalment will revisit the custom MVC framework with a fresh technical breakdown. This project was always about building something lightweight and uncluttered, using raw PHP without relying on heavy third-party libraries. The only external tools allowed in the stack are PHPUnit and PHPStan during development, and nothing else. The goal is a clean, understandable, high-performance framework that stays as close to native PHP as possible. This series was one of the most popular parts of the blog, and bringing it back fits naturally with the renewed focus on practical, hands-on engineering.

    Thursday: Ethics and Digital Dignity

    An essay about privacy as a form of empowerment. It will connect autonomy, classical liberal values, and the practical choices behind the tools we build. This post sets the tone for the ethical spine of the new blog direction.

    A More Honest Space, Built With Intention

    This new blog direction is not about chasing numbers. It is about creating a space that feels consistent, sincere, and worth returning to. Readers who care about privacy, philosophy, technical craftsmanship, and the future should feel at home here.

    The next phase blends structure with honesty. It keeps the original voice while giving it a clear framework that helps each idea carry further. Moreover, it allows each theme to develop with greater depth.

    If this new blog direction resonates with you, consider sharing the post or leaving a comment about what you would like to see explored in future weeks. Your perspective helps shape the ideas that follow, and every conversation strengthens the community forming around this space. By returning each week or passing the link to someone who might appreciate it, you help the work grow in a way that stays true to its purpose.

    #AethrixEngineering #AITools #autonomy #classicalLiberalism #decentralisedPublishing #digitalDignity #futurism #newBlogDirection #PHPFramework #Privacy #technicalWriting #Web3Blog
  6. The Militarised Libertarian Skinheads (M.L.S.) is a skinhead movement that combines military aesthetics with urban life. The M.L.S. movement advocates for the rights to life, liberty and property, as well as the right to possess and carry firearms, and supports open-source software.

    #liberalism #classicalliberalism #libertarianism #rightlibertarianism #anarchism #anarchocapitalism #skinhead

    #SiliconValley #Libertarianism #ClassicalLiberalism #PoliticalTheory #PoliticalPhilosophy: "If Silicon Valley thinkers are to take their political commitments to liberty and technological progress seriously, they need to acknowledge and deal with the contradictions in their ideological positions rather than papering over them. Rather than spinning out business models into unconvincing grand social theories, they ought start with good theories and think seriously about the implications for their business models.

    Here, they might usefully learn from a consistent line of reasoning in classical liberalism that they currently neglect. Eighteenth-century liber­als like Hume and the authors of The Federalist Papers were obsessed with the dangers of faction, and the need to channel it so that it did not overwhelm society. Their twentieth- and twenty-first-century heirs, like Ernest Gellner, Douglass North, and Barry Weingast, have adapted the tools of social science to understand the circumstances under which open societies can live and thrive despite, and sometimes thanks to, their internal contradictions.

    The lessons are straightforward, even if they jar painfully with some common myths in the Valley. Actual free markets require a state that is both powerful and constrained. Real technological progress is not solely generated by risk-taking entrepreneur-heroes in a social vacuum. It is also the contingent by-product of a fragile set of common social and political arrangements. Without constitutional constraints, voluntary in­teractions tend, as Silk Road did, to degenerate into gangster capitalism. And the trick of creating a vibrant open order is not to try to escape the sordid bargains of politics, or to eliminate your enemies, but to channel disagreement usefully."

    americanaffairsjournal.org/202

  8. Reclaiming #Democratic #ClassicalLiberalism

    ellerman.org/wp-content/upload

    #Liberalism is skeptical "about government[s] being able to “do good” for people. Instead" the state should "maintain the conditions for people to be empowered ... to do good for themselves, for example, in establishing ... private property prerequisites for ... a market economy as emphasized in ... economic [thought] (e.g., Heyne et al. 2006, pp. 36–38)."

    #liberal #PoliticalPhilosophy #philosophy #democracy #capitalism

    Gingrich proves that fash gets super into #history, and remains stuck in the morality of 19th century #ClassicalLiberalism.

    Their idea of “a more perfect union” was somewhere between 1820 (Missouri Compromise) to 1859 (vote for secession), until #AmericanWhigs and #RadicalRepublicans ruined everything.

    Their idea of a good time is ChatGPT, iPhones, and enslaved people serving sweet tea.

    @blogdiva @kroltanz

  10. @Mary625 @paul @lolgop

    Yes, it’s #ClassicalLiberalism of the 19th century.

    They’re mad about the 11th through 26th Amendments.

  11. “Do free work for narcissistic sociopaths, so that your customers become your friends” is the kind of take that would come from depths of the #ClassicalLiberalism #TechBros hive mind.

  12. I just realized that people who identify as aligning with #ClassicalLiberalism basically have the luxury of picking a point in time for their preferred ideology of #PoliticalEconomy: it’s somewhere in the 19th century.

    How convenient.

    #history #politics

  13. Putting the “Political” Back Into Adam Smith’s Political Economy: Smith on Power and Privilege, Faction and Fanaticism, and Corruption and Conspiracies

    A new paper from David M. Hart given at this year’s History of Economic Thought Society of Australia conference.

    davidmhart.com/liberty/Papers/
    h/t @mzwolinski

    #economics #politicaleconomy #intellectualhistory #classtheory #classicalliberalism #liberal

  14. Racists, apartheid apologists not welcome among liberals
    dailyfriend.co.za/2023/02/10/r

    (Today's column, with many comments answered. As if to prove my point, the most upvoted comment is by an apartheid apologist.)

    #SouthAfrica #Apartheid #Racism #ClassicalLiberalism #IRR #TheDailyFriend #Comments #Moderation

  15. @witchescauldron
    It's good that you do that. We run various tests ourselves. A lot of kids have no idea what is going on. That's okay because some are only teenagers or in their early twenties, but some of them can vote, so you have to try to instill some sense of responsibility.

    #Keynesianism as a response to #priceStickiness in #classicalLiberalism was a mistake. Breaking up the #monopolies that aggregate immense wealth into few hands would've been better. A bit of that happened, iirc, but more was needed.