home.social

Oskar Dudycz 🇺🇦✊

  1. Btw. It's the 101st closed Pull Request in #Pongo. Chef kiss 👌😎

  2. See what I just merged! 🎉🐶 There are a few steps left, but hey! The first working version of #SQLite support in #Pongo landed in the main branch!

    github.com/event-driven-io/Pon

  3. #Pongo just reached 1300 🌟 on GitHub! Nice!

    If you’d like to match #MongoDB accessibility with #PostgreSQL powers - try it!

    github.com/event-driven-io/Pon

  4. Sneaky code bites back. I realised that again recently.

    I realised I'd been writing terrible code when I couldn't explain it to myself.

    The code worked. It was adding SQLite support to Pongo. But when I tried to describe how I was implementing multi-database support, I heard myself saying:

    "First it checks if the Promise is cached, then it creates a proxy that defers to the real implementation which doesn't exist yet, but will be loaded dynamically when..."

    Insane, I was going insane.
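
    To make it concrete, here's a hypothetical TypeScript sketch of that kind of cleverness (names made up, not Pongo's actual code): a cached Promise plus a wrapper deferring every call to an implementation that doesn't exist yet.

```typescript
// NOT Pongo's real code - a made-up illustration of the lazy-loading pattern.
type Db = { find: (id: string) => string };

let cachedDb: Promise<Db> | undefined;

const loadDb = (): Promise<Db> => {
  // "First it checks if the Promise is cached..."
  if (!cachedDb) {
    // "...the real implementation which doesn't exist yet, but will be
    // loaded dynamically when..."
    cachedDb = Promise.resolve({ find: (id) => `doc-${id}` });
  }
  return cachedDb;
};

// "...then it creates a proxy that defers to the real implementation..."
const db = {
  find: async (id: string) => (await loadDb()).find(id),
};

db.find('42').then(console.log); // prints "doc-42", through three layers of indirection
```

    Each layer is small on its own; explaining all of them at once is where it falls apart.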

    It's a pattern we all fall into.

    We see repetition and think, "I can abstract this."

    We see explicit configuration and think, "I can infer this." We see upfront costs and think, "I can defer this."

    Sometimes we just don't trust our colleagues, thinking they're incompetent and that we need to take precautionary steps.

    Sometimes we're right. Most often we’re not.

    Too often, we're optimising the wrong metric. I was optimising for fewer imports, not simpler code. I was hiding essential configuration, not accidental complexity.

    A minimal API with complex implementation isn't simple—it's a lie. The complexity doesn't disappear. It moves from the visible surface to hidden internals, where it's harder to understand, debug, and modify.

    I got too clever. We all do sometimes. The key is catching it before it escapes into production, where it becomes someone else's nightmare.

    Now I'm removing this code, and that's fine.

    Want to read details? Check the latest #ArchitectureWeekly architecture-weekly.com/p/snea

    And tell me about your sneakiness.

    How hard did it bite you?

  5. Me ATM fighting with #TypeScript generics while reshaping the #Pongo API 😅

  6. Do you want to learn more about cool #PostgreSQL features like partitioning and logical replication? Want to also know the dos and don'ts from my experience?

    After receiving a ton of questions on last week's Particular Software webinar on #PostgreSQL Superpowers, I did a Q&A, discussing those topics (and more!)

    Read it in the latest #ArchitectureWeekly: architecture-weekly.com/p/post

    You'll also find out what I answered to the question:

    "What misuse of PostgreSQL (feature or in general) has turned out to be useful?"!

    Feedback is welcome, and thanks again, @danielmarbach for the invitation!

  7. “Just use #SQL”, they say.

    “No need for ORMs”, they add.

    And they may be right, but… But then accidental complexity piles in.

    "Just" is enough for a simple or explicit context. Eventually, as our system grows, we'll need to deal with complications. Popular tools might not be perfect; they may be too heavy, depending on design decisions, but as Gerald Weinberg said:

    "Things are the way they are because they got that way"

    You can live "just" with something. This can be a good starting point. Yet, when deciding between DIY and off-the-shelf solutions, we should always consider where we need to land and what our main problem to solve is.

    Because if it appears that we’ll need more advanced features, then we’ll follow the path similar to the one that people building popular libraries/tools have taken.

    It's always worth thinking whether "just" will be enough for us. Usually, "just" is enough only for some time. And it is better not to miss the moment to change our approach, as otherwise we end up dealing with the accidental complexity of a wrongly weighted "just".

    What does a "wrongly weighted just" look like? Check my latest #ArchitectureWeekly edition

    architecture-weekly.com/p/just

    What are your "wrongly weighted just" horror stories?

  8. Vertical Slices in software architecture are pictured right now as the best thing since sliced bread.

    I won't try to hide that I like it. I've written about CQRS and Vertical Slices over the years - how to slice the codebase effectively, showing examples and explaining why generic doesn't mean simple - yet…

    I still get questions about Vertical Slices Architecture (VSA).

    After a recent Discord discussion, I want to share in the latest #ArchitectureWeekly some additional thoughts on how I see Vertical Slices Architecture, how it relates to CQRS, what the different slicing strategies are, and (of course) the tradeoffs.

    But also why we love discussing it, and how Semantic Diffusion impacts that.

    architecture-weekly.com/p/my-t

  9. What do compilers have to do with event-driven pipelines? At first glance? Nothing! But after thinking about the new design in #Emmett, I changed my mind!

    I wrote today on #ArchitectureWeekly about how building an event-driven pipeline led me to writing my own compiler. Oh well, a small one, a simple one, but still.

    Read also about the more surprising places, besides programming languages, where compilation happens and helps your applications!

    architecture-weekly.com/p/comp

  10. The current Open Source model assumes symmetry between all users, but... When the OSI insists cloud providers deserve equal treatment to individual developers, it forces projects into defensive positions.

    Then we hear:
    - Rug pull!
    - Open Source drama!
    - Yet another License change!

    And guess what, I also want to set a dual license for #Pongo and #Emmett. I want to do it in a transparent way, so I created a dedicated, public RFC for that: github.com/event-driven-io/emm.

    If you have some thoughts around it, please comment and share them with me.

    If you don't, then I think this RFC is still a decent way to learn about OSS licensing and why you should care about it. I tried to explain the options in a straightforward way, together with the background.

    Sharing is caring, so I'd appreciate resharing or tagging someone who may have experience to share 🙂

  11. If you prefer to read, that's fine - check the #Emmett getting started guide. I think that's a decent read, and not a typical boring piece of docs 🙂👌

    event-driven-io.github.io/emme

  12. Looking for something to watch during the weekend? What about a lighthearted intro to #EventSourcing m.youtube.com/watch?v=SDXdcymK ? 😎

    It should give you a practical start to the most important concepts and also a good chance to learn #TypeScript modelling and how #Emmett can help! 🙂

  13. 👋 Folks, I need your help. I'm working on the Workflow Engine for business process orchestration in #Emmett, and I want to build it in a transparent, community-driven way.

    I prepared an RFC and I'm counting on your feedback and questions! github.com/event-driven-io/emm ❤️

  14. Most of the time, we discuss strong consistency and eventual consistency.

    Most systems prefer to have strong consistency. You perform the operation, wait till it’s finished and then proceed.

    Take adding attachments to your business processes as an example. We could handle uploads synchronously. User selects file → browser uploads to server → server stores file → server creates database record → server returns success.

    Simple, consistent, slow.

    This creates problems:
    1. Large files block the UI for minutes.
    2. Network interruptions force complete restarts.
    3. Servers become bottlenecks, streaming every byte through memory.
    4. Multi-gigabyte files can crash servers.

    Luckily, we learned that sometimes we have to live with an additional delay. Still, we should not stop there, as it's a bit trickier than "a small delay".

    We usually assume that stuff will happen in a particular order, we just don’t know precisely when. And that’s actually something that’s called Causal Consistency. The difference between Eventual and Causal consistency is:

    Eventual consistency - The system eventually reaches a consistent state. Order of operations doesn't matter. Create a link, then upload the file, or upload the file, then create the link - the result is the same. The system tolerates temporary inconsistency.

    Causal consistency - Operations must respect cause and effect. You can't comment on a document before it exists. Effects follow causes.

    File uploads fit eventual consistency perfectly. When attaching a blueprint to a construction task, two things happen: storing the file and linking it to the task. The order is irrelevant. What matters is that both are completed eventually.

    With eventual consistency for file uploads, we can show files as "attached" immediately. The actual upload happens in the background. Of course, we won’t be able to download them until the upload is finished, but the system continues working during the brief inconsistency.
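
    Here's a minimal TypeScript sketch of that flow (all names are hypothetical): the attachment is visible immediately, while the upload completes in the background.

```typescript
// Hypothetical sketch: eventual consistency for file attachments.
type Attachment = { fileName: string; status: 'uploading' | 'available' };

const attachments = new Map<string, Attachment>();

// 1. Link the file to the task right away - the user sees it as attached.
const attachFile = (id: string, fileName: string) =>
  attachments.set(id, { fileName, status: 'uploading' });

// 2. The upload finishes in the background; the order of both steps
//    doesn't matter, which is exactly what makes it eventually consistent.
const markUploadFinished = (id: string) => {
  const attachment = attachments.get(id);
  if (attachment) attachment.status = 'available';
};

attachFile('task-1', 'blueprint.pdf');
console.log(attachments.get('task-1')?.status); // "uploading" - visible, not yet downloadable
markUploadFinished('task-1');
console.log(attachments.get('task-1')?.status); // "available"
```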

    And that's also what I discussed today in detail in the latest #ArchitectureWeekly.

    I did a follow-up to last week's article about inverting dependency, this time expanding on how predictable ids can also help you beat eventual consistency.

    Read more: architecture-weekly.com/p/deal

  15. Folks, the new docs section on writing read models in #Emmett just arrived! Check it while it's fresh, and tell me how you like it!

    event-driven-io.github.io/emme

    It's a first draft - more needs to be documented, but baby steps 🙂

    We'll get there!

  16. In distributed systems, we face a fundamental tension between module communication and module autonomy. Our modules need to exchange information and coordinate actions, yet we want each module to evolve independently without forcing changes across the entire system.

    This tension becomes evident in event-driven architectures, where modules communicate through events and messages.

    Consider a typical e-commerce platform. The payment module needs to process requests from orders, handle reimbursements, manage subscriptions, and potentially deal with dozens of other payment scenarios.

    Each connection point between modules represents a potential coupling that can ripple through the system when requirements change.

    The traditional approach involves modules directly referencing each other through event types, API endpoints, or shared data structures.

    This creates a web of dependencies where adding a new module or changing an existing one requires coordinated updates across multiple teams.

    What if we could enable modules to communicate without knowing about each other's existence?

    What if a payment module could process requests from any source without being programmed to handle specific scenarios?

    I've explored in the latest #ArchitectureWeekly how predictable identifiers, specifically Uniform Resource Names (URNs), provide an elegant solution to this challenge.

    The key insight is that identity can carry meaning without creating coupling. URNs provide a standardised way to structure this identity, enabling infrastructure-level routing while maintaining module independence.

    The payment module could process generic payment requests, inverting the dependency and publishing responses using the correlation ID for routing. It never needs to know whether a payment is for an order, reimbursement, or subscription.
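
    A small TypeScript sketch of the idea (the URN scheme and names are made up for illustration, not taken from the article):

```typescript
// Hypothetical URN scheme: "urn:<namespace>:<module>:<id>".
const parseUrn = (urn: string) => {
  const [scheme, namespace, moduleName, id] = urn.split(':');
  if (scheme !== 'urn') throw new Error(`Not a URN: ${urn}`);
  return { namespace, moduleName, id };
};

type PaymentRequest = { correlationId: string; amount: number };

// The payment module handles a generic request and carries the correlation
// id through untouched - it never inspects what the payment is for.
const processPayment = ({ correlationId, amount }: PaymentRequest) => ({
  type: 'PaymentProcessed' as const,
  correlationId,
  amount,
});

const response = processPayment({
  correlationId: 'urn:ecommerce:order:123',
  amount: 100,
});

// Routing infrastructure (not the payment module) resolves the target:
console.log(parseUrn(response.correlationId).moduleName); // "order"
```

    The coupling lives in the identifier's structure, which the infrastructure understands, not in the module's code.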

    Read more in architecture-weekly.com/p/pred

    Are you using such an approach in your systems?

  17. I've been facilitating the #EventStorming sessions for years now, and something keeps catching my attention.

    While teams naturally focus on mapping out those orange event sticky notes (the backbone of any EventStorming session), they often underestimate the power of two critical elements: Hot Spots and Notes.

    What I value most about Hot Spots and Notes isn't just how they improve workshops - it's how they change team culture around uncertainty.

    In technical discussions, we often feel pressure to know everything immediately.

    Hot Spots create permission to say, "I don't know yet" while still making progress. They transform uncertainties from conversation-killers into clearly defined next steps.

    I've seen teams evolve from hiding what they don't know to actively hunting for uncertainties as valuable information. Questions become assets rather than liabilities.

    If you're incorporating EventStorming into your toolkit, don't underestimate these powerful elements. They might not get as much attention as domain events, but in my experience, they often separate a productive modelling session from a frustrating stalemate.

    Read more in the latest #ArchitectureWeekly: architecture-weekly.com/p/the-

  18. I just closed the 200th Pull request in #Emmett!

    Plus #Emmett just passed 300 GitHub Stars! ⭐

    Nice milestones to cherish! Maybe it ain't much but it's honest work!

  19. Have you heard a surgeon say: "I won't sterilise my tools, as the patient won't let me"?

    I haven't, but I have heard multiple times: "Business won't let us add unit tests."

    As this grinds my gears, I decided to write my take on why we tell those lies to ourselves. I think that it's more about us avoiding accountability and dodging our duties than about mischievous business people.

    Read more in the latest #ArchitectureWeekly: architecture-weekly.com/p/busi

    What's your take on that?

  20. Object-oriented or relational? Why not both?

    For many years, we tried to fit business data into a normalised table structure. We used Object-Relational Mappers, which meant a constant battle over how to map ill-fitting models.

    Then document databases like MongoDB came along and gained traction.

    Still, many people wanted the guarantees they had in relational databases; they also wanted to reuse the muscle memory built around operations and other tooling.

    Now we have the choice, as we have the #JSONB data type implemented by #PostgreSQL, and later by MySQL and SQLite.

    The B in JSONB stands for binary. It looks like JSON, it quacks like JSON, but it's not JSON. And thanks to that, it's powerful.

    When you're storing JSON data in JSONB, it's parsed, tokenised, and stored in a tree-like structure. Both the types and the hierarchical structure are preserved, and thanks to that, you can index it and query it efficiently.
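
    You can see the difference between JSON text and a parsed tree even in plain TypeScript. Once parsed, formatting is gone, types are real, and duplicate keys collapse (JSONB also keeps only the last duplicate key, while plain JSON text would keep both):

```typescript
// JSONB stores the parsed, tokenised tree, not the original text.
// JSON.parse shows the same effect in miniature.
const rawJson = '{"price": 1.10, "price": 2.00, "inStock": true}';

const tree = JSON.parse(rawJson);

console.log(tree.price);          // 2 - duplicate keys collapse, last one wins
console.log(typeof tree.inStock); // "boolean" - a real type, not text
```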

    I'm super happy that in recent years, I have had the opportunity to use PostgreSQL and JSONB, first in Marten and now in Pongo. I haven't looked back. JSONB has its cons, but for most typical line-of-business applications, they're negligible.

    I finally wrote an intro in #ArchitectureWeekly about how JSONB works, check it, tell me how you liked it and share with your friends!

    And most importantly, play with it on your own 😊

    architecture-weekly.com/p/post

  21. The pendulum swung back! Now, the "modular monolith first" is the best advice. But is it?

    Every few years, our industry rallies around a new architectural approach that promises to solve all our problems. A few years ago, we all wanted to be micro and distributed. Microservices for the win!

    Now, we hear the “modular monolith first!”.

    It sounds great in theory but can fall short in practice. In practice, too often:
    - module boundaries erode,
    - resource isolation is absent,
    - promised "easy split" rarely happens,
    - promised ease of deployment isn't so easy, as CI/CD tooling is immature or absent.

    I was a bit fed up with the blanket advice to go monolith first and covered the stuff that, too often, is missed by the folks overselling this idea.

    Read more in the latest #ArchitectureWeekly: architecture-weekly.com/p/mono

    I’m not against monolith-first. Moreover, I agree with the general principles, but I wanted to debunk some myths around it.

    The modular monolith isn't a free lunch.

    It’s not precise to say:

    "If you cannot deal with a monolith, microservices won't work for you!".

    The actual phrase should be:

    "If you can’t deal with modularity, then neither monoliths nor microservices will work for you."

    If you select modulith-first as a way to cheat with a simplistic solution, it won't help you; you will probably end up with a big ball of mud.

    Monolith-first can give you a gradual process of reaching maturity, but you need to plan it and consider that the related DevOps process and tooling are not as simple and mature as often glossed over.

    So, if you’re considering whether to go with microservices-first or monolith-first, you may be thinking the wrong way.

    You should go Modular-First and then think about your deployment strategy.

    What are your thoughts?

  22. 💣 Boom here comes the big news! I made the #ArchitectureWeekly a fully free newsletter. Yes, all the past articles and videos are free. Why did I do it?

    architecture-weekly.com/p/whol

    I believe that all of that gives you more content than a lot of books or paid online courses.

    Why did I do it? Am I mad? What’s the catch?

    In past years, I delivered many unique materials. I also increased the number of subscribers, both paid and regular ones. Preparing the type of quality content that I try to deliver each week takes time and focus.

    I still want to do that and produce new articles, but I'm decreasing the pressure to do full-length deep dives each week, sometimes even on weekends. And I'm not sure about you, but I prefer to spend weekends with my family.

    The number of paid subscribers grew, but it's still insufficient to let me focus fully on Architecture Weekly. That's fine, as I have other work, such as consultancy and training, open source work, and contracts.

    I believe that making Architecture Weekly a free newsletter should free up some time for me, reduce the pressure I put on myself and let me focus on other stuff.

    I still plan to deliver similar content as I did, keeping the weekly cadence, maybe in a slightly different form, or changing it to every other week later on. We'll see.

    So if you worry, don't. I don't plan to shut it down, but to do it more for fun than profit. The quality should still be here, but maybe in a bit more lightweight form.

    I would appreciate it if you shared the news with your friends, so they could also benefit from those free materials. I believe they’re worth it.

    What are your thoughts?

  23. Folks, it's happening! I started to work on #SQLite support for #Pongo. Stay tuned, more to come soon! 🤟

    github.com/event-driven-io/Pon

  24. Look at that! I just took this screenshot of recent #Emmett Pull Requests. See the names of the authors. It makes me proud that the majority of them are not made by me.

    It's great that we've built such an engaged and quality community around #Emmett. It's a great validation of the concept around it 🙂

  25. So you want to break down a monolith? Read this first. The obvious advice is to use the Strangler Fig pattern and gradually replace the old code until it's fully brand new.

    Let me tell you something: your monolith will survive. Maybe partially, but it will. I haven't seen a complete monolith migration succeed.

    Most were left in the middle of transition after going far above the planned time for rewrite.

    Some finished the migration, but replaced the old legacy with the new legacy after rushing the deadline.

    Some didn't even manage those phases, as they sank the product and budget and were thrown away.

    So, the safe assumption is that some part of the old system will remain and continue running.

    After doing and observing numerous breaking-down-legacy-monolith operations, I've seen a clear pattern: teams fixated on architectural purity generally fail, while those focused on business value succeed.

    The most effective migrations aren't driven by trendy tech or eliminating monoliths completely. They start with specific business needs - faster delivery of high-value features, better resilience in critical components, or enabling team independence.

    Here's what I've found works consistently:
    - focus on the features where extraction makes sense business-wise,
    - define measurable business metrics before starting, not just vague technical goals,
    - be skeptical of customer feedback that isn't backed by behaviour data,
    - migrate in small vertical slices, learning and adjusting as you go,
    - and, of course, apply the Strangler Fig eventually.

    Most systems benefit from a pragmatic architectural mix. Some components genuinely need to be independent services, while others work perfectly in a monolith. Some may not fit perfectly but:
    - work well enough,
    - are changed rarely or
    - are not a priority for business,
    so why change them?

    Architecture should serve the business, not the other way around. Read the full detailed guide in the latest #ArchitectureWeekly.

    Also, share your failed and successful migration strategy with me!

    architecture-weekly.com/p/so-y

  26. 🎉 Small step, but big milestone for #Emmett, I just released the first version of async projections for #PostgreSQL!

    Check more in the release notes: github.com/event-driven-io/emm

  27. Let’s say it’s Friday. Not party Friday, but Black Friday. You’re working on a busy e-commerce system that handles thousands of orders per minute. Suddenly, the service responsible for billing processing crashes. Until it recovers, new orders are piling up. How do you resume processing after the service restart?

    Typically, you use the messaging system to accept incoming requests and then process them gradually. Messaging systems have durable storage capabilities to keep messages until they’re delivered. Kafka can even keep them longer with a defined retention policy.

    If we’re using Kafka, when the service restarts, it can resubscribe to the topic. But which messages should we process?

    One naive approach to ensure consistency might be reprocessing messages from the topic's earliest position. That might prevent missed events, but it could also lead to a massive backlog of replayed data—and a serious risk of double processing. Would you really want to read every message on the topic from the very beginning, risking duplicate charges and triggering actions that have already been handled? Wouldn’t it be better to pick up exactly where you left off, with minimal overhead and no guesswork about what you’ve already handled?

    Not surprisingly, that's what Kafka does: it has built-in offset tracking. It does it through an internal topic, storing each offset change as a separate message. Why does it do it this way? Wouldn't it be simpler to just store the latest position in a database?
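
    The idea itself fits in a few lines. Here's a simplified, in-memory TypeScript sketch (not Kafka's actual implementation) of committing an offset after processing and resuming from it:

```typescript
// Simplified sketch of offset tracking - in Kafka, the committed offset
// lives in the internal __consumer_offsets topic, not in a variable.
const topic = ['order-1', 'order-2', 'order-3', 'order-4'];
let committedOffset = 0;

const processBatch = (maxMessages: number): string[] => {
  const processed = topic.slice(committedOffset, committedOffset + maxMessages);
  committedOffset += processed.length; // commit after processing
  return processed;
};

console.log(processBatch(2)); // ["order-1", "order-2"]
// ...the service crashes and restarts here...
console.log(processBatch(2)); // ["order-3", "order-4"] - no replay from the beginning
```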

    That and more I discussed in the latest #ArchitectureWeekly.

    Offset tracking may seem like a detail, but understanding it can be critical for smooth message processing. Check it out to learn more about the internals, but also how other tools deal with it.

    As always, we take a specific tool and try to extend the discussion to the wider architecture level.

    See more in 👇 and tell me your thoughts!

    architecture-weekly.com/p/how-

  28. So I just closed the 150th PR in #Emmett. Nice milestone 🙂🎉

    Also, I recently did 2 releases driven by community changes.

    Steadily, step by step, I'm bootstrapping the project with a still small but vibrant community.

    Check more in: github.com/event-driven-io/emm

  29. We've all been there: trying to learn something new, only to find our old habits holding us back. We discussed today how our gut feelings about solving problems can sometimes be our own worst enemy.

    Every architectural decision is a trade-off. The trick isn't finding the perfect solution - it's understanding the trade-offs well enough to make informed decisions. Try implementing things in practice, build up some muscle memory, and then revisit whether your paranoia level is too high or too low.

    Our paranoia level isn't just about being careful - it's about understanding where that care is most needed. Looking at new technologies means finding the balance between respecting their power and not being afraid to use them.

    The patterns that worked before aren't wrong - they're just part of our journey to understanding when different approaches make sense.

    The key isn't to become less paranoid as you learn - it's to become more precisely paranoid. To know exactly which parts need your careful attention and which parts can flow naturally from your chosen patterns. That's how you grow not just in knowledge but in judgment.

    How do I calibrate my paranoia level? I wrote about it in detail in the latest #ArchitectureWeekly

    architecture-weekly.com/p/defi

    What's your paranoia level, and how do you deal with it?

  30. I promised the next #ArchitectureWeekly a deep dive into React/Tanstack Query. And here it is! architecture-weekly.com/p/reac

    I'm super happy that Tomasz Ducin agreed again to share his extensive look at Frontend Architecture and make it available for free.

    We explored whether a dedicated tool like React Query can prevent the usual headaches of handling server data, like:
    - stale info,
    - redundant fetching,
    - scattered “loading” flags.

    It was a lot of learning, all with hands-on explanations. I think the webinar is a good example of how we can evaluate a specific tool choice in the wider architecture context.

    Watch the full video, drop a comment, and let us know which topics you’d like us to tackle next. 🙂

    ❤️ for resharing to your friends!

  31. So I just casually merged a small Pull Request extending "a bit" my workshop exercises in #Java.

    github.com/oskardudycz/EventSo

    Wanna see how to implement event stores on top of #PostgreSQL or #Mongo? How to use #EventStoreDB? Why not!

    I’m going to have a few open and private workshops teaching #EventSourcing in practice. You can also join them or schedule one.

    Or, you can try to do the exercises as a self-paced kit, as I'm always open sourcing all of them. That's not the full workshop experience, but it can be good enough 🙂

  32. Just read such a nice e-mail on my phone! #Emmett got its first 🥉 sponsor. Thank you, Product Minds!

    If you want to take your applications back to the future with the event-driven approach, check Emmett. As you can see, it's already gaining trust in organisations 🙂

    See the project: github.com/event-driven-io/emm

    If you'd also like to help make my work sustainable and let me deliver more and better, you can become a sponsor.

    By that, you also get support from me on your journey. Help me help you 🙂

    See sponsors perks: github.com/sponsors/event-driv

    Resharing and liking this post also counts as help ❤️😅

  33. There’s a common misunderstanding that #Kafka “pushes” messages to consumers. In reality, it’s a pull-based system where each consumer actively requests data. This design choice means consumers control their own pace—handy for scaling out your application or gracefully handling temporary slowdowns. Instead of blindly pushing messages that might overwhelm one consumer, Kafka gives each consumer control over how much data it’s ready to receive.

    Under the hood, concepts like heartbeats, error codes, and session timeouts ensure consumers stay in sync with the broker, even if one goes offline or a network hiccup occurs.

    Heartbeats serve double duty: they confirm a consumer is alive and also act as a signal for rebalancing when something changes in the group.

    Rather than maintaining a separate notification channel, Kafka folds these “you need to rebalance!” alerts into its standard heartbeat responses—keeping the design both elegant and efficient.
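
    Here's a toy TypeScript model of that piggybacking (heavily simplified; real Kafka signals it with error codes such as REBALANCE_IN_PROGRESS in the heartbeat response):

```typescript
// Toy model: heartbeats double as liveness checks AND rebalance alerts.
type HeartbeatResponse =
  | { ok: true }
  | { ok: false; error: 'REBALANCE_IN_PROGRESS' };

const createGroupCoordinator = () => {
  let rebalancing = false;
  return {
    memberJoined: () => { rebalancing = true; },
    // The "you need to rebalance!" alert rides on the regular heartbeat:
    heartbeat: (): HeartbeatResponse =>
      rebalancing ? { ok: false, error: 'REBALANCE_IN_PROGRESS' } : { ok: true },
    rebalanceCompleted: () => { rebalancing = false; },
  };
};

const coordinator = createGroupCoordinator();
console.log(coordinator.heartbeat().ok); // true - all quiet
coordinator.memberJoined();              // a new consumer joins the group
console.log(coordinator.heartbeat().ok); // false - time to rebalance
```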

    When you dig deeper, you’ll see how partition assignments, group coordinators, and long polling all fit together to create a surprisingly flexible system.

    If you've ever wondered how that works, or why knowing cryptic consumer configs like fetch.max.wait.ms or session.timeout.ms can be important, or even make or break performance, my latest post unpacks these mechanics with hands-on examples.

    In new #ArchitectureWeekly edition, I explore the specifics with code snippets and real-world examples, covering session timeouts, error codes, partition assignments, and how each piece can make or break a data pipeline.

    That's the third part of my "Nerd sniping Kafka" series.

    Give it a read, and let me know what you think!

    architecture-weekly.com/p/unde

  34. Let's discuss today how Kafka handles message consumption! We'll start by explaining Consumer Groups and how they manage consumers and data distribution.

    Then, we'll explore partition assignments, fault tolerance, and the trade-offs involved in more detail. We'll wrap up with real-world implications of Kafka’s design, comparing it to other solutions and discussing whether it's always as great as pictured.

    Kafka's model is decent, but it can run into the following issues:

    1. Uneven Partition Loads and Processing Skew

    2. Processing Pauses During Rebalances

    3. Frequent Updates to Offset Storage

    4. Handling Workload Changes Inefficiently

    5. Rebalance Churn in Dynamic Environments
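
    Issue 1 is easy to reproduce even in a toy sketch. With key-based partitioning (the hash here is simplified to a character-code sum, not Kafka's actual murmur2), one hot key pins most of the traffic to a single partition:

```typescript
// Toy partitioner - real Kafka hashes keys with murmur2.
const partitionFor = (key: string, partitionCount: number): number => {
  const hash = key
    .split('')
    .reduce((acc, c) => acc + c.charCodeAt(0), 0);
  return hash % partitionCount;
};

// One hot customer produces most of the traffic...
const keys = ['vip-customer', 'vip-customer', 'vip-customer', 'small-shop'];
const load = [0, 0, 0];
for (const key of keys) load[partitionFor(key, 3)] += 1;

// ...so one partition gets 3 of the 4 messages, and the consumer owning it
// sweats while the others idle.
console.log(load);
```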

    That stuff can be ignored in the dev environment, but in production it can give you a serious headache.

    Want to learn more? Check the latest #ArchitectureWeekly and drop me a note on how you liked it!

    architecture-weekly.com/p/kafk

  35. @rdnt it's great that you're saying that, as that's why I created #Pongo: github.com/event-driven-io/Pon

    Together with #Emmett's PostgreSQL support, it already makes that possible (at least if you're in the Node.js space).

    See:
    - event-driven.io/en/emmett_proj
    - event-driven.io/en/projections

    And more about Pongo internals: event-driven.io/en/pongo_behin

  36. How to build #MongoDB Event Store? The neat part is you don't!

    Oh well, past me thought like that, but Alexander Lay-Calvert persuaded me to change my mind and did most of the work. We delivered #MongoDB storage, and it went surprisingly well. I wrote a detailed write-up on how to do it!

    There were many interesting challenges in how to make it consistent and performant, so I think that's an interesting read.

    I think it's a good guide if you're considering using #MongoDB as an event store. Surprisingly, I have had numerous discussions recently with people trying to do it.

    If you're considering using key-value databases like #DynamoDB and #CosmosDB, then this article can also outline the challenges and solutions.

    My first choice is still on #PostgreSQL, but I'm happy with the #MongoDB implementation in #Emmett.

    If #MongoDB is already part of your tech stack and the outlined article constraints are not deal-breakers, this approach can deliver a pragmatic, production-friendly solution that balances performance, simplicity, and developer familiarity.

    I'm not sure what took longer, delivering the implementation or writing this article. So I'd appreciate feedback and sharing it with your friends. ❤️

    event-driven.io/en/mongodb_eve

  37. "Event Streaming is not Event Sourcing!" is probably the article I'm linking the most from those that I wrote. It's part of my Don Quixote crusade to untangle those terms, as I've seen many significant architectural decisions made without realising those differences. And the consequences were severe.

    There's a skewed perspective conflating #EventSourcing with #EventStreaming.

    I know those terms sound similar. I know many people tell you that you can use Kafka as an event store, but...

    Event Sourcing is about making decisions, capturing their outcomes (so events) and using them to make further decisions (so events are the state).

    Event Streaming is about moving information from one place to another and integrating multiple components.

    Event stores are databases. They may have similar capabilities to Event Streaming solutions, but the focus is different:
    - event stores focus on consistency, durability and quality of data,
    - event streaming solutions (like #Kafka) focus on delivery, throughput and integration.

    So, to give a bold comparison, saying that Kafka is an event store is almost like saying that #RabbitMQ is a database.
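    To make "events are the state" concrete, here's a minimal sketch. The types and function names (`evolve`, `decide`) are illustrative only, not Emmett's actual API:

    ```typescript
    // Hypothetical event shapes, just to illustrate the idea.
    type BankEvent =
      | { type: "Deposited"; amount: number }
      | { type: "Withdrawn"; amount: number };

    // Event Sourcing: current state is derived by folding past events...
    const evolve = (balance: number, event: BankEvent): number =>
      event.type === "Deposited" ? balance + event.amount : balance - event.amount;

    // ...and decisions are made against that derived state, producing new events.
    const decide = (events: BankEvent[], amount: number): BankEvent => {
      const balance = events.reduce(evolve, 0);
      if (balance < amount) throw new Error("Insufficient funds");
      return { type: "Withdrawn", amount };
    };

    const history: BankEvent[] = [
      { type: "Deposited", amount: 100 },
      { type: "Withdrawn", amount: 30 },
    ];

    // The decision reads the full, consistent history - that's what an
    // event store must guarantee. A streaming consumer, by contrast, would
    // just forward each event onward; it doesn't need the whole history.
    const next = decide(history, 50);
    ```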

    I really like Kafka; I've been using it successfully alongside Event Sourcing tooling. That's why I believe it's important to know the difference and how to compose those tools instead of mixing them and getting a hangover.

    Check more in my article 👇

    event-driven.io/en/event_strea

  38. To my event-driven speaker friends. There's only one week left to apply to the #EventCentric conference. There are few conferences where you can show your event-driven face and go down the rabbit hole.

    The conference is a nice and safe space for discussion and showing your ideas, tools, and case studies. It's a sub-conference of @dddeu. Dates: June 4-5, 2025.

    This time located in Antwerp, the city with the longest underwater pedestrian tunnel I've seen! I was there twice and enjoyed it.

    I'm helping the organisers a bit, and I'd love to see talks around #EventSourcing and #EventStreaming, both modelling and applying event-driven ways in practice.

    Don't wait and send your proposals here: 2025.eventcentric.eu/cfp/

  39. Thinking of building “one #platform to rule them all”? You’re not alone, and it’s trickier than it seems. Right before Christmas, I wrote a post about this hot topic. As my thoughts spun up some good discussion, I expanded it in the latest #ArchitectureWeekly.

    I discussed why these big, central solutions usually peak after 1–2 years, then erode under patches, shifting priorities, and detachment from real business needs.

    I also brought insights from the 2024 #DORA Accelerate State of #DevOps report, plus stories from Spotify, Stack Overflow, and more.

    I see many of my clients trying to apply it, some succeeding more, some less, but all struggling. I’m not an expert in building internal platforms, but I was part of core teams shaping continuous integration and delivery processes, and I see many similarities.

    If you’re curious about my experience of staying sane and keeping a platform or core team on track (without it becoming that giant one-size-fits-all monster), check it out!

    architecture-weekly.com/p/thou

    And that’s a free post, so if you enjoyed it, feel free (pun intended) to share it with your friends!

    I would also love to hear your thoughts!

  40. Last month, I made a Black Friday “offer”, and the results surprised me. I gave a free 30-day trial to #ArchitectureWeekly.

    Anyone could subscribe, read all articles, watch all videos (over 30h) and decide whether to stay.

    I think it’s fair to old subscribers and a good option for new ones.

    What’s the surprising part? Out of 60 people who gave it a try, over 30 decided to stay with a paid subscription. That’s a lot; after that, I now have 197 paid subscribers.

    Super happy about the trust.

    I reshaped the format last year, hoping to make it sustainable. I’m still not fully there yet, but that brought me closer. And enough to continue trying.

    Thanks!

  41. It seems that you can give #ArchitectureWeekly as a gift. That’s probably not the best gift you can give for Christmas, but still better than socks or a photo frame, so there you have it architecture-weekly.com/subscr 🤷‍♂️🎅

  42. In the latest #ArchitectureWeekly, I wanted to be less techy and focus more on the soft part of being an architect. I uncovered some of my tricks for striking a balance between talking and not talking, and for using talking to achieve the desired outcome.

    I explained why I think that we should:
    1. Ask and listen more often than tell.
    2. Accept that it’s okay to agree to disagree.
    3. Split discussions on facts from discussions on feelings.
    4. Focus on similarities, not differences.
    5. Avoid jargon at all costs.

    2/

  43. This week, a surprising thing happened to me. In #ArchitectureWeekly, I try to share my perspective experience every week, but I don't like acting as an expert. That's why I am inviting special guests who know specific areas much better than I do. Sometimes, there are webinars, sometimes podcast-like interviews. What's the surprising part?

    In this week's episode with @hazelweakly, we discussed observability, and Hazel made us switch places and started interviewing me!

    1/

  44. That's why I'm happy that @hazelweakly agreed to record the #ArchitectureWeekly podcast episode and try to bridge those two perspectives.

    architecture-weekly.com/p/appl

    @hazelweakly is one of the best people in the observability space. I think that she doesn't need an introduction here, as one of the people who made the Hachyderm instance operable, safe, and community-driven.

    That's also where we started the episode, but we went much further, also beyond #OpenTelemetry!

    2/

  45. I’ll give you two numbers: 15 and 47. What are they? 15 is the number of hours left to get a 30-day FREE trial to all #ArchitectureWeekly content.

    47 is the number of people who decided to give it a go!

    So don’t wait, click here: architecture-weekly.com/blackf

    Read all articles, watch over 40h of video learning materials and decide whether you stay longer or just benefit from free knowledge!

    If you like it, share with your friends so they can also benefit! 🙂❤️

  46. Is today #BlackFriday? Yes, it is! Are you looking for decent content about #SoftwareArchitecture? Yes, you are! Do I have something for you? Yes, I do!

    You can get a 30-day free trial of my #ArchitectureWeekly newsletter by going to this magic link: architecture-weekly.com/blackf

    You get access to all paid content: articles and over 40 hours of video recordings. Try it and decide if you want to stay.

    👋 Feel invited to join!

    The offer lasts till the end of the week, so be quick!

  47. Have you considered applying observability but struggled to match the strategy with the tooling? Or maybe you were lost on how to do it? I have something for you, or actually a sneak peek of what will be in next Monday's #ArchitectureWeekly edition!

    I had a great discussion with @hazelweakly about Observability beyond Open Telemetry, keeping it "meaty" with practical thoughts. I learned a lot, and I'm sure you will, too!

    Subscribe and don't miss it: architecture-weekly.com/blackf! 😀

  48. In the latest #ArchitectureWeekly, I discussed deduplication strategies, assessed their usefulness, and considered scenarios where you may need them. I also did a reality check on messaging vendors' promises of exactly-once delivery.

    TLDR: They're broken.
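    As a taste, here's a sketch of one common strategy of this kind: an idempotent consumer that skips redelivered messages. The names are illustrative (not from the article), and a real implementation would persist processed ids durably, in the same transaction as the side effects, rather than in memory:

    ```typescript
    // Illustrative message shape - real brokers give you an id or offset.
    type Message = { id: string; payload: string };

    const processedIds = new Set<string>();
    const handled: string[] = [];

    // At-least-once delivery means redeliveries happen; dedupe by id.
    const handle = (message: Message): boolean => {
      if (processedIds.has(message.id)) return false; // duplicate: skip
      handled.push(message.payload); // the actual side effect
      processedIds.add(message.id);
      return true;
    };

    // The broker may redeliver after a timeout or consumer restart:
    handle({ id: "msg-1", payload: "order placed" });
    handle({ id: "msg-1", payload: "order placed" }); // ignored
    ```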

    architecture-weekly.com/p/dedu

    And hey, this is a free edition post, so no paywall this time! 😀 Fasten your seat belts, and grab a coffee, as this is a long read.

    Are you using deduplication in your messaging systems?