home.social

#governmentservices — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #governmentservices, aggregated by home.social.

  1. Associated Press: Callers to Washington state hotline press 2 for Spanish and get accented AI English instead. “For months, callers to the Washington state Department of Licensing who have requested automated service in Spanish have instead heard an AI voice speaking English in a strong Spanish accent. The agency has since apologized and says it fixed the problem.”

    https://rbfirehose.com/2026/03/01/associated-press-callers-to-washington-state-hotline-press-2-for-spanish-and-get-accented-ai-english-instead/
  2. State of Massachusetts: Governor Healey Announces Massachusetts to Become First State to Deploy ChatGPT Across Executive Branch. Uh. “Today, Governor Maura Healey announced the launch of the ChatGPT-powered Artificial Intelligence (AI) Assistant for the state’s workforce, with the goal of making government work better and faster for people…. Massachusetts will be the first state to adopt […]

    https://rbfirehose.com/2026/02/16/state-of-massachusetts-governor-healey-announces-massachusetts-to-become-first-state-to-deploy-chatgpt-across-executive-branch/
  3. ComputerWeekly: Large language models provide unreliable answers about public services, Open Data Institute finds. “Popular large language models (LLMs) are unable to provide reliable information about key public services such as health, taxes and benefits, the Open Data Institute (ODI) has found.”

    https://rbfirehose.com/2026/02/15/computerweekly-large-language-models-provide-unreliable-answers-about-public-services-open-data-institute-finds/
  4. What Does a Good Spec File Look Like?

    Most legacy government systems exist in a state of profound documentation poverty. The knowledge lives in the heads of retiring employees, in COBOL comments from 1987, in binders that may or may not reflect current behavior. Against this baseline, the question of what makes a “good” spec file takes on different dimensions than it might in greenfield development.

    Common Elements

    Any spec worth writing answers the fundamental question: what are we building and why? Beyond that, good specs share a few specific characteristics:

    Clear success criteria. Not just features, but how you’ll know the thing works. This matters especially when AI agents are generating implementations—they need something concrete to validate against.

    Constraints and boundaries. What’s out of scope. What technologies or patterns to use or avoid. Performance requirements. AI tools are prone to scope creep and assumption-making without explicit boundaries.

    Examples of expected behavior. Concrete inputs and outputs, edge cases, error states. These serve as both specification and implicit test cases.

    Context about the broader system. How this piece fits into what exists. AI assistants lack awareness of surrounding code and architectural decisions unless you tell them.
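    As a rough sketch, the common elements above could be captured in a machine-readable spec header. This is illustrative only; the field names are hypothetical, not part of any standard:

```python
from dataclasses import dataclass

# Hypothetical structure for the common spec elements described above;
# the field names are illustrative, not part of any standard.
@dataclass
class SpecHeader:
    purpose: str                 # what we are building and why
    success_criteria: list[str]  # concrete checks an implementation must pass
    out_of_scope: list[str]      # explicit boundaries to curb scope creep
    constraints: list[str]       # required/forbidden technologies, performance limits
    examples: list[dict]         # expected input/output pairs, edge cases, error states
    system_context: str          # how this piece fits the surrounding architecture

# A made-up example entry, to show the shape.
spec = SpecHeader(
    purpose="Recalculate benefit eligibility nightly",
    success_criteria=["Matches legacy output on the 2024 regression set"],
    out_of_scope=["Real-time recalculation"],
    constraints=["Batch window: 4 hours", "No direct mainframe writes"],
    examples=[{"input": {"income": 0}, "output": {"eligible": True}}],
    system_context="Runs downstream of the nightly case-file extract",
)
```

    Even a lightweight structure like this gives an AI agent something concrete to validate against, rather than free-form prose alone.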

    The SpecOps Context

    When modernizing legacy government systems, specs serve a different purpose than typical development documentation. They’re not just implementation guides—they are artifacts that preserve institutional knowledge. This changes what “good” looks like.

    A SpecOps specification document must work for multiple audiences simultaneously: domain experts who verify that the spec captures policy intent, software developers and AI coding agents who need precision to generate correct implementations, and future humans who need to understand why the system behaves a certain way years from now—possibly after everyone currently involved has moved on.

    That last audience is the one most spec formats neglect entirely.

    Three States, Not One

    Legacy system specs can’t just describe “what the system does.” They need to distinguish between:

    1. Current system behavior—what the legacy code actually does today, bugs and all
    2. Current policy requirements—what the system should do according to governing statutes and regulations
    3. Technical constraints—what the system cannot do regardless of policy, due to missing integrations or platform limitations

    These three things can be in alignment or tension at any moment. And that alignment can shift over time without the code changing—a policy update tomorrow can transform compliant behavior into a violation.
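    One way to make the three states concrete is to track them per requirement and check for tension mechanically. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: one requirement tracked across the three states
# distinguished above. Field and requirement names are hypothetical.
@dataclass
class RequirementState:
    requirement_id: str
    system_behavior: str              # what the legacy code actually does today
    policy_requirement: str           # what governing statute/regulation demands
    technical_constraint: Optional[str]  # what blocks compliance, if anything

    def in_tension(self) -> bool:
        # Alignment is not permanent: a policy update changes
        # policy_requirement without any code change.
        return self.system_behavior != self.policy_requirement

income_check = RequirementState(
    requirement_id="INC-001",
    system_behavior="Accepts self-reported income",
    policy_requirement="Verifies income against tax agency records",
    technical_constraint="No tax agency integration exists",
)
print(income_check.in_tension())  # → True
```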

    Known Deviation Patterns

    Consider a benefits system that should verify income against state tax agency records, but where the legacy system captures only self-reported income because the integration with the tax agency was never built. A good spec would make this explicit:

    Policy requirement: Per [directive], applicant income must be verified against tax agency records prior to benefit approval.

    Current implementation: Self-reported income only. Applicant provides income information on Form X.

    Deviation reason: No interface to tax agency income verification service exists. Integration requested in 2019, not funded.

    Modernization note: Modern implementation should include tax agency income verification integration.

    This surfaces the gap, documents why it exists, and gives the modernization effort clear direction—without pretending the legacy system does something it doesn’t.
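    The four-field deviation block above lends itself to a machine-readable record that can be rendered back into the prose form. A sketch, with a hypothetical schema:

```python
# Sketch of a machine-readable "known deviation" record matching the
# four fields above; the schema and values are hypothetical.
deviation = {
    "policy_requirement": "Applicant income must be verified against "
                          "tax agency records prior to benefit approval.",
    "current_implementation": "Self-reported income only (Form X).",
    "deviation_reason": "No tax agency interface; integration requested "
                        "in 2019, not funded.",
    "modernization_note": "Include tax agency income verification.",
}

def render(dev: dict) -> str:
    """Format a deviation record as the labeled prose block shown above."""
    labels = {
        "policy_requirement": "Policy requirement",
        "current_implementation": "Current implementation",
        "deviation_reason": "Deviation reason",
        "modernization_note": "Modernization note",
    }
    return "\n".join(f"{labels[k]}: {v}" for k, v in dev.items())

print(render(deviation))
```

    Keeping deviations as structured records makes them queryable — a modernization team can list every unfunded integration in one pass.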

    Explicit Ambiguity as a Feature

    There’s something almost radical about a methodology that says: write down what you don’t know. Traditional documentation tends to project false confidence, describing how things should work while quietly omitting the messy parts.

    A spec that explicitly marks areas of tension or uncertainty is more honest, more useful for risk assessment, and a better starting point for modernization. It’s an invitation for future clarification rather than a false endpoint.

    A spec with unresolved tension is better than no reviewable documentation at all.

    Policy Grounding

    Government system specs need explicit links to authorizing statutes, regulations, or directives. Not just “these items are excluded from income calculations” but “per 42 USC § 1382a, the following items are excluded from income calculations.”

    This is the why that survives personnel turnover. It’s what allows future teams to evaluate whether behavior that was correct five years ago still aligns with current policy.

    Decision Records

    When domain experts verify a spec, they make judgment calls—especially where legacy behavior diverges from current policy understanding. Those decisions need to be captured in the spec, not in a separate document that gets lost.

    The spec becomes the repository of institutional reasoning, not just institutional behavior.

    Accessible or Precise?

    The SpecOps approach says that specs should be “readable by domain experts while detailed enough to guide implementation.” This is genuinely hard.

    Options include stratified specs (plain-language summaries with expandable technical detail), executable specs (written as tests that are simultaneously human-readable and machine-verifiable), or annotated specs (a single verbose document where technical precision is explained inline).

    Given that the spec is meant to be the source of truth that outlasts implementations, keeping everything in one artifact—even at the cost of verbosity—reduces the risk of layers drifting apart over time.
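    The “executable spec” option can be sketched as a test that doubles as documentation: the docstring carries the plain-language requirement and citation, and the assertions are the machine-verifiable behavior. The function, exclusion list, and statute placeholder below are all illustrative:

```python
# Sketch of an "executable spec": a human-readable test that doubles as
# the specification. The function and exclusion list are hypothetical.
def excluded_from_income(item: str) -> bool:
    """Per [statute], certain items are excluded from income calculations.

    This docstring plus the assertions below ARE the spec: a domain
    expert can read them, and a machine can verify them.
    """
    # Hypothetical exclusion list; a real one would come from the statute.
    exclusions = {"tax refund", "loan proceeds", "one-time assistance"}
    return item in exclusions

def test_exclusions():
    # Each assertion is one concrete expected behavior from the spec.
    assert excluded_from_income("tax refund")
    assert excluded_from_income("loan proceeds")
    assert not excluded_from_income("wages")

test_exclusions()
print("executable spec checks pass")
```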

    The Road Ahead

    We’re still in early days. Questions remain open:

    • How granular should policy references be?
    • What’s the right way to represent known deviations?
    • How should specs age—versioning, or is git history enough?
    • What level of detail helps AI agents versus adding noise?

    These will get answered empirically as more agencies adopt the approach. The methodology will evolve. The important thing is to start—to surface questions that were previously invisible, to give future teams something to interrogate rather than nothing at all.

    Because the knowledge is what matters. Everything else is implementation details.

    #ai #artificialIntelligence #chatgpt #governmentServices #legacySystems #systemModernization

  8. What Does a Good Spec File Look Like?

    Most legacy government systems exist in a state of profound documentation poverty. The knowledge lives in the heads of retiring employees, in COBOL comments from 1987, in binders that may or may not reflect current behavior. Against this baseline, the question of what makes a “good” spec file takes on different dimensions than it might in greenfield development.

    Common Elements

    Any spec worth writing answers the fundamental question: what are we building and why? Beyond that, good specs share a few specific characteristics:

    Clear success criteria. Not just features, but how you’ll know the thing works. This matters especially when AI agents are generating implementations—they need something concrete to validate against.

    Constraints and boundaries. What’s out of scope. What technologies or patterns to use or avoid. Performance requirements. AI tools are prone to scope creep and assumption-making without explicit boundaries.

    Examples of expected behavior. Concrete inputs and outputs, edge cases, error states. These serve as both specification and implicit test cases.

    Context about the broader system. How this piece fits into what exists. AI assistants lack awareness of surrounding code and architectural decisions unless you tell them.

    The SpecOps Context

    When modernizing legacy government systems, specs serve a different purpose than typical development documentation. They’re not just implementation guides—they are artifacts that preserve institutional knowledge. This changes what “good” looks like.

    A SpecOps specification document must work for multiple audiences simultaneously: domain experts who verify that the spec captures policy intent, software developers and AI coding agents who need precision to generate correct implementations, and future humans who need to understand why the system behaves a certain way years from now—possibly after everyone currently involved has moved on.

    That last audience is the one most spec formats neglect entirely.

    Three States, Not One

    Legacy system specs can’t just describe “what the system does.” They need to distinguish between:

    1. Current system behavior—what the legacy code actually does today, bugs and all
    2. Current policy requirements—what the system should do according to governing statutes and regulations
    3. Technical constraints—what the system cannot do regardless of policy, due to missing integrations or platform limitations

    These three things can be in alignment or tension at any moment. And that alignment can shift over time without the code changing—a policy update tomorrow can transform compliant behavior into a violation.

    Known Deviation Patterns

    Consider the example of a benefits system that should verify income against a state tax agency records, but the legacy system only captures self-reported income because the integration with the tax agency was never built. A good spec would make this explicit:

    Policy requirement: Per [directive], applicant income must be verified against tax agency records prior to benefit approval.

    Current implementation: Self-reported income only. Applicant provides income information on Form X.

    Deviation reason: No interface to tax agency income verification service exists. Integration requested in 2019, not funded.

    Modernization note: Modern implementation should include tax agency income verification integration.

    This surfaces the gap, documents why it exists, and gives the modernization effort clear direction—without pretending the legacy system does something it doesn’t.

    Explicit Ambiguity as a Feature

    There’s something that seems almost radical about a methodology that says write down what you don’t know. Traditional documentation can project false confidence. It often describes how things should work and quietly omits the messy parts.

    A spec that explicitly marks areas of tension or uncertainty is more honest, more useful for risk assessment, and a better starting point for modernization. It’s an invitation for future clarification rather than a false endpoint.

    A spec with unresolved tension is better than no reviewable documentation at all. 

    Policy Grounding

    Government system specs need explicit links to authorizing statutes, regulations, or directives. Not just “these items are excluded from income calculations” but “per 42 USC § 1382a, the following items are excluded from income calculations”

    This is the why that survives personnel turnover. It’s what allows future teams to evaluate whether behavior that was correct five years ago still aligns with current policy.

    Decision Records

    When domain experts verify a spec, they make judgment calls—especially where legacy behavior diverges from current policy understanding. Those decisions need to be captured in the spec, not in a separate document that gets lost.

    The spec becomes the repository of institutional reasoning, not just institutional behavior.

    Accessible or Precise?

    The SpecOps approach says that specs should be “readable by domain experts while detailed enough to guide implementation.” This is genuinely hard.

    Options include stratified specs (plain-language summaries with expandable technical detail), executable specs (written as tests that are simultaneously human-readable and machine-verifiable), or annotated specs (a single verbose document where technical precision is explained inline).

    Given that the spec is meant to be the source of truth that outlasts implementations, keeping everything in one artifact—even at the cost of verbosity—reduces the risk of layers drifting apart over time.

    The Road Ahead

    We’re still in early days. Questions remain open:

    • How granular should policy references be?
    • What’s the right way to represent known deviations?
    • How should specs age—versioning, or is git history enough?
    • What level of detail helps AI agents versus adding noise?

    These will get answered empirically as more agencies adopt the approach. The methodology will evolve. The important thing is to start—to surface questions that were previously invisible, to give future teams something to interrogate rather than nothing at all.

    Because the knowledge is what matters. Everything else is implementation details.

    #ai #artificialIntelligence #chatgpt #governmentServices #legacySystems #systemModernization

  9. What Does a Good Spec File Look Like?

    Most legacy government systems exist in a state of profound documentation poverty. The knowledge lives in the heads of retiring employees, in COBOL comments from 1987, in binders that may or may not reflect current behavior. Against this baseline, the question of what makes a “good” spec file takes on different dimensions than it might in greenfield development.

    Common Elements

    Any spec worth writing answers the fundamental question: what are we building and why? Beyond that, good specs share a few specific characteristics:

    Clear success criteria. Not just features, but how you’ll know the thing works. This matters especially when AI agents are generating implementations—they need something concrete to validate against.

    Constraints and boundaries. What’s out of scope. What technologies or patterns to use or avoid. Performance requirements. AI tools are prone to scope creep and assumption-making without explicit boundaries.

    Examples of expected behavior. Concrete inputs and outputs, edge cases, error states. These serve as both specification and implicit test cases.

    Context about the broader system. How this piece fits into what exists. AI assistants lack awareness of surrounding code and architectural decisions unless you tell them.

    The SpecOps Context

    When modernizing legacy government systems, specs serve a different purpose than typical development documentation. They’re not just implementation guides—they are artifacts that preserve institutional knowledge. This changes what “good” looks like.

    A SpecOps specification document must work for multiple audiences simultaneously: domain experts who verify that the spec captures policy intent, software developers and AI coding agents who need precision to generate correct implementations, and future humans who need to understand why the system behaves a certain way years from now—possibly after everyone currently involved has moved on.

    That last audience is the one most spec formats neglect entirely.

    Three States, Not One

    Legacy system specs can’t just describe “what the system does.” They need to distinguish between:

    1. Current system behavior—what the legacy code actually does today, bugs and all
    2. Current policy requirements—what the system should do according to governing statutes and regulations
    3. Technical constraints—what the system cannot do regardless of policy, due to missing integrations or platform limitations

    These three things can be in alignment or tension at any moment. And that alignment can shift over time without the code changing—a policy update tomorrow can transform compliant behavior into a violation.

    Known Deviation Patterns

    Consider the example of a benefits system that should verify income against a state tax agency records, but the legacy system only captures self-reported income because the integration with the tax agency was never built. A good spec would make this explicit:

    Policy requirement: Per [directive], applicant income must be verified against tax agency records prior to benefit approval.

    Current implementation: Self-reported income only. Applicant provides income information on Form X.

    Deviation reason: No interface to tax agency income verification service exists. Integration requested in 2019, not funded.

    Modernization note: Modern implementation should include tax agency income verification integration.

    This surfaces the gap, documents why it exists, and gives the modernization effort clear direction—without pretending the legacy system does something it doesn’t.

    Explicit Ambiguity as a Feature

    There’s something that seems almost radical about a methodology that says write down what you don’t know. Traditional documentation can project false confidence. It often describes how things should work and quietly omits the messy parts.

    A spec that explicitly marks areas of tension or uncertainty is more honest, more useful for risk assessment, and a better starting point for modernization. It’s an invitation for future clarification rather than a false endpoint.

    A spec with unresolved tension is better than no reviewable documentation at all. 

    Policy Grounding

    Government system specs need explicit links to authorizing statutes, regulations, or directives. Not just “these items are excluded from income calculations” but “per 42 USC § 1382a, the following items are excluded from income calculations”

    This is the why that survives personnel turnover. It’s what allows future teams to evaluate whether behavior that was correct five years ago still aligns with current policy.

    Decision Records

    When domain experts verify a spec, they make judgment calls—especially where legacy behavior diverges from current policy understanding. Those decisions need to be captured in the spec, not in a separate document that gets lost.

    The spec becomes the repository of institutional reasoning, not just institutional behavior.

    Accessible or Precise?

    The SpecOps approach says that specs should be “readable by domain experts while detailed enough to guide implementation.” This is genuinely hard.

    Options include stratified specs (plain-language summaries with expandable technical detail), executable specs (written as tests that are simultaneously human-readable and machine-verifiable), or annotated specs (a single verbose document where technical precision is explained inline).

    Given that the spec is meant to be the source of truth that outlasts implementations, keeping everything in one artifact—even at the cost of verbosity—reduces the risk of layers drifting apart over time.

    The Road Ahead

    We’re still in early days. Questions remain open:

    • How granular should policy references be?
    • What’s the right way to represent known deviations?
    • How should specs age—versioning, or is git history enough?
    • What level of detail helps AI agents versus adding noise?

    These will get answered empirically as more agencies adopt the approach. The methodology will evolve. The important thing is to start—to surface questions that were previously invisible, to give future teams something to interrogate rather than nothing at all.

    Because the knowledge is what matters. Everything else is implementation details.

    #ai #artificialIntelligence #chatgpt #governmentServices #legacySystems #systemModernization

  10. What Does a Good Spec File Look Like?

    Most legacy government systems exist in a state of profound documentation poverty. The knowledge lives in the heads of retiring employees, in COBOL comments from 1987, in binders that may or may not reflect current behavior. Against this baseline, the question of what makes a “good” spec file takes on different dimensions than it might in greenfield development.

    Common Elements

    Any spec worth writing answers the fundamental question: what are we building and why? Beyond that, good specs share a few specific characteristics:

    Clear success criteria. Not just features, but how you’ll know the thing works. This matters especially when AI agents are generating implementations—they need something concrete to validate against.

    Constraints and boundaries. What’s out of scope. What technologies or patterns to use or avoid. Performance requirements. AI tools are prone to scope creep and assumption-making without explicit boundaries.

    Examples of expected behavior. Concrete inputs and outputs, edge cases, error states. These serve as both specification and implicit test cases.

    Context about the broader system. How this piece fits into what exists. AI assistants lack awareness of surrounding code and architectural decisions unless you tell them.

    The SpecOps Context

    When modernizing legacy government systems, specs serve a different purpose than typical development documentation. They’re not just implementation guides—they are artifacts that preserve institutional knowledge. This changes what “good” looks like.

    A SpecOps specification document must work for multiple audiences simultaneously: domain experts who verify that the spec captures policy intent, software developers and AI coding agents who need precision to generate correct implementations, and future humans who need to understand why the system behaves a certain way years from now—possibly after everyone currently involved has moved on.

    That last audience is the one most spec formats neglect entirely.

    Three States, Not One

    Legacy system specs can’t just describe “what the system does.” They need to distinguish between:

    1. Current system behavior—what the legacy code actually does today, bugs and all
    2. Current policy requirements—what the system should do according to governing statutes and regulations
    3. Technical constraints—what the system cannot do regardless of policy, due to missing integrations or platform limitations

    These three things can be in alignment or tension at any moment. And that alignment can shift over time without the code changing—a policy update tomorrow can transform compliant behavior into a violation.

    Known Deviation Patterns

    Consider the example of a benefits system that should verify income against a state tax agency records, but the legacy system only captures self-reported income because the integration with the tax agency was never built. A good spec would make this explicit:

    Policy requirement: Per [directive], applicant income must be verified against tax agency records prior to benefit approval.

    Current implementation: Self-reported income only. Applicant provides income information on Form X.

    Deviation reason: No interface to tax agency income verification service exists. Integration requested in 2019, not funded.

    Modernization note: Modern implementation should include tax agency income verification integration.

    This surfaces the gap, documents why it exists, and gives the modernization effort clear direction—without pretending the legacy system does something it doesn’t.

    Explicit Ambiguity as a Feature

    There’s something that seems almost radical about a methodology that says write down what you don’t know. Traditional documentation can project false confidence. It often describes how things should work and quietly omits the messy parts.

    A spec that explicitly marks areas of tension or uncertainty is more honest, more useful for risk assessment, and a better starting point for modernization. It’s an invitation for future clarification rather than a false endpoint.

    A spec with unresolved tension is better than no reviewable documentation at all. 

    Policy Grounding

    Government system specs need explicit links to authorizing statutes, regulations, or directives. Not just “these items are excluded from income calculations” but “per 42 USC § 1382a, the following items are excluded from income calculations”

    This is the why that survives personnel turnover. It’s what allows future teams to evaluate whether behavior that was correct five years ago still aligns with current policy.

    Decision Records

    When domain experts verify a spec, they make judgment calls—especially where legacy behavior diverges from current policy understanding. Those decisions need to be captured in the spec, not in a separate document that gets lost.

    The spec becomes the repository of institutional reasoning, not just institutional behavior.

    Accessible or Precise?

    The SpecOps approach says that specs should be “readable by domain experts while detailed enough to guide implementation.” This is genuinely hard.

    Options include stratified specs (plain-language summaries with expandable technical detail), executable specs (written as tests that are simultaneously human-readable and machine-verifiable), or annotated specs (a single verbose document where technical precision is explained inline).

    Given that the spec is meant to be the source of truth that outlasts implementations, keeping everything in one artifact—even at the cost of verbosity—reduces the risk of layers drifting apart over time.

    The Road Ahead

    We’re still in early days. Questions remain open:

    • How granular should policy references be?
    • What’s the right way to represent known deviations?
    • How should specs age—versioning, or is git history enough?
    • What level of detail helps AI agents versus adding noise?

    These will get answered empirically as more agencies adopt the approach. The methodology will evolve. The important thing is to start—to surface questions that were previously invisible, to give future teams something to interrogate rather than nothing at all.

    Because the knowledge is what matters. Everything else is implementation details.

    #ai #artificialIntelligence #chatgpt #governmentServices #legacySystems #systemModernization

  11. What Does a Good Spec File Look Like?

    Most legacy government systems exist in a state of profound documentation poverty. The knowledge lives in the heads of retiring employees, in COBOL comments from 1987, in binders that may or may not reflect current behavior. Against this baseline, the question of what makes a “good” spec file takes on different dimensions than it might in greenfield development.

    Common Elements

    Any spec worth writing answers the fundamental question: what are we building and why? Beyond that, good specs share a few specific characteristics:

    Clear success criteria. Not just features, but how you’ll know the thing works. This matters especially when AI agents are generating implementations—they need something concrete to validate against.

    Constraints and boundaries. What’s out of scope. What technologies or patterns to use or avoid. Performance requirements. AI tools are prone to scope creep and assumption-making without explicit boundaries.

    Examples of expected behavior. Concrete inputs and outputs, edge cases, error states. These serve as both specification and implicit test cases.

    Context about the broader system. How this piece fits into what exists. AI assistants lack awareness of surrounding code and architectural decisions unless you tell them.

    The SpecOps Context

    When modernizing legacy government systems, specs serve a different purpose than typical development documentation. They’re not just implementation guides—they are artifacts that preserve institutional knowledge. This changes what “good” looks like.

    A SpecOps specification document must work for multiple audiences simultaneously: domain experts who verify that the spec captures policy intent, software developers and AI coding agents who need precision to generate correct implementations, and future humans who need to understand why the system behaves a certain way years from now—possibly after everyone currently involved has moved on.

    That last audience is the one most spec formats neglect entirely.

    Three States, Not One

    Legacy system specs can’t just describe “what the system does.” They need to distinguish between:

    1. Current system behavior—what the legacy code actually does today, bugs and all
    2. Current policy requirements—what the system should do according to governing statutes and regulations
    3. Technical constraints—what the system cannot do regardless of policy, due to missing integrations or platform limitations

    These three things can be in alignment or tension at any moment. And that alignment can shift over time without the code changing—a policy update tomorrow can transform compliant behavior into a violation.

    Known Deviation Patterns

    Consider a benefits system that should verify income against state tax agency records, but the legacy system captures only self-reported income because the integration with the tax agency was never built. A good spec would make this explicit:

    Policy requirement: Per [directive], applicant income must be verified against tax agency records prior to benefit approval.

    Current implementation: Self-reported income only. Applicant provides income information on Form X.

    Deviation reason: No interface to tax agency income verification service exists. Integration requested in 2019, not funded.

    Modernization note: Modern implementation should include tax agency income verification integration.

    This surfaces the gap, documents why it exists, and gives the modernization effort clear direction—without pretending the legacy system does something it doesn’t.
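The deviation record above can also live in a machine-readable form alongside the prose. A minimal sketch, where the field names are illustrative rather than any formal SpecOps schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KnownDeviation:
    """One documented gap between policy and legacy behavior.
    Field names are illustrative, not a formal SpecOps schema."""
    policy_requirement: str
    current_implementation: str
    deviation_reason: str
    modernization_note: Optional[str] = None

income_check = KnownDeviation(
    policy_requirement="Applicant income must be verified against tax "
                       "agency records prior to benefit approval.",
    current_implementation="Self-reported income only (Form X).",
    deviation_reason="No interface to the tax agency verification service; "
                     "integration requested in 2019, not funded.",
    modernization_note="Include tax agency income verification integration.",
)
```

Keeping the record structured makes it easy to list every known deviation in a system, which is exactly the risk-assessment input modernization teams need.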

    Explicit Ambiguity as a Feature

    There’s something that seems almost radical about a methodology that says write down what you don’t know. Traditional documentation can project false confidence. It often describes how things should work and quietly omits the messy parts.

    A spec that explicitly marks areas of tension or uncertainty is more honest, more useful for risk assessment, and a better starting point for modernization. It’s an invitation for future clarification rather than a false endpoint.

    A spec with unresolved tension is better than no reviewable documentation at all. 

    Policy Grounding

    Government system specs need explicit links to authorizing statutes, regulations, or directives. Not just “these items are excluded from income calculations” but “per 42 USC § 1382a, the following items are excluded from income calculations.”

    This is the why that survives personnel turnover. It’s what allows future teams to evaluate whether behavior that was correct five years ago still aligns with current policy.
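This grounding pattern is easy to make mechanical: each rule carries its citation so the “per [citation]” form can be generated rather than retyped. An illustrative sketch, where the citation matches the example above but the rule wording is a placeholder, not the actual statutory list:

```python
# Illustrative sketch: a spec rule carries its authorizing citation so the
# "why" survives turnover. The citation matches the example in the text;
# the rule wording is a placeholder, not the actual statutory language.
income_rule = {
    "rule": "the following items are excluded from income calculations",
    "authority": "42 USC § 1382a",
}

def grounded(entry: dict) -> str:
    """Render the rule in the 'per [citation], ...' form described above."""
    return f"Per {entry['authority']}, {entry['rule']}."

assert grounded(income_rule).startswith("Per 42 USC")
```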

    Decision Records

    When domain experts verify a spec, they make judgment calls—especially where legacy behavior diverges from current policy understanding. Those decisions need to be captured in the spec, not in a separate document that gets lost.

    The spec becomes the repository of institutional reasoning, not just institutional behavior.

    Accessible or Precise?

    The SpecOps approach says that specs should be “readable by domain experts while detailed enough to guide implementation.” This is genuinely hard.

    Options include stratified specs (plain-language summaries with expandable technical detail), executable specs (written as tests that are simultaneously human-readable and machine-verifiable), or annotated specs (a single verbose document where technical precision is explained inline).

    Given that the spec is meant to be the source of truth that outlasts implementations, keeping everything in one artifact—even at the cost of verbosity—reduces the risk of layers drifting apart over time.
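Of the options above, executable specs are the easiest to demonstrate concretely. A hypothetical sketch in pytest style, where `monthly_benefit`, the base rate, and both rules are invented stand-ins rather than any real benefits program:

```python
# Hypothetical executable-spec sketch: each spec statement doubles as a test.
# `monthly_benefit` and its base rate are invented stand-ins, not a real
# benefits program.

def monthly_benefit(countable_income: int, base_rate: int = 914) -> int:
    """Benefit = base rate minus countable income, floored at zero."""
    return max(base_rate - countable_income, 0)

def test_benefit_reduces_dollar_for_dollar():
    # Spec: each dollar of countable income reduces the benefit by a dollar.
    assert monthly_benefit(100) == 814

def test_benefit_never_goes_negative():
    # Spec: income above the base rate yields a zero benefit, not a debt.
    assert monthly_benefit(2000) == 0
```

The test names read as plain-language policy statements, which is what lets a single artifact serve both domain experts and machines.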

    The Road Ahead

    We’re still in early days. Questions remain open:

    • How granular should policy references be?
    • What’s the right way to represent known deviations?
    • How should specs age—versioning, or is git history enough?
    • What level of detail helps AI agents versus adding noise?

    These will get answered empirically as more agencies adopt the approach. The methodology will evolve. The important thing is to start—to surface questions that were previously invisible, to give future teams something to interrogate rather than nothing at all.

    Because the knowledge is what matters. Everything else is implementation details.

    #ai #artificialIntelligence #chatgpt #governmentServices #legacySystems #systemModernization

  12. The Future is Ahead of Schedule

    MCP Apps and the Acceleration of Just-in-Time Interfaces

    In August, Dan Munz and I wrote about the end of civic tech’s interface era, arguing that the rise of AI-generated, just-in-time interfaces would fundamentally change how civic technologists think about designing government services. We acknowledged that these ideas were still mostly theoretical—”this is still an idea that lies in the future,” we wrote. “But the future is getting here very quickly.”

    That’s starting to look like a pretty significant understatement.

    Just three months later, the organizations behind the Model Context Protocol (MCP) — the open standard that connects AI assistants to data sources and tools — have announced MCP Apps, a formal extension for delivering interactive user interfaces through the MCP protocol. What we described in our earlier post as an emerging concept is now being standardized by Anthropic, OpenAI, and the MCP community. The timeline from theoretical possibility to a formal specification to guide production implementations wasn’t years or even months. It was weeks.

    And we’d better get used to it: this is what change looks like in the AI era.

    From Concept to Standard in Record Time

    When we initially wrote about just-in-time interfaces, we pointed to early experiments and proofs of concept: Shopify’s internal prototyping with generative AI, Google’s Stitch and Opal projects, AWS’s explorations with PartyRock. These seemed like interesting signals, but they were scattered efforts using different approaches and solving similar problems in ways that were not obviously compatible.

    MCP Apps seems poised to change that. It provides a standardized way for AI tools to deliver interactive interfaces — not as a speculative idea, but as a specification that developers can start to implement today. The extension enables AI-powered tools that can present rich, interactive interfaces while maintaining the security, auditability, and consistency that production systems will require.

    The design is deliberately lean, starting with HTML-based interfaces delivered through sandboxed iframes. But the implications reach further. As the team behind this effort notes, this is starting to look like “an agentic app runtime: a foundation for novel interactions between AI models, users, and applications.”

    This matters for government digital services because it validates the core thesis of our earlier post: the constraints that forced civic designers to build one interface for everyone are eroding faster than most people anticipated. Certainly faster than we did.

    The Infrastructure and the Ingredients

    MCP Apps provides the delivery mechanism — a standardized way to serve interactive interfaces through AI systems. The specification itself is deliberately lean, focusing on core infrastructure: HTML templates delivered through sandboxed iframes, JSON-RPC protocols for communication, and multiple layers of security (iframe sandboxing, predeclared templates, auditable messaging, and user consent requirements).
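Because MCP Apps rides on MCP’s existing JSON-RPC transport, the messages involved are ordinary JSON-RPC 2.0 envelopes. The sketch below shows a generic such envelope; the method name and params are invented placeholders, not the actual MCP Apps schema:

```python
import json

# Generic JSON-RPC 2.0 envelope of the kind MCP builds on. The method name
# and params below are invented placeholders, NOT the MCP Apps schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "ui/render",                      # hypothetical method name
    "params": {"template": "status-card", "data": {"case": "A-123"}},
}

wire = json.dumps(request)          # what actually crosses the transport
decoded = json.loads(wire)
assert decoded["jsonrpc"] == "2.0"  # version field required by JSON-RPC 2.0
```

Auditable messaging falls out of this design almost for free: every exchange is a small, loggable JSON document.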

    What MCP Apps doesn’t specify is what makes those interfaces good or appropriate for government use. That’s where the foundational work civic technologists have already done becomes critical.

    When we wrote about just-in-time interfaces in August, we noted that Shopify’s generative UI prototyping works in part because their design system is built on tokens—named variables that store key aspects of design systems like colors, spacing, and typography. We noted that “tokens aren’t sufficient to make just-in-time UIs a reality, but they probably are foundational.”

    MCP Apps now provides the plumbing. But the quality of AI-generated government interfaces will still depend on having the right ingredients: well-structured design systems, clear interaction principles, and encoded policy logic. The U.S. Web Design System, the VA.gov Design System, the National Cancer Institute Design System, and other design systems used in government use tokens. That existing infrastructure positions government agencies to potentially benefit from MCP Apps when the time comes to experiment with dynamic interfaces — not because MCP Apps requires tokens, but because tokenized design systems can give AI something coherent to work with when generating interfaces.
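To make the token idea concrete, here is a toy sketch of why tokens help a generator: styles get looked up by name instead of guessed as literals. The token names below are invented for illustration, not actual USWDS tokens:

```python
# Invented token set (not actual USWDS token names) showing why tokens give
# a generator something coherent to work with: values are resolved by name,
# never hard-coded into each generated interface.
tokens = {
    "color.primary": "#005ea2",
    "spacing.2": "16px",
    "font.body": "Public Sans, sans-serif",
}

def style_for(component: str) -> dict:
    """Resolve a component's style from token references, not literal values."""
    rules = {"button": {"background": "color.primary", "padding": "spacing.2"}}
    return {prop: tokens[ref] for prop, ref in rules[component].items()}

assert style_for("button")["background"] == "#005ea2"
```

If the agency updates `color.primary`, every generated interface inherits the change, which is exactly the consistency guarantee a one-off generated UI would otherwise lack.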

    The architectural decisions in MCP Apps demonstrate another principle that veteran civic technologists will recognize: building on proven patterns rather than inventing everything from scratch. Using MCP’s existing JSON-RPC protocol means developers can use familiar tools. Prioritizing security from the start means it won’t need to be retrofitted later. These are the kinds of decisions that distinguish serious infrastructure from interesting experiments—and exactly the kinds of decisions that government technology teams need to see before they’ll trust a new approach for delivering citizen-facing services.

    What This Means for Civic Designers

    The rapid standardization of interactive interfaces in AI systems has immediate implications for how civic designers should think about their work.

    First, it underscores that the shift from fixed, multitenant interfaces to adaptive, context-specific experiences isn’t just theoretically possible — it’s actively being built. The expertise that civic designers have developed around creating design systems, documenting interaction patterns, and encoding policy logic won’t become obsolete. It will become more valuable, because it provides the necessary ingredients that AI systems will use to generate appropriate interfaces.

    Second, it underscores the importance of getting the upstream architecture right. As we wrote in August, expertise in civic tech will move upstream — from implementation to architecture, from specific solutions to systemic standards. MCP Apps makes this more concrete. The work of defining interaction principles, building component libraries, and establishing visual identity standards becomes foundational to building great experiences, not nice-to-haves.

    Third, it highlights the compressed timeline that government agencies are now facing. In previous waves of technological change, governments had years to observe how the private sector adopted new approaches before deciding whether (and how) to follow suit. The telephone era unfolded over decades. The Internet era compressed change to years. The AI era is compressing change to months. MCP Apps emerged from theoretical concept to production standard in less time than it typically takes a government agency to complete a procurement cycle for new software.

    This mismatch between the pace of technological change and the pace of government adoption isn’t new – but the gap is widening at an accelerating rate.

    The Infrastructure We Need Now

    If just-in-time interfaces are moving from concept to production this quickly, what should government digital services teams be doing now to prepare?

    The answer isn’t to rush into production deployments of AI-generated interfaces. The better approach is to strengthen the foundations that make such deployments viable when the time is right.

    That means investing in design systems that use tokens and are built with the assumption that they’ll need to support dynamic interface generation. It means continuing the hard work of encoding policy logic in formats that AI systems can understand—efforts like the Digital Benefits Network’s Rules as Code community of practice aren’t just preparing for a possible future, they’re building essential infrastructure for a future that’s arriving ahead of schedule.
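“Encoding policy logic in formats that AI systems can understand” can be as simple as expressing one rule as plain, testable code. A hypothetical rules-as-code sketch, where the thresholds and household-size adjustment are entirely invented:

```python
# Hypothetical rules-as-code sketch. The income limits and household-size
# adjustment below are invented for illustration, not any real program.
INCOME_LIMIT = {1: 1500, 2: 2000, 3: 2500}  # monthly limit by household size

def eligible(monthly_income: int, household_size: int) -> bool:
    """One policy rule expressed as code that an AI system, a caseworker
    tool, and a test suite can all read the same way."""
    limit = INCOME_LIMIT.get(household_size, 2500 + 500 * (household_size - 3))
    return monthly_income <= limit

assert eligible(1400, 1)
assert not eligible(2600, 2)
```

A just-in-time interface can then be generated around the rule without re-deriving (or misstating) the policy itself.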

    It also means rethinking how government agencies approach risk and experimentation. The traditional model of waiting until a technology is fully mature before considering adoption doesn’t work when the maturity cycle has compressed from years to months. Agencies need to develop the capacity to experiment safely and learn quickly—running controlled pilots, establishing clear evaluation criteria, and building the organizational muscle to rapidly deploy what works while quickly abandoning what doesn’t.

    Acceleration Requires New Muscles

    Perhaps the most important takeaway from the rapid emergence of MCP Apps isn’t about the technology itself. It’s about the pace of change in the AI era and what that means for how government organizations operate.

    Three months ago, we described just-in-time interfaces as lying in the future. Today, there’s a formal specification proposal for delivering them. The team behind the MCP protocol has built an early access SDK to demonstrate the patterns, and projects like MCP-UI are already implementing support. The cycle of innovation, standardization, and adoption that once took years now happens in weeks and months — even if we’re still in the early stages of this particular evolution.

    This creates genuine challenges for government organizations whose processes and decision-making structures were designed for a different era. But it also creates opportunities. Agencies that have invested in the right foundations — strong design systems, encoded policy logic, clear interaction principles — are positioned to benefit from these rapid advances. Those that haven’t will find themselves further behind with each passing month.

    The future we wrote about in August isn’t coming. It’s here, and it arrived faster than even we expected. Government digital services will need to adapt to just-in-time interfaces.

    The challenge for those of us working in and with governments is whether these organizations can develop the capacity to adapt at the speed that technological change now demands. Because if three months taught us anything, it’s that the next three months will bring changes we haven’t yet imagined.

    #artificialIntelligence #governmentServices #justInTimeInterfaces #modelContextProtocol #userExperience

  13. 🎨🤑 "Designing" government services like it's a trendy coffee shop?! 🍏💼 Joe Gebbia, with zero government experience, plans to turn the entire US bureaucracy into an Apple Store and expects applause. 🙄 Is this #satire or just another day in the land of infinite irony?
    chrbutler.com/the-national-des #DesignThinking #GovernmentServices #Innovation #Irony #HackerNews #ngated

  14. Mashable: Nevada government offices close after massive ‘network security incident’. “Nevada closed all in-person services at state offices on Monday following the ‘network security incident,’ which was first detected early Sunday, according to a press statement from Gov. Joe Lombardo. While the in-person services were unavailable because of the outage, emergency services and 911 remained […]

    https://rbfirehose.com/2025/08/26/mashable-nevada-government-offices-close-after-massive-network-security-incident/

  15. The Register: Glasgow City Council online services crippled following cyberattack. “A cyberattack on Glasgow City Council is causing massive disruption with a slew of its digital services unavailable. The local authority has confirmed the attack started on June 19 and attributed it to a supply chain issue involving a third-party contractor’s supplier.”

    https://rbfirehose.com/2025/07/02/the-register-glasgow-city-council-online-services-crippled-following-cyberattack/

  16. The Quiet Crisis in Legacy System Modernization

    Government agencies have started experimenting with AI—particularly large language models (LLMs)—to accelerate the long-standing problem of modernizing legacy systems. A recent MITRE analysis, Legacy IT Modernization with AI, shows early promise. LLMs can be used to extract logic from old codebases and generate “intermediate representations” that help teams refactor or rewrite aging systems. It’s not a perfect solution, and it still requires human oversight, but it’s a serious step forward.

    So far, the conversation on AI-assisted legacy modernization has centered on large, mission-critical federal systems—mainframe applications that support tax processing, logistics, or entitlement programs. But this focus overlooks a vast and growing problem: the thousands of small, back-office systems that keep state and local governments running. These applications don’t often make headlines, but they quietly power licensing, payroll, casework, and many other daily operations.

    Many of these systems are written in obscure, decades-old languages (think MS Access). Documentation is sparse or nonexistent. The people who built and maintained them are retiring. And the government’s ability to recruit and retain technical staff has not kept pace with demand. What’s more, the sheer number of these systems—and the institutional knowledge they depend on—makes traditional modernization approaches slow and expensive.

    The MITRE report provides a useful proof point: AI can help accelerate modernization. But that benefit needs to reach beyond a few flagship systems. If modernization efforts stay focused only at the federal level or only on the biggest programs, governments at every level will be stuck maintaining outdated software with dwindling staff and rising risk.

    To meet this challenge, governments need a broader approach. That means funding, staffing, and supporting modernization efforts that include every level of government—not just those at the federal level. It means experimenting with AI-assisted refactoring tools on a wider range of systems. And it means ensuring that institutional knowledge doesn’t retire out of reach before the code is made maintainable again.

    AI won’t solve legacy modernization on its own. But it’s the first tool in a long time that changes the speed and scale of what’s possible. We should use it—everywhere we can.

    #AI #artificialIntelligence #ChatGPT #governmentServices #legacySystems #llm #systemModernization #technology

  21. Why do billionaires and their minions argue so consistently for privatization of public services (eg, postal, transportation, social security, etc)?

    Simple: because they cannot make money from publicly run services. Once these services are 'for profit', it's dead-certain that some services will be abandoned and others will be squeezed to the breaking point.

    Running government services 'like a business' is an intermediary step, and it's a complete hoax.

    #GovernmentServices

  22. Okay, this is just stupid! One of the benefits of having a #website is that people can access your services at any time, day or night, right? Well, I just went to the #SocialSecurity website to try and get a benefit verification letter, which I need because I'm trying to reestablish a #VocationalRehabilitation case. I clicked "Sign In" and was greeted by this message:

    This service is not available at this time.
    Please try again during our regular service hours (Eastern Time):

    | Day | Service Hours |
    |-----|--------------|
    | Monday-Friday | 4:15 a.m. - 1:00 a.m. |
    | Saturday | 5:00 a.m. - 11:00 p.m. |
    | Sunday | 8:00 a.m. - 11:30 p.m. |
    | Federal Holidays | Same hours as the day the holiday occurs. |

    If you need immediate assistance:

    • You may call us Monday through Friday: 8:00AM - 7:00PM at: 1-800-772-1213
    • If you are deaf or hard-of-hearing, call our toll-free TTY number: 1-800-325-0778

    This website limitation is especially frustrating for me because I have a #Non24HourSleepWakeDisorder and an unpredictable schedule. It feels like the website is designed without thinking about people like me who need flexibility.
    Has anyone else out there had similar issues with government services or other websites?
    #Vent #Venting #Disability #ChronicIllness #Accessibility #DisabilityRights #InclusiveDesign #GovernmentServices #PublicServices
    @[email protected] @[email protected] @disabilityjustice @accessibility @chronicillness @spooniechat @spoonies

  23. Route Fifty: How one state is expanding language access for UI applications. “The New Jersey Department of Labor and Workforce Development, in partnership with U.S. Digital Response and Google.org, has developed translation resources that agencies can use with artificial intelligence models to sharpen the accuracy of English-to-Spanish translations for unemployment insurance services and […]

    https://rbfirehose.com/2024/12/08/route-fifty-how-one-state-is-expanding-language-access-for-ui-applications/

  24. Most of the patterns we use (knowingly or not) to guide digital modernization work in government come from the world of software development. We are drawn to these patterns because they are widely used and well understood in the software world, and they enable us to think about complex problems in ways that are easier to understand. We also typically think of digital modernization work as primarily work that revolves around software and technology.

    In practice, these patterns do not always work well (or as well as we think they should) because a big chunk of the hard work of doing digital modernization in government is organizational, legal, and bureaucratic. Software development patterns don’t really help with these things all that much. The change we seek is simply beyond them.

    The Strangler Fig Pattern is a software development pattern that is well known to people in the world of civic tech, and has inspired government digital service teams for many years. It is one of the most widely known patterns among digital service teams in government because it offers a way to migrate from an old legacy system to a new software solution — a very common challenge in government.*

    One of the drawbacks of the Strangler Fig Pattern that I have seen is that using it successfully can require lots of time. Finding the seams in a legacy application that you can break off and build out on a new technical stack is often very complex (and contentious). Efforts to use this approach sometimes stall after the migration to a new system has begun, but before it is finished. This sometimes has the effect of leaving organizations in the unenviable position of having multiple systems in place underpinning a service and never fully completing the transition away from the older one.**
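The mechanics of the pattern are simple even when finding the seams is not: a routing layer sends each seam to either the old or the new system, and the migrated set grows over time. A minimal sketch, with all handler and route names hypothetical:

```python
# Minimal strangler-fig routing sketch. Handlers and route names are
# hypothetical; the point is that migration proceeds one seam at a time.
def legacy_handler(path: str) -> str:
    return f"legacy:{path}"

def modern_handler(path: str) -> str:
    return f"modern:{path}"

MIGRATED = {"/licenses/renew"}   # seams already moved to the new stack

def route(path: str) -> str:
    """Send migrated seams to the new system, everything else to the old one."""
    handler = modern_handler if path in MIGRATED else legacy_handler
    return handler(path)

assert route("/licenses/renew") == "modern:/licenses/renew"
assert route("/payroll/run") == "legacy:/payroll/run"
```

The stall risk described above shows up here too: if `MIGRATED` stops growing, the organization is left running both stacks indefinitely.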

    Another very common pattern used in government digital work is the Facade Pattern, an approach that hides the complexity of a particular service or function behind a friendlier front-end.*** Some common examples of the Facade Pattern in action can be seen when digital teams:

    • Create a digitized version of a form that on submission gets converted into a PDF version of a paper form that then goes through the same (or similar) review and approval process as paper forms.
    • Use RPA tools and applications to automate existing business process steps — like in procurement processes — speeding them up and (potentially) reducing the errors resulting from human data entry but keeping the same basic steps of the process in place.
    • Leverage LLMs to simplify or summarize complex or disparate information without changing the structure, quality, or location of that source data.

    All of these uses of the Facade Pattern can result in a better experience for the people inside government and for those who depend on government services, and all are worth investing time and energy in under the right circumstances. But none of these examples change the fundamental issues that can make service delivery challenging. They make changes only at the epidermal level of government organizations. Sometimes this is enough, but where do we turn for patterns when we aspire to more fundamental change?
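    The first bullet above — a digitized form that feeds an unchanged paper-style review process — can be sketched as a facade in a few lines. Everything here is illustrative (the field names, the "PDF" conversion, the review queue); the point is that the friendly front-end validates and reformats input while the back-office process behind it stays exactly as it was.

    ```python
    class PaperReviewQueue:
        """Stand-in for the unchanged back-office review and approval process."""
        def __init__(self):
            self.inbox = []

        def submit(self, document):
            self.inbox.append(document)

    class FormFacade:
        """Friendlier web-style entry point in front of the legacy process.

        The facade validates input and converts it into the same artifact
        (here, a pretend PDF) the paper workflow already expects — the
        underlying process itself is not changed.
        """
        REQUIRED = {"name", "address"}

        def __init__(self, queue):
            self.queue = queue

        def submit_form(self, fields):
            missing = self.REQUIRED - fields.keys()
            if missing:
                raise ValueError(f"missing fields: {sorted(missing)}")
            pdf = f"PDF[{fields['name']} / {fields['address']}]"
            self.queue.submit(pdf)
            return pdf

    queue = PaperReviewQueue()
    facade = FormFacade(queue)
    facade.submit_form({"name": "Ada", "address": "1 Main St"})
    ```

    The caller's experience improves (instant validation, no paper), but the review queue's structure, quality, and location are untouched — which is exactly the limitation the paragraph above describes.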

    The book Platformland, by Richard Pope, came onto my radar as I started thinking about the inadequacy of commonly used patterns for digital modernization. It's a book full of really good ideas about what public digital services can be, organized around a set of patterns for building what he refers to as the "next generation of public services." Its observations will help many of the people working in public sector organizations today lay the groundwork for better digital services.

    Those of us working to modernize the digital infrastructure of government need new patterns to help us pursue true digital modernization. This terrific new book offers a compelling and useful set of them.

    * For a more comprehensive picture of software patterns specifically focused on legacy system migration, check out the book Monolith to Microservices, by Sam Newman.

    ** In more recent work, the team that pioneered the Strangler Fig Pattern for modernizing large legacy systems is now looking at generative AI as a way to potentially speed up the process.

    *** If you think I’m misapplying the facade pattern to these examples, I’d love to have you share your thoughts in a comment.

    https://civic.io/2024/09/09/searching-for-patterns-in-digital-modernization/

    #civictech #governmentServices #legacySystems #softwarePatterns #systemModernization

  25. The "Diia" app will have eight new services, including car customs clearance, online marriage, services for veterans, a patient portal with medical records, open source code, and more documents such as educational certificates and diplomas, plus name change, marriage, and divorce. The app currently offers 14 digital documents and over 30 services, while the portal provides over one hundred services. #DigitalTransformation #GovernmentServices

  26. No military drafts, but other updates will be implemented: car re-registration, modernization of services for sailors, 8 business services combined in a single application, a map of shelters and safe points, and updates to registration services for individual entrepreneurs. #Ukraine #GovernmentServices