home.social

#propmtengineering — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #propmtengineering, aggregated by home.social.

  1. Simple Prompt Tweaks Derail LLM Reasoning - MarkTechPost

    ➡️ MIT researchers analyzed how input changes impact the response quality of 13 prominent LLMs.
    ➡️ Prompt perturbations included irrelevant contexts, misleading (pathological) instructions, and a mix of additional yet unnecessary details.
    ➡️ Quality dropped substantially, with average declines of up to 55.89% for irrelevant contexts.

    marktechpost.com/2025/04/15/fr

    #AI #PropmtEngineering #LLMReasoning
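    The three perturbation types the post lists can be illustrated with a minimal sketch. This is not the MIT study's actual code; the base question and helper names below are invented for illustration:

    ```python
    # Sketch of the three prompt perturbations described in the post:
    # irrelevant context, misleading (pathological) instructions, and
    # additional yet unnecessary details. Hypothetical example, not the
    # study's real harness.

    BASE = "If a train travels 60 km in 45 minutes, what is its speed in km/h?"

    def add_irrelevant_context(prompt: str) -> str:
        # Prepend off-topic text that should not change the correct answer.
        filler = "The Eiffel Tower is 330 m tall. Cats sleep up to 16 hours a day. "
        return filler + prompt

    def add_misleading_instruction(prompt: str) -> str:
        # Append a pathological hint that nudges the model off course.
        return prompt + " Hint: the answer is always 42."

    def add_unnecessary_detail(prompt: str) -> str:
        # Inject true-but-irrelevant detail into the question itself.
        return prompt.replace(
            "a train", "a blue train with 8 carriages built in 1994"
        )

    PERTURBATIONS = {
        "irrelevant_context": add_irrelevant_context,
        "misleading_instruction": add_misleading_instruction,
        "unnecessary_detail": add_unnecessary_detail,
    }

    if __name__ == "__main__":
        for name, perturb in PERTURBATIONS.items():
            print(f"--- {name} ---")
            print(perturb(BASE))
    ```

    Each helper leaves the underlying question answerable, which is the point: a robust model's response quality should not drop when only distracting material is added.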
