home.social

#r2ai — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #r2ai, aggregated by home.social.

  1. 🚀 Big update in r2ai 1.2.6 -- Fixes command/script editing and adds support for comments in commands. Interrupting auto mode now allows one last call to resolve. Added conversation compacting, prompts are markdown-only now, r2ai -E to edit the configuration file, and more! #r2ai #radare2 #reverseengineering github.com/radareorg/r2ai/rele

  2. #Connect25 is bringing an #AI + reverse engineering session diving into #r2ai!

    @pancake, creator of #radare2 and Senior Mobile Security Research Engineer at NowSecure, will show how AI is transforming reverse engineering. In this session, you’ll see how AI can:
    - Analyze mobile apps
    - Detect privacy issues
    - Help you understand what’s going on step by step

    See the #r2 session and register here: events.bizzabo.com/nowsecure-c

  3. I heard people like seeing r2ai solve crackmes in auto mode. Here's the 5th IOLI crackme for Windows #radare2 #r2ai

  4. Comparing #meta #llama 4 (maverick / scout) vs #qwen 32b for decompilation purposes #r2ai #reverseengineering
    PS: Groq is the best place to try all these models if you don't have the hardware
    PS: qwen-qwq reasoning takes more time but improves the output; much better than openai/claude/meta for decompilation use cases

  5. If anyone is curious about r2mcp: yes, it now runs locally with openwebui and mcpo #r2ai #radare2 #reverseengineering #llm

  6. The whole #mcp ecosystem is pure magic; here's a quick demo seamlessly running the r2mcp server. Kudos to the plugin's author @dnakov #r2ai #reverseengineering #llm #claude If you want to try it out, just run "r2pm -Uci r2mcp" and add the JSON block described in the repo's readme!
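
    The authoritative JSON block lives in the r2mcp repo's readme, so the snippet below is only a hypothetical sketch of the usual MCP client configuration shape; the server name and the `args` passed to r2pm are assumptions, not taken from the readme:

    ```json
    {
      "mcpServers": {
        "r2mcp": {
          "command": "r2pm",
          "args": ["-r", "r2mcp"]
        }
      }
    }
    ```

    After installing with "r2pm -Uci r2mcp", a config of this shape is what tells an MCP-capable client how to spawn the server; check the repo's readme for the exact block.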

  7. Some updates on #r2ai:
    - decai now supports auto mode for function calling, with ANY model
    - Added support for Gemini and X.AI endpoints
    - Started the full rewrite of the Python/JS code in plain C
    - Switched to gpt-4-turbo for a 128K context instead of 8K on the OpenAI backend
    - Recursive decompilation mode for inlining stubs and better type propagation

    #reverseengineering #radare2 #llm #ai #decompilation