home.social

#amazon-bedrock — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #amazon-bedrock, aggregated by home.social.

  1. Python dominates the AI discussion. But do #Java teams really have to switch languages? Yuriy Bezsonov & @sascha242 show how production-ready #KI agents are built with #Java, #SpringAI & #AmazonBedrock – with memory, RAG & tools.

    Discover: champ.ly/zjVUSsAx

    @awscloud

  2. Python dominates the AI discussion. But do #Java teams really have to switch languages? Yuriy Bezsonov & @sascha242 show how production-ready #KI agents are built with #Java, #SpringAI & #AmazonBedrock – with memory, RAG & tools.

    Discover: javapro.io/de/produktionsreife

    @awscloud

  3. Getting started with Strands Agents, an AI agent framework where the LLM decides for itself

    Strands Agents, developed by AWS, is an AI agent framework in which the LLM autonomously reasons and acts. This post introduces how to build a practical agent without coding complex workflows.

    aisparkup.com/posts/8469

  4. Your AI feels smart… but still fails in production? ⚠️
    That’s not a model issue. It’s a fine-tuning decision.
    Opus vs Sonnet on Amazon Bedrock — explained from the trenches. 🧠⚙️
    Read before you fine-tune. 👇
    medium.com/@rogt.x1997/opus-vs

    #GenAI #LLMs #AmazonBedrock

  5. 🚀 AWS just launched S3 Vectors GA, slashing vector storage costs by 90% and adding native integration with Amazon Bedrock. This could reshape how we build generative‑AI pipelines and compete with vector DBs like Pinecone, Weaviate, and Qdrant. Curious how it impacts your stack? Read the full breakdown. #AWS #S3Vectors #AmazonBedrock #GenerativeAI

    🔗 aidailypost.com/news/aws-says-

  6. 🚀 Quick guide to building RAG on Amazon Bedrock in just 30 minutes: create an S3 bucket, upload a PDF, configure a Knowledge Base with Titan Embeddings, sync, and query directly. Costs under $0.50, secure, monitored via CloudWatch. Great for beginners starting out with AI on AWS. #AI #ML #AWS #AmazonBedrock #RAG #GenAI #Vietnam #tech

    dev.to/sinariver/deje-de-habla
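The steps the post lists (bucket, PDF, Knowledge Base, sync, query) can be sketched with boto3's `bedrock-agent-runtime` client. This is a minimal sketch, assuming a Knowledge Base that has already been created and synced; the KB ID and model ARN below are placeholders:

```python
# Placeholders -- substitute your own Knowledge Base ID and model ARN.
KB_ID = "EXAMPLEKB123"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"

def build_rag_request(question: str, kb_id: str = KB_ID, model_arn: str = MODEL_ARN) -> dict:
    """Assemble the retrieve_and_generate payload for a Knowledge Base query."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

def query_knowledge_base(question: str) -> str:
    """Run the query against the synced Knowledge Base (needs AWS credentials)."""
    import boto3  # imported here so the sketch reads without AWS set up
    client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
    response = client.retrieve_and_generate(**build_rag_request(question))
    return response["output"]["text"]
```

Keeping the payload builder separate from the call makes it easy to swap the model ARN or KB ID without touching the query path.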

  7. Build a Supervisor Agent with Amazon Bedrock to orchestrate EC2 listing & CloudWatch CPU metrics via Lambda — no direct API calls, fully automated. hackernoon.com/how-to-build-an #amazonbedrock
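The Lambda behind such an action group might look like the sketch below; the handler shape and one-hour metric window are assumptions, while `describe_instances` and `get_metric_statistics` are the real boto3 calls:

```python
from datetime import datetime, timedelta, timezone

def summarize_cpu(datapoints: list) -> dict:
    """Pure helper: reduce CloudWatch datapoints to sample count and min/max/mean CPU."""
    values = [dp["Average"] for dp in datapoints]
    if not values:
        return {"samples": 0, "min": None, "max": None, "mean": None}
    return {
        "samples": len(values),
        "min": min(values),
        "max": max(values),
        "mean": round(sum(values) / len(values), 2),
    }

def lambda_handler(event, context):
    """List EC2 instances, then fetch each instance's CPU over the last hour."""
    import boto3  # available in the Lambda runtime
    ec2 = boto3.client("ec2")
    cw = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=1)
    results = {}
    for reservation in ec2.describe_instances()["Reservations"]:
        for inst in reservation["Instances"]:
            metrics = cw.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=300,
                Statistics=["Average"],
            )
            results[inst["InstanceId"]] = summarize_cpu(metrics["Datapoints"])
    return results
```

The supervisor agent would invoke this handler as a tool and narrate the returned summary, so no caller ever hits the EC2 or CloudWatch APIs directly.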

  8. 🌐 Availability
    Current: Public beta via #Anthropic #API for Tier 4 customers
    Available: #AmazonBedrock integration
    Coming soon: #GoogleCloud #VertexAI support
    Future: Exploring integration with other Claude products

    anthropic.com/news/1m-context

  9. Anthropic strikes back. The new Claude Opus 4.1 model aims to be a coding champion

    Anthropic, one of OpenAI's main rivals, has unveiled its latest artificial intelligence model – Claude Opus 4.1.

    The new version, released just three months after the debut of the Claude 4 series, focuses on improved capabilities in coding, reasoning, and carrying out complex, multi-step tasks, so-called agentic tasks.

    The main strength of Claude Opus 4.1 is said to be its precision in software engineering tasks, which reached 74.5% in Anthropic's internal tests. That is a noticeable improvement over the company's previous models, including Claude Opus 4 (72.5%) and Claude Sonnet 3.7 (62.3%). The new model is also said to be significantly better at "in-depth data analysis and tracking details".

    Claude Opus 4.1 is available today to Anthropic customers, in Claude Code, and via the API. The model has also been made available on the cloud platforms of key partners: Amazon Bedrock and Google Cloud's Vertex AI.

    Anthropic announced that this is not the end of the news: in the "coming weeks" it plans to ship "substantially larger improvements" to its models. The launch is another move in the fierce AI-market rivalry, especially given the announcements expected this week from its main competitor, OpenAI.

    Training a "bad" AI as the key to safety? An interesting technique from Anthropic researchers

    #AI #AmazonBedrock #Anthropic #Claude #ClaudeOpus41 #GoogleVertexAI #inżynieriaOprogramowania #LLM #news #programowanie #sztucznaInteligencja

  10. AI for research? 🧬 Benchling + Claude are taking off!

    • Data checks & documentation automated
    • AI assistants for the lab & SQL
    • Security through Amazon Bedrock

    #ai #ki #artificialintelligence #Benchling #AmazonBedrock #LifeSciences #Forschung

    LIKE, share, READ, and FOLLOW now! Drop us a line in the comments!

    kinews24.de/benchling-claude-a

  11. #OpenAI decided to expire the remaining credit that I bought just over a year ago, so rather than buy more expiring credit, I moved my low-traffic #Discord #GenerativeAI fun bots over to #AmazonBedrock. At least #AWS will bill me as little as a few cents.

  12. AWS Machine Learning – Harness the power of MCP servers with Amazon Bedrock Agents

    https://aws.amazon.com/blogs/machine-learning/harness-the-power-of-mcp-servers-with-amazon-bedrock-agents

    I’ve been digging into this new AWS blog post about Model Context Protocol (MCP) servers and their integration with Amazon Bedrock Agents, and I have to say, I’m pretty excited about what this means for those of us building with AI.

    What the heck is MCP anyway?

    MCP (Model Context Protocol) is Anthropic’s open protocol for connecting large language models to basically any data source or tool. Think of it as a standard way for AI models to talk to everything from databases to APIs without needing custom code for every single connection. AWS has now integrated this with Amazon Bedrock Agents.

    In the past, connecting these agents to diverse enterprise systems created development bottlenecks, with each integration requiring custom code and ongoing maintenance – a standardization challenge that slows the delivery of contextual AI assistance across an organization's digital ecosystem. This is exactly the problem the Model Context Protocol (MCP) addresses: it provides a standardized way for LLMs to connect to data sources and tools.

    I’m always looking for ways to simplify infrastructure. I don’t want to write endless custom integrations that I’ll have to maintain forever. MCP promises to solve exactly this problem, which means I can focus more on what I want to build and less on the plumbing.

    An ecosystem in the making

    What really caught my attention was this bit:

    Today, MCP is providing agents standard access to an expanding list of accessible tools that you can use to accomplish a variety of tasks. In time, MCP can promote better discoverability of agents and tools through marketplaces, enabling agents to share context and have common workspaces for better interaction, and scale agent interoperability across the industry.

    This is where things get interesting! It’s not just about making individual connections easier—it’s about creating an entire ecosystem of interoperable tools. Are MCPs like APIs?

    The right architecture for the job

    The post explains that MCP uses a client-server architecture.

    Whether you’re connecting to external systems or internal data stores or tools, you can now use MCP to interface with all of them in the same way. The client-server architecture of MCP enables your agent to access new capabilities as the MCP server updates without requiring any changes to the application code.

    Ok, I like the separation of concerns. I can update my data sources without touching my application logic, and vice versa.
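As a concrete picture of that client-server split, here is a minimal sketch of an MCP client session, assuming the `mcp` Python SDK; the server script and tool name (`cost_server.py`, `get_cost_and_usage`) are made-up placeholders:

```python
import asyncio

def find_tool(tools: list, name: str):
    """Pure helper: look up a tool descriptor by name (SDK objects or plain dicts)."""
    for t in tools:
        tool_name = getattr(t, "name", None) or (t.get("name") if isinstance(t, dict) else None)
        if tool_name == name:
            return t
    return None

async def main():
    # SDK imports kept local so the sketch reads without the package installed.
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Launch a (hypothetical) cost-analysis MCP server over stdio.
    server = StdioServerParameters(command="python", args=["cost_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = (await session.list_tools()).tools
            if find_tool(tools, "get_cost_and_usage"):
                # The server can grow new tools; this client code never changes.
                result = await session.call_tool("get_cost_and_usage", {"days": 30})
                print(result)

# asyncio.run(main())  # needs a real MCP server script to talk to
```

The point of the sketch: the client only discovers and calls tools by name, which is exactly why a server update doesn't force an application change.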

    Real problems, real solutions

    The example AWS gives is all about understanding cloud spending:

    Imagine asking questions like “Help me understand my Bedrock spend over the last few weeks” or “What were my EC2 costs last month across regions and instance types?” and getting a human-readable analysis of the data instead of raw numbers on a dashboard. The system interprets your intent and delivers precisely what you need—whether that’s detailed breakdowns, trend analyses, visualizations, or cost-saving recommendations.

    I like this. In my work, I am often dealing with complex AWS bills. There’s a whole world of innovation around simplifying IT spend (AWS and beyond).

    Simple setup (I hope!)

    The blog post claims the setup process is straightforward:

    You’re now ready to create an agent that can invoke these MCP servers to provide insights into your AWS spend. You can do this by running the python main.py command.

    I think I’m gonna try this walkthrough sometime soon. It looks easy enough, and I have not yet played with Agents enough to fully understand what is going on, so this will be a good way to explore.

    Future possibilities

    The post lists some additional ideas:

    A multi-data source agent that retrieves data from different data sources such as Amazon Bedrock Knowledge Bases, Sqlite, or even your local filesystem.

    A developer productivity assistant agent that integrates with Slack and GitHub MCP servers.

    A machine learning experiment tracking agent that integrates with the Opik MCP server from Comet ML for managing, visualizing, and tracking machine learning experiments directly within development environments.

    It’s going to be a fun year exploring all the potential of MCPs and Amazon Bedrock. Let’s go!

  13. Extended Thinking with Anthropic’s Claude 3.7 Sonnet on Amazon Bedrock is Wow!

    I think the title of this blog post kinda sums it up. I’ve been testing out the new Extended Thinking capability with Anthropic’s Claude 3.7 Sonnet on Amazon Bedrock, and it’s just wow.

    In Bedrock, just switch on “Model reasoning” in the playground and give it a whirl.

    So many possibilities. I’m still collecting my thoughts. More to come.
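For anyone who'd rather flip the same switch from code than from the playground, here is a minimal sketch using the Converse API; the inference profile ID and 2,000-token thinking budget are placeholder values to adapt:

```python
def build_reasoning_request(prompt: str, budget_tokens: int = 2000) -> dict:
    """Assemble Converse API kwargs that enable extended thinking."""
    return {
        "modelId": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # placeholder profile ID
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        # Extended thinking is switched on via additionalModelRequestFields.
        "additionalModelRequestFields": {
            "thinking": {"type": "enabled", "budget_tokens": budget_tokens}
        },
    }

def converse_with_reasoning(prompt: str) -> dict:
    """Call Bedrock (requires AWS credentials and model access in the region)."""
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    return client.converse(**build_reasoning_request(prompt))
```

The response then interleaves the model's reasoning content with its final answer, which is the part worth inspecting first.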

  14. Implementing least privilege access for Amazon Bedrock

    This is a really useful and well-explained blog post on how to apply the Principle of Least Privilege with Amazon Bedrock. It's a topic I get asked about on a regular basis: "How do I limit access to the LLMs available in Amazon Bedrock?" This blog post does a great job of explaining by example how to do just that!

    The PoLP is a security concept that advises granting the minimal level of access—or permissions—necessary for users, programs, or systems to perform their tasks. The main idea is that the fewer permissions an entity has, the lower the risk of malicious or accidental damage.

    Amazon Bedrock provides access to a variety of high-performing FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon.

    With a third-party FM, approval might include accepting a EULA. You can limit which identities can subscribe to which models, so that usage stays compliant with the EULAs your legal department has reviewed.
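An IAM policy in the spirit of the post might look like this sketch, which allows invoking only one approved model; the model ARN is a placeholder and your approved list will differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyApprovedModel",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"
    }
  ]
}
```

Because no other model ARNs are listed, requests to any other FM are implicitly denied, which is the least-privilege posture the post argues for.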

    Implementing least privilege access for Amazon Bedrock