#otlp — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #otlp, aggregated by home.social.
-
🛠️ Monitor logs & events: user prompts, tool results with accept/reject decisions, #API requests with detailed cost/duration/token counts, error tracking with retry attempts & tool decision patterns
⚙️ Setup options: Launch via #VSCode or terminal with environment variables, configure centralized #OTLP exports via #gRPC or #HTTP protocols, use administrator-managed settings for organization-wide control
-
You asked, we listened: OTLP/gRPC support is now available from Coroot 1.14.3 onwards! 🧑💻 🔥 🤌 https://github.com/coroot/coroot/releases
#OpenTelemetry #OTLP #OTEL #observability #linux #opensource #softwarelibre #freesoftware #eBPF #Coroot
-
Wednesday Links - Edition 2025-08-13
https://dev.to/0xkkocel/wednesday-links-edition-2025-08-13-1mch
#java #jvm #springboot #json #otlp
-
Is anybody I know (or who reads this post 😅) using #Azure Container Apps with #OpenTelemetry collection enabled, sending the telemetry to an external provider?
https://learn.microsoft.com/en-us/azure/container-apps/opentelemetry-agents#otlp-endpoint
-
Score! Managed to hook the cluster at home up to #Grafana #cloud over #OTLP for shipping #traces 🎉. Next up:
* Figure out how to do a better presentation in Grafana
* Optimize OTEL collection deployment, work out all the issues, etc.
Then the real fun begins, as I have two WIP packages to bring out:
* One for @reactphp 's filesystem
* And one for Bunny
Both of those need to be refined, but with Shawn Maddock's (https://github.com/smaddock) initial push in this direction and help
-
[Translation] Hey, where's my error? How OpenTelemetry captures errors
Programming languages disagree about what errors or exceptions are and how to handle them. That raises a question: what should you use if you need standardized telemetry and error reporting for microservices written in those languages? OpenTelemetry can be the answer. We've translated an article that explains how OpenTelemetry handles errors, how errors on spans differ from logs, and how to add OTel metadata and events to spans.
https://habr.com/ru/companies/flant/articles/892784/
#monitoring #opentelemetry #span #events #спан #события #otlp #jaeger #микросервисы #ошибки
-
Time flies. A while back I wrote my own #otel pipeline tool in #rust. Right now it’s doing a few things I can’t do elsewhere.
In addition to #otlp, I’m also accepting remote writes from Prometheus servers, splitting those according to some routing criteria, and routing them all to different metric storage backends (right now just Mimir and Prometheus).
I had a lot of trouble finding existing tools to handle these things. I’m really happy about the progress made as things get productionized.
-
Prometheus 3.0 has been released, bringing new features and improvements while maintaining stability and compatibility. #Prometheus #CloudNative #monitoring #otlp
-
Hey! 👋 Check out my new blog post "Let's use OpenTelemetry with Spring" https://spring.io/blog/2024/10/28/lets-use-opentelemetry-with-spring #java #opentelemetry #openzipkin #otlp #observability #micrometerio
-
It's not totally clear to me what needs to be done to create a consumer of #opentelemetry data in #rust.
I am planning to just run my own HTTP and gRPC services that then use the opentelemetry-proto crate to parse #OTLP messages and then process and send them along downstream.
-
This weekend I was frustrated with my debugging, and just not up to digging in and carefully, meticulously analyzing what was happening. So … I took a left turn (at Albuquerque) and decided to explore an older idea to see if it was interesting and/or useful. My challenging debugging was all about network code, for a collaborative, peer-to-peer sharing thing; more about that effort some other time.
A bit of back story
A number of years ago, when I was working with a solar energy manufacturer, I was living and breathing events, APIs, and running very distributed systems, sometimes over crap network connections. One of the experiments I did (that worked out extremely well) was to enable distributed tracing across all the software components, collecting and analyzing traces to support integration testing. Distributed tracing and the now-popular CNCF OpenTelemetry project weren’t a big thing yet, but they were around – kind of getting started. The folks at Uber (Yuri Shkuro, at least) had released Jaeger, an open-source trace collector with web-based visualization, which was enough to get started. I wrote about that work back in 2019 (that post still gets some recurring traffic from search engines, although it’s pretty dated now and not entirely useful).
We spun up our services, enabled tracing, and ran integration tests on the whole system. After which, we had the traces available for visual review. It was useful enough that we ended up evolving it so that a single developer could stand up most of their pieces locally (with a sufficiently beefy machine), and capture and view the traces locally. That provided a great feedback loop as they could see performance and flows in the system while they were developing fixes, updates and features. I wanted to see, this time with an iOS/macOS focused library, how far I could get trying to replicate that idea (time boxed to the weekend).
The Experiment!
I’ve been loosely following the server-side Swift distributed tracing efforts since they started, and it looked pretty clear that I could use them directly. Moritz Lang publishes swift-otel, which is a Swift-native, concurrency-supporting library. With his examples, it was super quick to hack into my test setup. The library is set up to run with service-lifecycle pieces over SwiftNIO, so there’s a pile of dependencies that come in with it. I’d be a little hesitant to add that to my library, but for an integration test thing, I’m totally good with it. There were some quirks to using it with XCTest, most of which I hacked around by shoving the tracer setup into a global actor and exposing an idempotent bootstrap call. With that in place, I added explicit traces into my tests, and then started adding more and more, including into my library, and could see the results in a locally running instance of Jaeger (running Jaeger using Docker).
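A minimal sketch of that idempotent bootstrap shim, assuming the swift-distributed-tracing API (its InstrumentationSystem may only be bootstrapped once per process); the swift-otel exporter setup is deliberately left out, since that API was in flux:

```swift
import Tracing  // swift-distributed-tracing

@globalActor
actor TracerBootstrap {
    static let shared = TracerBootstrap()
    private var bootstrapped = false

    /// Safe to call from every test's setUp: the underlying
    /// InstrumentationSystem.bootstrap may only run once per process.
    func bootstrapIfNeeded(_ tracer: any Tracer) {
        guard !bootstrapped else { return }
        InstrumentationSystem.bootstrap(tracer)
        bootstrapped = true
    }
}
```

Each test’s setUp can then call `await TracerBootstrap.shared.bootstrapIfNeeded(makeOTelTracer())` – where `makeOTelTracer()` stands in for whatever swift-otel setup you land on – without caring which test runs first.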
Some Results
The following image is an overview of the traces generated by a single test (testCreate):
The code I’m working with is all pushing events over web sockets, so inside of the individual spans (which are async closures in my test) I’ve dropped in some span events, one of which is shown in detail below:
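In code, a span event is just a named, timestamped annotation added to the current span. A minimal sketch using the swift-distributed-tracing API (the test, attribute, and event names here are made up for illustration, not the ones from the post):

```swift
import Tracing
import XCTest

final class WebSocketTests: XCTestCase {  // hypothetical test case
    func testCreate() async throws {
        try await withSpan("testCreate") { span in
            span.attributes["test.transport"] = "websocket"

            // Stand-in for the actual async work under test.
            try await Task.sleep(nanoseconds: 1_000_000)

            // A point-in-time annotation; this is what shows up as an
            // event row inside the span in Jaeger.
            var event = SpanEvent(name: "ws.message.sent")
            event.attributes["message.size.bytes"] = 512
            span.addEvent(event)
        }
    }
}
```

If no tracer has been bootstrapped, these calls quietly become no-ops, which is why the bootstrap shim above has to run first.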
In a lot of respects, this is akin to dropping in os_signposts that you might view in Instruments, but it’s external to Xcode infrastructure. Don’t get me wrong, I love Instruments and what it does – it’s been amazing and really the gold standard in tooling for me for years – but I was curious how far this approach would get me.
Choices and Challenges
Using something like this in production – with live-running iOS or macOS apps – would be another great end-to-end scenario. More so if the infrastructure your app was working from also used tracing. There’s a separate tracing project at CNCF – OpenTelemetry Swift – that looks oriented towards doing just that. I seriously considered using it, but I didn’t see a way to use that package to instrument my library and not bring in the whole pile of dependencies. With the swift-distributed-tracing library, it’s an easy (and small) dependency add – and you only need to take the hit of the extra dependencies when you want to use the tracing.
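For a sense of scale, a library that instruments against just the tracing API carries roughly this in its Package.swift (the package and target names are placeholders):

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyLibrary",  // placeholder name
    products: [
        .library(name: "MyLibrary", targets: ["MyLibrary"]),
    ],
    dependencies: [
        // The tracing API itself; backends like swift-otel stay out of
        // the library's dependency graph until a consumer opts in.
        .package(url: "https://github.com/apple/swift-distributed-tracing.git", from: "1.0.0"),
    ],
    targets: [
        .target(
            name: "MyLibrary",
            dependencies: [
                .product(name: "Tracing", package: "swift-distributed-tracing"),
            ]
        ),
    ]
)
```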
And I’ll just “casually” mention that if you pair this with server-side swift efforts, the Hummingbird project has support for distributed tracing currently built in. I expect Vapor support isn’t too far off, and it’s a continued focus to add more distributed tracing support for a number of prevalent server-side swift libraries over this coming summer.
See for Yourself (under construction/YMMV/etc)
I’ve tossed up my hack-job of a wrapper for tracing during testing with iOS and macOS – DistributedTracer – if you want to experiment with this kind of thing yourself. Feel free to use it, although if you’re amazed with the results, ALL credit should go to Moritz, the contributors to his package, and the contributors to swift-distributed-tracing, since they did the heavy lifting. The swift-otel library itself is undergoing some major API surface changes – so if you go looking, note that I worked from the current main branch rather than the latest release. Moritz shared with me that while the API was not completely solid yet, this is more of the pattern he wants to expose for an upcoming 1.0 release.
Onward from here
I might push the DistributedTracer package further in the future. I think there’s real potential there, but it is not without pitfalls. Some of the challenges stem from constantly exporting data from an iOS app, so there’s a privacy (and privacy manifest) bit that needs to be seriously considered. There are also challenges with collecting enough data (but not too much), related choices in sampling so that it aligns with traces generated from infrastructure, as well as how to reliably transfer it from device to an endpoint. Nothing that can’t be overcome, but it’s not a small amount of work either.
Weekend hacking complete, I’m calling this a successful experiment. Okay, now back to actually debugging my library…
https://rhonabwy.com/2024/04/02/distributed-tracing-with-testing-on-ios-and-macos/
-
Do you have to forward large amounts of logs between two syslog-ng instances? #OTLP ( @opentelemetry protocol) support in #syslog_ng can solve this problem. syslog-ng-otlp() forwards most name-value pairs, and it also scales well across multiple CPU cores.
https://www.syslog-ng.com/community/b/blog/posts/using-opentelemetry-between-syslog-ng-instances
-
I hacked together opentelemetry distributed tracing support for ebpf_exporter: https://github.com/cloudflare/ebpf_exporter/pull/297
So far I've managed to add some block I/O tracing via tracepoints, but it's unclear how to tie this together with userspace traces, since there's no way for userspace to pass the trace id.
Are there any other kernel areas that people are interested in having integrated with distributed tracing? Sockets? Scheduling? Something else?
-
self-managed or fleet-managed #elastic APM server? https://www.elastic.co/guide/en/apm/guide/8.8/getting-started-apm-server.html has the details (release coming very soon)
PS: including #OTLP support of course