home.social

#ffmpeg — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #ffmpeg, aggregated by home.social.

  1. Fascinating talk with FFmpeg and VLC contributors Jean-Baptiste Kempf and Kieran Kunhya:

    > "And everything we’ve just said in the past couple of minutes, every sentence is someone’s lifetime’s work. There are books about- … every sentence"

    lexfridman.com/ffmpeg-transcri

  2. Weekly review, issue 141 (2026, week 19)

    Topics:

    🗺️ New server for Bikerouter

    🗺️ Shared Bikerouter short links now show a preview

    ✈️ Map Enhancement for X-Plane 12

    ✈️ I now have a laminator and am using it to make my flight-sim checklists durable

    🎥 ffmpeg with the Video Toolbox encoder on Apple Silicon 🚀

    📸 FlowVision image viewer for macOS

    🔊 Listened to this week: MAXXIMUM, Eric Prydz, Hophiluck, Katja Kilig, ĀLLY

    #Wochenrückblick #Bikerouter #XPlane #FlightSim #MSFS #MapEnhancement #Checkliste #ffmpeg #VideoToolbox #FlowVision

    https://www.marcusjaschen.de/blog/2026/2026-19/

  3. How to create a 4x4 tile video thumbnail (or any other dimension) with #ffmpeg:

    ffmpeg -i input-video.avi -vf 'thumbnail=n=16,tile=4x4,scale=w=640:h=-1' output-image.jpg

    In my case I have a cheap AliExpress camera that can only save 60 s of video at a time, and I needed to find the one clip where something happened among hundreds of videos.
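    A sketch of how one might batch this over a directory (the glob and names are placeholders; -frames:v 1 keeps exactly one tiled image per clip):

    for f in *.avi; do
      ffmpeg -i "$f" -vf 'thumbnail=n=16,tile=4x4,scale=w=640:h=-1' \
             -frames:v 1 "${f%.avi}.jpg"
    done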

  4. Why am I only learning this now? I don't know, but it could have saved me hundreds of hours of compute time.

    If you use #ffmpeg on #macOS with Apple Silicon chips, it can be a good idea to use the codecs from Apple's Video Toolbox instead of the usual libx264 or libx265.

    The result is a remarkable (!) speed-up.

    1080p material now gets converted to HEVC at 16x speed (384 fps), and that on an old #Apple M1 Max 😲
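    For reference, a minimal sketch of such an invocation; the bitrate, filenames, and the hvc1 tag (for QuickTime compatibility) are my additions, not from the original post:

    # Hardware HEVC encode via Video Toolbox on Apple Silicon.
    ffmpeg -i input.mp4 -c:v hevc_videotoolbox -b:v 6000k -tag:v hvc1 -c:a copy output.mp4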

  5. A really big state machine: how we built cloud recording and AI note-taking in Telemost

    Hi everyone! My name is Ilya Grigoriev, and I'm a senior backend developer on the Telemost team. In this article I'll go over our experience building two features from the past year: AI summaries with Alice Pro and cloud recording to Disk. I'll show how we designed their architecture, why not everything worked on the first try, which systemic and technical constraints we ran into while working with media data, and how we ultimately built the pipeline for processing and analyzing it.

    habr.com/ru/companies/yandex/a

    #бэкенд #java #postgresql #ffmpeg #стейтмашина #телемост #медиасервер #оптимизация #оптимизация_производительности #backendразработка

  6. For the #wmhack demo, my screen recorder app was failing so I fell back to this handy shell function:

    record_screen () {
        # Record the full X11 desktop to a file (default: output.mp4).
        local file=${1:-output.mp4}
        # Current screen resolution, e.g. "1920x1080".
        local screen_size
        screen_size=$(xdpyinfo | awk '/dimensions/ {print $2}')
        ffmpeg -video_size "$screen_size" -f x11grab -i :0.0+0,0 "$file"
    }
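
    Usage, assuming an X11 session (press q in the ffmpeg terminal to stop):

    record_screen demo.mp4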

    #shell #ffmpeg #cli #screencast

  7. The following hashtags are trending across South African Mastodon instances:

    #ryzen
    #ffmpeg
    #whisper
    #opera
    #church

    Based on recent posts made by non-automated accounts. Posts with more boosts, favourites, and replies are weighted higher.

  8. The following hashtags are trending across South African Mastodon instances:

    #Wordle
    #wordle1778
    #internationalscurvyawarenessday
    #scurvyawareness
    #vitaminc
    #ryzen
    #ffmpeg
    #whisper
    #opera
    #church

    Based on recent posts made by non-automated accounts. Posts with more boosts, favourites, and replies are weighted higher.

  9. I recently replaced my #Ryzen 7 5825U laptop with a Ryzen 7 7840U laptop. WOW! I expected maybe a 10–20% boost in performance, but it's really night and day. Whether I'm using #FFMPEG to transcode a video, transcribing in #Whisper, or running #proteomics in FragPipe or DIA-NN, the new machine is a wonder.

    Details: HP EliteBook 845 G10 with 32 GB RAM; boot drive replaced with WD Black SN770 2TB.

  10. DataHoarders archive (markdown-formatted)

    A nicely presented DataHoarders archive has been created for the Epstein files.

    The archive is accessible online, as given in the sources below.

    Even if the content itself is of little interest to you, the way the frontend and backend are built is quite interesting. I'm interested in both backend and frontend programming and networking, so I think this is a treasure trove from both perspectives.

    YMMV

    When you glance through the Wikipedia pages on Jeffrey Epstein, you will find interesting tidbits about his nature, rise, and fall. Read them multiple times and you will learn more than you may want to know about this man, who was enabled by various forces to carry on with his behaviour. Go in with a neutral mind and read the sources if you want to know more.

    The Wikipedia material on Epstein is LONG and the amount of data is massive. Don't expect to even skim it in just a few minutes.

    There are 305 references in that document.

    When you visit this DataHoarders media archive, you get a pleasant presentation of the visual and printed data as released by the US DOJ.

    Quotes from the archive creators:

    Hey! We are two college students and we just want to share the technical part of our project because you might appreciate it. The DOJ released the Epstein files and we decided to host the entire thing ourselves and build a proper interface on top of it. Here is what the archive actually looks like.

    354GB total. 160GB of raw data from the original files and 194GB of our own processed data. Around 600,000 PDF files which actually contain roughly 1,400,000 individual pages inside them since many PDFs bundle multiple pages together when you scroll down. All 3,200 videos have been converted to HLS with adaptive bitrate streaming so quality adjusts automatically to your connection the same way Netflix does it.
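    For illustration, a sketch of the kind of ffmpeg invocation that produces such an HLS ladder; the renditions, bitrates, and filenames here are my assumptions, not the archive's actual settings:

    # Two-rendition HLS with a master playlist for adaptive bitrate switching.
    ffmpeg -i input.mp4 \
      -filter_complex "[0:v]split=2[v1][v2];[v1]scale=-2:720[v720];[v2]scale=-2:480[v480]" \
      -map "[v720]" -map "[v480]" -map 0:a -map 0:a \
      -c:v libx264 -b:v:0 3000k -b:v:1 1200k \
      -c:a aac -b:a 128k \
      -f hls -hls_time 6 -hls_playlist_type vod \
      -master_pl_name master.m3u8 \
      -var_stream_map "v:0,a:0 v:1,a:1" \
      -hls_segment_filename "stream_%v_%03d.ts" \
      stream_%v.m3u8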

    For the videos we ran a full audio extraction pipeline, converting video to audio MP4 and then audio to text, generating SRT subtitle files for every single video that contains spoken content. This means you can search for a word that was spoken in any video and find the exact moment it was said.
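    A sketch of that audio-to-subtitles step under the same caveat; the openai-whisper CLI is my assumption for the transcription tool:

    # Extract the audio track, then transcribe it to SRT.
    ffmpeg -i clip.mp4 -vn -c:a aac -b:a 128k clip-audio.m4a
    whisper clip-audio.m4a --model small --output_format srt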

    For the PDFs we converted every single page to PNG and ran OCR across all 1,400,000 pages. We then used Go to run AI agents that analyze and summarize the OCR output across the documents. The search engine works through tags associated with each specific file, built on top of all that processed data.

    The frontend is React Native; the infrastructure runs through Cloudflare.

    We also added the ability for users to create an anonymous account to like, comment, reply to others, or publish their own investigation posts on our platform.

    We are not stopping here. There is still a lot to do and we are pushing updates constantly.

    Naturally, ffmpeg and curl are a crucial tool combo for all this converting, fetching, and serving to work smoothly, but I don't need to tell you that. Many more tools are used; go in, read, and learn!

    Sources:

    exposingepstein.com/home

    en.wikipedia.org/wiki/Jeffrey_

    reddit.com/r/DataHoarder/comme

    #programming #database #video #HLS #pdf #recoding #streaming #json #backend #frontend #react #srt #subtitles #FFMPEG

  11. #handbrake encoding with the default #Mesa stack does not use GPU acceleration without the proprietary amf-amdgpu-pro drivers, which are available in the #AUR but appear to have been abandoned after AMD stopped publishing AMF separately from their stack (building on top of the open amdgpu drivers instead), and which offer inferior performance in other contexts anyway.

    #ffmpeg can do it with #VAAPI, but hammering out scripts for the amount of media I'm archiving is less than ideal.
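    For example, a VAAPI hardware encode in ffmpeg looks roughly like this (the device path and quality value are assumptions):

    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
      -vf 'format=nv12,hwupload' -c:v hevc_vaapi -qp 24 -c:a copy output.mkv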

    I could spin up a VM with the AMF drivers, but setting that up with SingleGPUPassthrough seems like overkill and a pain in the ass.

    Any other tools utilizing VAAPI that could make things a little easier?

  12. FFMPEG stuff:

    - nano-ffmpeg (nano-ffmpeg.vercel.app/): the TUI you see in action in this video
    - FFMPEG from zero to 100 (maulonobile.codeberg.page/soft): if you want to learn how FFMPEG works, this is the tutorial, with me and Andrea Ciraolo (youtube.com/watch?v=AHkD5XwFffE), that you need to watch
    - the commands from the tutorial are in my MLNotes (maulonobile.codeberg.page/soft)

    Video available here: t.me/mauriziolonobile/454

    #ffmpeg #nano-ffmpeg #MLNotes #unolinux #opensource

    @opensource

  13. New app. Cinch - a video compressor for Windows that fits your file under whatever upload limit you're dealing with. Drop a clip in, trim it down, pick a target size, and it just handles it.

    Six presets for the usual cases (Discord 8MB, Gmail 25MB, Slack, Matrix etc) or type your own number. It auto-retries until it lands inside the budget.

    FFmpeg under the hood, NVENC/AMF/QSV when your GPU plays along. Free, CC0, runs offline.
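    For illustration, the usual arithmetic behind hitting a size cap; this is my sketch, not Cinch's actual code, and the 8 MB target and 128 kbit/s audio are assumptions:

    # Budget in kilobits (8 MB = 8*8192 kbit) divided by duration, minus the audio share.
    # Real tools undershoot a few percent to leave room for container overhead.
    duration=$(ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4)
    video_kbps=$(awk -v d="$duration" 'BEGIN { print int(8*8192/d - 128) }')
    ffmpeg -y -i input.mp4 -c:v libx264 -b:v "${video_kbps}k" -pass 1 -an -f null /dev/null
    ffmpeg -i input.mp4 -c:v libx264 -b:v "${video_kbps}k" -pass 2 -c:a aac -b:a 128k output.mp4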

    apps.lashman.live/cinch/

    #OpenSource #CC0 #FOSS #Windows #FFmpeg

  14. Launching 30 FFmpeg streams at once will deadlock your GPU. Stop treating IPTV production like a lab experiment.

    Skip the marketing fluff and learn the realities of scaling:
    ✅ Staggered Startups to prevent PCIe deadlocks
    ✅ Active-Active Redundancy (K8s restarts = blackouts)
    ✅ IP-bound JWTs to stop token leakage
    ✅ Cloud Egress Tax vs Bare Metal

    Read the guide:
    🔗 servermo.com/howto/ffmpeg-nven
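    For reference, a minimal staggered-startup sketch (the inputs, sinks, and 2-second gap are placeholders, not from the guide):

    # Launch 30 encoders one at a time so they don't all initialize NVENC at once.
    for i in $(seq 1 30); do
      ffmpeg -hide_banner -i "input_${i}.ts" -c:v h264_nvenc -b:v 4M \
             -f mpegts "udp://239.0.0.${i}:1234" &
      sleep 2   # stagger: give each process time to set up its GPU context
    done
    wait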

    #IPTV #FFmpeg #DevOps #SysAdmin #BareMetal #Kubernetes

  15. The drawtext filter chokes on umlauts, causing it to drop trailing characters 😩
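    One workaround that often helps, since the text option is sensitive to shell escaping of non-ASCII: feed the string from a UTF-8 file and name an explicit font. The paths and filenames here are examples:

    ffmpeg -i in.mp4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=caption.txt" out.mp4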

  16. I'm currently freeing a few more of my purchased Audible audiobooks from their copy protection. With the help of FFMPEG they go from the protected AAX container into the free MP3 and M4A formats, and from there onto my backup drive. Never rely on the cloud; always make local backups! 😉
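    For reference, a sketch of the usual ffmpeg incantation; the activation bytes are account-specific, so the hex value below is a placeholder:

    # ffmpeg's -activation_bytes option decrypts Audible AAX input.
    ffmpeg -activation_bytes 1CEB00DA -i book.aax -c copy book.m4a
    ffmpeg -activation_bytes 1CEB00DA -i book.aax -c:a libmp3lame -q:a 2 book.mp3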

    #Audible #AudioBook #Hörbuch #Medien #Backup #Cloud #CopyProtection #Crack #ffmpeg

  17. 🆕 blog! “Reprojecting Dual Fisheye Videos to Equirectangular (LG 360)”

    I still use my obsolete LG 360 Camera. When copying MP4 videos from its SD card, they come out in "Dual Fisheye" format - which looks like this:

    VLC and YouTube will only play "Equirectangular" videos in spherical mode. So, how to convert a dual fisheye to…

    👀 Read more: shkspr.mobi/blog/2026/04/repro

    #ffmpeg #HowTo #LG360 #linux #video

  18. Reprojecting Dual Fisheye Videos to Equirectangular (LG 360)

    shkspr.mobi/blog/2026/04/repro

    I still use my obsolete LG 360 Camera. When copying MP4 videos from its SD card, they come out in "Dual Fisheye" format - which looks like this:

    VLC and YouTube will only play "Equirectangular" videos in spherical mode. So, how to convert a dual fisheye to equirectangular?

    The Simple Way

    ffmpeg \
      -i original.mp4 \
      -vf "v360=input=dfisheye:output=equirect:ih_fov=189:iv_fov=189" \
      360.mp4

    However, this has some "quirks".

    The first part of the video filter is v360=input=dfisheye:output=equirect - that just says to use the 360 filter on an input which is dual fisheye and then output in equirectangular.

    The next part is :ih_fov=189:iv_fov=189 which says that the input video has a horizontal and vertical field of view of 189°. That's a weird number, right?

    You'd kind of expect each lens to be 180°, right? Here's what happens if :ih_fov=180:iv_fov=180 is used:

    The lenses overlap a little bit, so using 180° means that certain portions are duplicated.

    I think the lenses technically offer 200°, but the physical casing prevents all of that from being viewed. I got to the value of 189° by trial and error. Mostly error! Using :ih_fov=189:iv_fov=189 gets this image, which has less overlap:

    It isn't perfect - but it preserves most of the image coherence.

    Cut Off Images

    There's another thing worth noticing - the top, right, bottom, and left "corners" of the circle are cut off. If the image sensor captured everything, the resultant fisheye would look something like this:

    I tried repaging the video to include the gaps, but it didn't make any noticeable difference.

    Making Equirectangular Videos Work With VLC

    Sadly, ffmpeg will not write the metadata necessary to let playback devices know the video is spherical. Instead, according to Bino3D, you have to use exiftool like so:

    exiftool \
      -XMP-GSpherical:Spherical="true" \
      -XMP-GSpherical:Stitched="true" \
      -XMP-GSpherical:ProjectionType="equirectangular" \
      video.mp4

    Putting It All Together

    The LG 360 records audio in 5.1 surround using AAC. That's already fairly well compressed, so there's no point squashing it down to Opus.

    The default video codec is h264, but the picture is going to be reprojected, so quality is always going to take a bit of a hit. Pick whichever codec you like to give the best balance of quality, file size, and encoding time.

    Run:

    ffmpeg \
      -i original.mp4 \
      -vf "v360=input=dfisheye:output=equirect:ih_fov=189:iv_fov=189" \
      -c:v libx265 -preset fast -crf 28 -c:a copy \
      out.mp4; exiftool \
      -XMP-GSpherical:Spherical="true" \
      -XMP-GSpherical:Stitched="true" \
      -XMP-GSpherical:ProjectionType="equirectangular" \
      out.mp4

    That will produce a reasonable equirectangular file suitable for viewing in VLC or in VR.

    If this has been useful to you, please stick a comment in the box!

    #ffmpeg #HowTo #LG360 #linux #video

  19. By the way, this is the first video I've made for social media with #DavinciResolve.

    A really hollow feeling in the pit of my stomach… after many years of #Lightworks #NLE, which has long been effectively unusable for me because of a bug left unfixed for over a year. Is this goodbye?

    So I'm relearning quite a bit.

    Unfortunately there aren't many options on #Linux when it comes to #Video.

    And no, neither Kdenlive nor Blender even plays my footage back properly on my machine; I already beat that topic to death here a while back.

    (Dear 2000% nerds: anyone who says #ffmpeg now risks the atomic end-times plonk. 🤪 )

    Hope dies last, of course: maybe #LWKS will still get its act together.

    #Resolve #Blackmagic #Davinci #VideoEditing

  20. Do you know what your #radio station actually plays? The #liquidsoap webhooks said the DJ was live and the metadata agreed, but listeners heard the wrong track bleed through for half a second during a live set.

    So I captured the #Icecast stream like a real listener and used #ffmpeg and spectral analysis to test the whole pipeline end to end and fix my #audio pipeline.
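    Roughly the kind of capture-and-inspect step that implies (the stream URL and MP3 mount are my assumptions):

    # Record 30 seconds of the public stream exactly as a listener would hear it.
    ffmpeg -t 30 -i https://radio.example.com/stream -c copy capture.mp3
    # Render a spectrogram image to eyeball bleed-through and compressor artefacts.
    ffmpeg -i capture.mp3 -lavfi showspectrumpic=s=1920x1080:legend=1 spectrum.png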

    Then the audio compressor generated harmonics that broke my tests in an unexpected way.

    attilagyorffy.com/blog/do-you-

  21. When you don't want to lug a laptop around, but you're in too deep with the terminal to convert videos on the go through some random app 😄

    #termux #ffmpeg

  22. AGC, or how to stop adjusting the volume by hand

    Many people have surely run into this problem: you're watching a quiet film on TV and it gets interrupted by abrupt, loud advertising; or on a video call everyone sounds fine except one participant whose microphone is so noisy it's as if they were standing next to a jet engine preparing for takeoff. Of course you can always adjust the speaker volume, but is that always convenient, or even possible?
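    ffmpeg itself ships two filters that automate this kind of gain control; a sketch with illustrative parameters:

    # EBU R128 loudness normalization to a -16 LUFS target:
    ffmpeg -i input.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11 -c:v copy normalized.mp4
    # Frame-by-frame dynamic normalization, closer to a classic AGC:
    ffmpeg -i input.mp4 -af dynaudnorm=f=150:g=15 -c:v copy agc.mp4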

    habr.com/ru/articles/1022424/

    #ffmpeg #agc #ару #алгоритмы #звук

  23. Soooo, you can now listen to the talking part from the intro of Undone (The Sweater Song) by Weezer forever and ever and ever

    localhose.com/sweatshop.html

    Edited + looped the audio and created the page entirely on my phone in termux with ffmpeg.
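    A sketch of the looping step (filenames are placeholders, not the actual commands used):

    # Repeat the excerpt indefinitely, capping the output at one hour.
    ffmpeg -stream_loop -1 -i sweater-talk.opus -t 3600 -c:a libopus 1h-sweater.opus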

    Direct opus link (1h): localhose.com/24h-sweater.opus
    Direct mp3 link (1h): localhose.com/24h-sweater.mp3

    #weezer #sweater #audio #termux #ffmpeg #android

    corteximplant.com/@zeyus/11636

  24. I got tired of tinkering with PCs and wrote my own Windows app in Flutter

    Hi, my name is Nikita. Many of you probably know the situation: a friend or relative asks you to "clean up the computer", "turn a PNG into a JPG", or "sort out the dump of files in Downloads". At some point I got fed up and decided to wrap all my routine scripts in a convenient graphical interface, so that I could just send someone a single .exe file and everything would work out of the box. No Python installs, no consoles. That's how my app SmartLauncher was born.

    habr.com/ru/articles/1019174/

    #flutter #python #windows #автоматизация #ffmpeg #open_source #утилиты #скрипты #desktop #petproject

  25. The long test is looking good for the `av1an`, `mkvmerge`, and `cleanup` modules, and the Golang client is doing exactly what it's supposed to, which is a welcome change from yesterday. I need to run long tests on the `ffmpeg` and `handbrake` modules next, but I'll handle those tomorrow.

    After that, I'll update documentation, release the `2.0.0` client, and deprecate the old client.

    #sisyphus #av1an #ffmpeg #matroska #encoding #programming #golang

  26. 🐢 Oh, look! Another overconfident developer trying to flex by reimagining #FFmpeg in #Rust. Because nothing screams "innovative genius" like rewriting a battle-tested, highly optimized codebase in a slower language for... reasons? 😂🚀
    github.com/sharifhsn/wedeo #overconfidentdeveloper #programminghumor #techdebate #innovation #HackerNews #ngated

  27. Speaking of that, it's still a free download (you might have to initiate it from the album page instead of the song)

    kikiala.bandcamp.com/album/pyr

    Features some 45 minutes of mixed-down road ambience, birdsong, ambient guitar, and a few "heavier" parts (metal-ish); overall just another background vibe like I tend to do

    #music #ambient #freedom #doyouspeakit #linux #ardour #ffmpeg #pipewire

  28. Vibe-coding a Windows .EXE with a GUI in AutoHotkey v2

    A story about turning a console script into a full-fledged Windows application with a GUI in AutoHotkey v2, with the help of neural networks and vibe coding. We walk through the stages, from finding the tools to wrestling with an interface in the style of 2000s-era software, without a single line of hand-written code.

    habr.com/ru/articles/1016392/

    #вайбкодинг #нейросети #программирование #autohotkey #cmd #ffmpeg #gemini #qwen

  29. The Sisyphus client rewrite continues after a bit of a break. The `ffmpeg` module is mostly finished and should serve as a good template for the `handbrake`, `av1an`, `mkvmerge`, and `cleanup` modules. Logging is progressing pretty well. The config has been expanded slightly and can now pull from TOML files on top of the standard environment variables.

    #sisyphus #encoding #ffmpeg #golang

  30. So here's my best attempt to generate #TikTok style on-screen #subtitles with #ffmpeg and ASS alone. Not perfect (no nice rounded boxes) and a tad more tedious than it ought to be.
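    For anyone trying the same, the burn-in step looks roughly like this (filenames and encoder settings are placeholders):

    # Hard-burn the ASS subtitles into the video track.
    ffmpeg -i clip.mp4 -vf "ass=captions.ass" -c:v libx264 -crf 20 -c:a copy subtitled.mp4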

    #OpenSource #libass

    PS: Had to mute the video because #mastodon, for some inexplicable reason, does not support mixing audible video and image attachments (in the web interface)

  31. #Linux Weekly Roundup for March 22nd, 2026: #GNOME 50, #FFmpeg 8.1, #Blender 5.1, #KiCad 10.0, #OpenShot 3.5, #KDE Plasma 6.6.3, #antiX 26, Emmabuntüs #Debian Edition 6 1.01, #PipeWire 1.6.2, #Mageia 10 beta, #Fedora Asahi Remix 43, #SparkyLinux 2026.03, #GStreamer 1.30, new Linux computers, and more 9to5linux.com/9to5linux-weekly

    #OpenSource #FOSS #GNU