home.social

#deepfakedetection — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #deepfakedetection, aggregated by home.social.

  1. TechSpot: AI images are getting harder to spot, but physics still gives them away if you know where to look. “A study published in the journal Science explains that while modern image generators are rapidly improving, the models behind them remain fundamentally ignorant of how light and geometry work in the real world. Measuring simple details like reflections or shadows can still give away a […]

    https://rbfirehose.com/2026/05/12/techspot-ai-images-are-getting-harder-to-spot-but-physics-still-gives-them-away-if-you-know-where-to-look/
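The physics cue the TechSpot item describes can be illustrated with a toy check: under a single distant light source, shadows across a scene should point in roughly the same direction, so strongly divergent shadow angles hint at a composite or generated image. This is only a 2D sketch; the function name, input format, and tolerance are illustrative, not from the article.

```python
import math

def shadow_directions_consistent(pairs, tol_degrees=10.0):
    """Each pair is ((base_x, base_y), (tip_x, tip_y)): an object's base
    and the tip of its cast shadow. Under one distant light source, the
    shadow directions should be roughly parallel."""
    angles = [math.atan2(ty - by, tx - bx) for (bx, by), (tx, ty) in pairs]
    ref = angles[0]

    def diff(a):
        # Signed angular difference from the reference, in (-180, 180] degrees.
        return math.degrees((a - ref + math.pi) % (2 * math.pi) - math.pi)

    return all(abs(diff(a)) <= tol_degrees for a in angles)

# Two shadows pointing the same way pass; one rotated 90 degrees fails.
same_way = shadow_directions_consistent([((0, 0), (1, 1)), ((5, 0), (6, 1))])
rotated = shadow_directions_consistent([((0, 0), (1, 1)), ((5, 0), (4, 1))])
```

Real forensic tools work in 3D with estimated light positions and uncertainty; this sketch only conveys the underlying consistency idea.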
  6. Science: Reality check. “[Hany] Farid, a specialist at the University of California (UC), Berkeley, is one of the world’s leading experts in determining whether a photo or video has been manipulated. Since helping to found the field of digital forensics more than 20 years ago, he has kept pace with massive technological change.”

    https://rbfirehose.com/2026/05/07/science-reality-check/
  11. TechCrunch: YouTube expands its AI likeness detection technology to celebrities. “YouTube is expanding its new ‘likeness detection’ technology, which identifies AI-generated content, such as deepfakes, to people within the entertainment industry, the company announced on Tuesday.”

    https://rbfirehose.com/2026/04/22/techcrunch-youtube-expands-its-ai-likeness-detection-technology-to-celebrities/
  16. Darren Chaker (darren-chaker) explores AI-forensics: detecting deepfakes, algorithmic bias & digital evidence integrity. Combining First Amendment expertise with OSCP/EnCE certifications to protect constitutional rights in the AI era. Counter-forensics meets machine learning. darrenchaker.com #AIForensics #DigitalRights #DeepfakeDetection #CyberSecurity #FirstAmendment #darren-chaker-court-records. 😍

  18. Tubefilter: A new platform “by creators, for creators” will root out AI deepfakes. “To wage war against the world of deepfakes, Zander Small has co-founded FanLock. That’s the name of an independent platform that will help creators identify, manage, and crack down on AI deepfakes across more than four million websites.”

    https://rbfirehose.com/2026/03/03/tubefilter-a-new-platform-by-creators-for-creators-will-root-out-ai-deepfakes/
  22. 🗳️ Deepfakes aren't just a cybersecurity problem — they're a democratic one.

    Biometric liveness detection and injection attack prevention aren't just technical challenges — they're civic imperatives.

    🔗 provadivita.com/biometric-inje

    #DeepfakeDetection #ElectionSecurity #BiometricLiveness #AIDisinformation #DigitalTrust #IdentitySecurity #CyberSecurity #FightingFakes

  23. The UK is moving toward mandatory proactive detection of nonconsensual intimate images.

    Under proposals backed by Keir Starmer, platforms must:
    • Remove flagged content within 48 hours
    • Prevent reuploads using hash matching
    • Deploy proactive detection “at source”
    • Face fines up to 10% of global revenue

    Regulator Ofcom is accelerating its decision on requiring technical enforcement mechanisms.
    Technical considerations:
    - Hash collision and false-positive risks
    - Cross-platform hash database coordination
    - Encryption vs scanning tradeoffs
    - Abuse-report automation workflows
    - AI-generated image detection accuracy
    Is mandatory proactive scanning the future of online content governance?

    Source: therecord.media/united-kingdom

    Drop your technical analysis below.

    Follow @technadu for advanced cybersecurity and policy reporting.

    #Infosec #DetectionEngineering #AIsecurity #HashMatching #ContentModeration #DigitalForensics #CyberPolicy #OnlineSafety #DeepfakeDetection #PrivacyEngineering #ThreatModeling #SecurityArchitecture
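The hash-matching and collision points above can be made concrete with a toy perceptual hash: a reupload that survives mild re-encoding keeps a nearby hash, matched under a Hamming-distance threshold, and that threshold is exactly where false positives (collisions) creep in. A minimal sketch, assuming tiny grayscale images given as nested lists; the function names and threshold are illustrative, not any platform's actual scheme.

```python
def average_hash(pixels):
    """Simple average-hash: one bit per pixel, set if the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    # Number of bit positions where the two hashes differ.
    return sum(a != b for a, b in zip(h1, h2))

def is_reupload(candidate, known_hashes, max_distance=2):
    """Flag a candidate if its hash is within max_distance bits of any
    known flagged hash. Raising the threshold catches more edited
    reuploads but raises the collision / false-positive risk."""
    h = average_hash(candidate)
    return any(hamming_distance(h, k) <= max_distance for k in known_hashes)

# A lightly altered copy matches; an unrelated image does not.
flagged = [average_hash([[10, 200], [220, 30]])]
near_copy = is_reupload([[12, 198], [215, 35]], flagged)
unrelated = is_reupload([[200, 10], [30, 220]], flagged)
```

Production systems use robust perceptual hashes (e.g. PDQ-style schemes) over larger grids and shared cross-platform databases; the tradeoffs listed in the post apply to those the same way they apply to this sketch.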

  27. 🚀 CheckHC officially launches on Product Hunt!

    After months of development, CheckHC is ready:
    ✅ Deepfake detection
    ✅ Blockchain registration
    ✅ Permanent storage for 200+ years, or GDPR-compliant retention
    ✅ C2PA integration

    🎁 Get 3 free AI analyses + 40 credits on sign-up!

    #CheckHC #ProductHunt #DeepfakeDetection #Blockchain #AI #Cybersecurity #C2PA #Startup #Tech #PhátHiệnDeepfake #AnToànMạng

    reddit.com/r/SideProject/comme

  29. 🚀 LAUNCHED! CheckHC is officially live on Product Hunt!

    After months of development, CheckHC proudly launches with:
    ✅ Deepfake detection
    ✅ Blockchain registration
    ✅ Permanent storage for 200+ years, or GDPR-compliant retention
    ✅ C2PA integration

    🎁 Get 3 free AI analyses + 40 credits on sign-up!

    #ProductHunt #CheckHC #DeepfakeDetection #Blockchain #AI #C2PA #LuuTruVinhCu #PhatHienDeepfake #CongNghe #Startup #Innovation

    reddit.com/r/SideProject/comme

  30. We just launched CheckHC, an AI deepfake-detection tool with content authentication on the Solana blockchain. It helps creators prove the authenticity of original images, storing evidence for 200+ years or with GDPR-compliant retention. Provides certificates and follows the C2PA standard. Sign up for a free trial: 3 AI analyses + 40 credits. #CheckHC #DeepfakeDetection #Blockchain #AI #Cybersecurity #XácThựcẢnh #PhátHiệnDeepfake #CôngNghệMới

    reddit.com/r/SideProject/comme

  32. AI Literacy Influencers Detect Fakes on TikTok, Challenge Authenticity Economy

    The Rise of AI Spotters: How TikTok’s Synthetic Surge is Reshaping Digital Influence In the fast-paced world of…
    #Economy #AIliteracy #deepfakedetection #influencereconomy #syntheticcontent #TikTokAItrend
    europesays.com/2617584/

  33. TikTok just rolled out an AI slider, letting you dial AI‑generated content up or down. The new tool also adds invisible watermarks and deep‑fake detection, while giving creators a free AI Editor Pro. Open‑source fans can see how the algorithm now lets you manage topics more transparently. Curious how this changes your feed? Read the full story. #TikTokAI #AISlider #DeepfakeDetection #AIEditorPro

    🔗 aidailypost.com/news/tiktok-ad

  35. South Korea’s Deputy Prime Minister Baek Kyung-hoon met with experts to discuss AI safety policy, as the government ramps up efforts to address risks from deepfakes and AGI, and advances plans for a national AI safety ecosystem.
    #YonhapInfomax
    #AISafety #BaekKyungHoon #DeepfakeDetection #AGIRisks #ScienceAndICT
    #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
    en.infomaxai.com/news/articleV
