#robotstxt — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #robotstxt, aggregated by home.social.
-
Five non-obvious things I learned launching a movie social network: from a robots.txt trap to the 24-dimensional math of taste
For the past six months I've been working on VibeMuvik, a movie social network with reviews, debates, and synchronized film watching. It's one of those projects that seems "well, that can't be hard" until you start digging. This article is about the unexpected discoveries. Not "how I chose my stack" (boring), and not "a WebRTC tutorial" (there are plenty already). It covers five situations where I stumbled, found something interesting, and thought "this is worth writing up; it will help someone else." Let's go.
https://habr.com/ru/articles/1027876/
#robotstxt #SEO #WebRTC #Nextjs #IndexNow #sitemap #Googlebot #Cinema_DNA #синхронный_просмотр #рекомендательные_системы
-
The Pope’s Warnings About AI Were AI-Generated, a Detection Tool Claims
https://fed.brid.gy/r/https://www.wired.com/story/pope-tweets-ai-generated-pangram-chrome-extension/
-
#Development #Launches
Is Your Site Agent-Ready? · Scan your website for agent-friendly standards https://ilo.im/16c93a_____
#Website #AI #Agents #MCP #Commerce #Content #RobotsTxt #Sitemap #WebDev #Frontend
-
Only 7.4% of Fortune 500 have an llms.txt file, study finds: ProGEO.ai research reveals just 7.4% of Fortune 500 companies have implemented llms.txt, while 92.8% use robots.txt and 53.8% use JSON-LD for AI visibility. https://ppc.land/only-7-4-of-fortune-500-have-an-llms-txt-file-study-finds/ #Fortune500 #AI #llms #robotsTxt #JSONLD
-
#Development #Explainers
Inside Googlebot · How Google’s crawl system decides which content gets indexed https://ilo.im/16btho_____
#Business #Google #SearchEngine #SEO #Crawlers #Content #RobotsTxt #Development #WebDev #Frontend
-
Oh, this is #fun.
#Applebot - Apple's web crawler, used for various things - is ignoring robots.txt rules governing crawling of websites.
I have Applebot (and Applebot-Extended, which isn't really a crawler) in my robots.txt files, set to disallow all access. Has been that way for #yonks.
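The rules in question look something like this (a minimal sketch of the disallow-all groups described above; the actual files may carry more rules):
```txt
# robots.txt — deny Apple's crawler and its AI-training control token
User-agent: Applebot
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```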
And Applebot is consistently the highest-traffic crawler to my sites - at least of ones that actually bother to fetch robots.txt. Yesterday, for example, Applebot fetched robots.txt from one of my websites almost 800 times.
Yes, it's really Apple, not someone faking the user-agent identifier. It's coming from the networks that Apple says can be used to identify Applebot access. DNS matches, everything.
e.g. https://support.apple.com/en-ca/119829
So: legendary Apple software quality. Documented to do the right thing, but actually doing the wrong thing. And completely failing to cache content, fetching the same file 800 times a day when it hasn't changed in years.
Hey, Apple! Need a software engineer who's actually, you know, good at it? I'm available.
#Apple #AppleInc #TimApple #WebCrawler #RobotsTxt #quality #WeveHeardOfIt #qwality #AppleQwality #legendary #TwoHardThings #caching #fail #engineer #software #SoftwareEngineer
-
#Development #Findings
Markdown, llms.txt, and AI crawlers · Do Markdown and llms.txt matter for your website? https://ilo.im/16b5qb_____
#Business #SEO #SearchEngines #AI #Crawlers #Content #Website #Markdown #LlmsTxt #RobotsTxt
-
AI is already reading your site, but by what rules? LLMs.txt, robots.txt, and controlling agents
Just a couple of years ago the web lived by a simple, well-understood model: there are sites, there are search crawlers, there are users. The crawlers come, scan pages, and put them into an index, and then the familiar battle for rankings begins. For decades this logic defined how we build sites, tune SEO, and write robots.txt. With the arrival of LLM agents, that model has started to crack at the seams.
-
[Translation] The quiet death of robots.txt
For decades, robots.txt has governed the behavior of web crawlers. But today, as unscrupulous AI companies chase ever larger volumes of data, the web's basic social contract is starting to fall apart.

For three decades, a tiny text file kept the Internet from sliding into chaos. The file carried no particular legal or technical weight, and it wasn't even especially complicated. It embodies a handshake agreement among the Internet's pioneers that they would respect each other's wishes and build the Internet so that everyone benefited: a mini-constitution of the Internet, written in code.

The file is called robots.txt, and it usually lives at yourwebsite.com/robots.txt. It lets anyone who owns a site, whether a small cooking blog or a multinational corporation, tell the web what is and isn't allowed there. Which search engines may index your site? Which archival projects may download and preserve copies of a page? May competitors monitor your pages? You decide, and you announce it to the web. The system isn't perfect, but it works. Or at least it used to.

For decades, robots.txt was aimed mainly at search engines: site owners allowed scraping, and in return the engines promised to send users their way. AI has changed that equation. Companies around the world now use sites and their data to assemble enormous training datasets, building models and products that may never acknowledge the original sources. robots.txt runs on give and take, but a great many people have the impression that AI companies only like to take. So much money has been poured into AI, and the technology is moving so fast, that many site owners simply can't keep up. And the fundamental contract underlying robots.txt, and the web as a whole, may be losing its force too.
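Those per-crawler wishes are expressed as plain-text groups; an illustrative sketch (the bot names and paths are just examples):
```txt
# Welcome a search engine, refuse an AI-training bot, limit everyone else
User-agent: Googlebot
Disallow:

User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
```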
-
Generative AI, by @christianliebel and @yash-vekaria.bsky.social and others (@httparchive.org):
https://almanac.httparchive.org/en/2025/generative-ai
#webalmanac #studies #research #metrics #ai #robotstxt #llmstxt
-
Google Built Its Empire Scraping The Web. Now It’s Suing To Stop Others From Scraping Google
-
#RSL 1.0 instead of robots.txt: a new standard for Internet content | heise online https://www.heise.de/news/RSL-1-0-Standard-soll-Verwendung-von-Inhalten-regeln-11111422.html #searchengines #searchengine #ArtificialIntelligence #crawler #ReallySimpleLicensing #robotsTXT
-
How do you block the AI crawlers that plunder your site without asking permission?
https://fed.brid.gy/r/https://korben.info/bloquer-crawlers-ia-robots-txt-htaccess-nginx.html
-
The New York Times sues Perplexity for producing ‘verbatim’ copies of its work – The Verge
The NYT alleges Perplexity ‘unlawfully crawls, scrapes, copies, and distributes’ work from its website.
by Emma Roth, Dec 5, 2025, 7:42 AM PST
The New York Times has escalated its legal battle against the AI startup Perplexity, as it’s now suing the AI “answer engine” for allegedly producing and profiting from responses that are “verbatim or substantially similar copies” of the publication’s work.
The lawsuit, filed in a New York federal court on Friday, claims Perplexity “unlawfully crawls, scrapes, copies, and distributes” content from the NYT. It comes after the outlet’s repeated demands for Perplexity to stop using content from its website, as the NYT sent cease-and-desist notices to the AI startup last year and most recently in July, according to the lawsuit. The Chicago Tribune also filed a copyright lawsuit against Perplexity on Thursday.
The New York Times sued OpenAI for copyright infringement in December 2023, and later inked a deal with Amazon, bringing its content to products like Alexa.
Perplexity became the subject of several lawsuits after reporting from Forbes and Wired revealed that the startup had been skirting websites’ paywalls to provide AI-generated summaries — and in some cases, copies — of their work. The NYT makes similar accusations in its lawsuit, stating that Perplexity’s crawlers “have intentionally ignored or evaded technical content protection measures,” such as the robots.txt file, which indicates the parts of a website crawlers can access.
Perplexity attempted to smooth things over by launching a program to share ad revenue with publishers last year, which it later expanded to include its Comet web browser in August.
“By copying The Times’s copyrighted content and creating substitutive output derived from its works, obviating the need for users to visit The Times’s website or purchase its newspaper, Perplexity is misappropriating substantial subscription, advertising, licensing, and affiliate revenue opportunities that belong rightfully and exclusively to The Times,” the lawsuit states.
Continue/Read Original Article Here: The New York Times sues Perplexity for producing ‘verbatim’ copies of its work | The Verge
Tags: AI, artificial intelligence, Copyright, Crawlers, Distribution, Lawsuit, NYT Work, OpenAI, Perplexity, Robots.txt, Scraping, Sues, The New York Times, The Verge, Verbatim Copies
#AI #artificialIntelligence #Copyright #Crawlers #Distribution #Lawsuit #NYTWork #OpenAI #Perplexity #RobotsTxt #Scraping #Sues #TheNewYorkTimes #TheVerge #VerbatimCopies
-
#Development #Approaches
Rate-limiting requests with Nginx · An alternative approach to counter AI crawlers https://ilo.im/168axr_____
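The approach boils down to nginx's limit_req machinery; a minimal sketch (zone name, rates, and burst are placeholders to tune per site):
```nginx
# In the http {} block: a shared zone keyed by client IP, 10 requests/second
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Absorb short bursts; everything beyond gets HTTP 429
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
    }
}
```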
#RateLimiting #Nginx #WebServer #AI #Scrapers #RobotsTxt #DevOps #WebDev #Backend
-
Cloudflare Overhauls Web’s AI Rulebook with New Robots.txt ‘Content Signals’
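The policy piggybacks on robots.txt; based on the announced Content-Signal syntax, a declaration looks roughly like this (a sketch, not the full policy text Cloudflare serves):
```txt
# Content signals: allow search indexing, forbid AI training
Content-Signal: search=yes, ai-train=no

User-agent: *
Allow: /
```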
#AI #Cloudflare #RobotsTxt #DataScraping #Publishing #GenerativeAI
-
Cloudflare launches Content Signals Policy to fight AI crawlers and scrapers
https://web.brid.gy/r/https://nerds.xyz/2025/09/cloudflare-content-signals-policy-ai-crawlers/
-
#Business #Initiatives
AI’s free web scraping days may be over · Say hello to RSS’s younger, tougher brother https://ilo.im/166s9q_____
#Web #Publishing #Website #Blog #Content #AI #Crawlers #Payments #RSL #RSS #RobotsTxt
-
A new #licensingstandard, #ReallySimpleLicensing (#RSL), aims to allow #webpublishers to set terms for #AI companies using their content. Supported by major brands like Reddit and Yahoo, RSL builds upon the existing #robotstxt protocol, enabling #publishers to specify #licensing and #royaltyterms for #AItraining data. https://www.theverge.com/news/775072/rsl-standard-licensing-ai-publishing-reddit-yahoo-medium?eicker.news #tech #media #news
-
RSL is the missing layer for the AI era: set terms, get attribution, and get paid (per crawl or per inference). Open standard, collective leverage. If AI uses your work, it should respect your license. Time to take control.
https://hostvix.com/rsl-a-new-standard-to-make-ai-pay-for-the-content-it-consumes/
#RSL #ReallySimpleLicensing #AI #AIethics #AIsafety #AIdata #ContentRights #Licensing #OpenWeb #RobotsTxt #Publishers #Creators #Attribution #PayPerCrawl #PayPerInference #RSS #WebStandards #DigitalRights #CollectiveLicensing #Fastly
-
Semrush is one of the best-known SEO analysis tools on the market. It regularly crawls websites with its bot (SemrushBot) to collect and analyze data from your site: keywords, backlinks, rankings, and much more. Here are 5 effective, quick-to-implement methods for locking Semrush out of your website. 👇
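The gentlest of those methods is a robots.txt rule, which SemrushBot documents that it honors; server-side blocks in .htaccess or nginx follow the same user-agent matching idea:
```txt
# robots.txt — ask Semrush's crawler to stay away entirely
User-agent: SemrushBot
Disallow: /
```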
#SEO #semrush #botblocker #bots #website #websecurity #cybersecurity #wordpress #joomla #typo3 #nginx #robotstxt #htaccess
-
Perplexity ignores robots.txt: controversy over data scraping for AI training
The training of large language models relies on a wide variety of web data. Compliance with…
https://www.apfeltalk.de/magazin/news/perplexity-ignoriert-robots-txt-kontroverse-um-daten-scraping-fuer-ki-training/
#News #Apple #Applebot #Cloudflare #Cybersecurity #Datenanalyse #Datensicherheit #EthikInDerKI #KITraining #KünstlicheIntelligenz #OpenWeb #Perplexity #robotstxt #Sprachmodell #WebScraping #WebseitenBetreiber
-
#Business #Explainers
LLMS.txt isn’t robots.txt · What it is, why it matters, and how to use it https://ilo.im/165du0_____
#SEO #AI #LlmsTxt #RobotsTxt #SitemapXML #Content #Website #Development #WebDev #Frontend
-
#Business #Introductions
Meet LLMs.txt · A proposed standard for AI website content crawling https://ilo.im/16318s_____
#SEO #GEO #AI #Bots #Crawlers #LlmsTxt #RobotsTxt #Development #WebDev #Backend
-
Search Engine Land: Meet LLMs.txt, a proposed standard for AI website content crawling. “While many content creators are interested in the proposal’s potential merits, it also has detractors. But given the rapidly changing landscape for content produced in a world of artificial intelligence, llms.txt is certainly worth discussing.”
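Unlike robots.txt's allow/deny rules, the proposal (llmstxt.org) describes llms.txt as a Markdown index that points language models at a site's key content; a minimal sketch with placeholder URLs:
```markdown
# Example Site

> One-paragraph summary of what this site offers.

## Docs

- [Getting started](https://example.com/docs/start.md): setup guide
- [API reference](https://example.com/docs/api.md): endpoint details
```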
-
AI Crawlers Overwhelm Open-Source Projects, Forcing Developers to Block Entire Countries
#AI #Web #Robotstxt #AIScraping #OpenSource #Cybersecurity #DataScraping #Scraping #WebScraping
-
Protecting your blog from the dead-eyed #AI crawlers. You can experiment with specific robots.txt rules, and I also run a script in .htaccess; a sketch of that pattern follows. I think there are metadata properties you can declare, too. None of this stops your pages being crawled, but it may afford some legal protection (see the recent German LAION case). I'm doing a short blog post on this soon.
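One common .htaccess pattern for that (the bot names are examples, not an exhaustive list):
```apache
# Refuse requests whose User-Agent matches known AI crawlers
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (GPTBot|ClaudeBot|CCBot) [NC]
RewriteRule .* - [F,L]
```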
-
Hey, #webmasters ... just so you know.
#Facebook's new-ish "meta-externalagent" #webcrawler, which they document is for stealing data for their Grand Theft Autocomplete (cough #AI cough), is ignoring robots.txt on my websites.
https://developers.facebook.com/docs/sharing/webmasters/web-crawlers
Is anyone surprised?
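For the record, the opt-out Facebook documents (and apparently ignores) is an ordinary robots.txt group:
```txt
User-agent: meta-externalagent
Disallow: /
```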
-
#Development #Reports
Google clarified support for robots.txt fields · Directives such as ‘crawl-delay’ are not supported https://ilo.im/160cps_____
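Concretely: in a group like the sketch below, Google reads only user-agent, allow, disallow, and sitemap, and skips the rest (some other crawlers do honor crawl-delay):
```txt
User-agent: *
# Ignored by Googlebot:
Crawl-delay: 10
# Honored by Googlebot:
Disallow: /search/
Sitemap: https://example.com/sitemap.xml
```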
#Business #Google #SearchEngine #SEO #TechnicalSEO #RobotsTxt #WebDev #Backend
-
#Business #Explainers
Scary Google page indexing spikes · What if Search Console reports them for your site? https://ilo.im/15z9u6_____
#Google #SearchIndex #SearchConsole #SEO #TechnicalSEO #SiteQuality #Website #RobotsTxt #Development #Backend
-
#Development #Techniques
Blocking bots with Nginx · A way to effectively ban AI bots from a website https://ilo.im/15z7hp_____
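The usual nginx pattern matches on the User-Agent header and refuses the request outright; a sketch (the bot names are examples):
```nginx
# In the http {} block: classify AI-bot user agents
map $http_user_agent $is_ai_bot {
    default      0;
    ~*GPTBot     1;
    ~*CCBot      1;
    ~*Bytespider 1;
}

server {
    listen 80;

    location / {
        # Turn matching requests away at the door
        if ($is_ai_bot) { return 403; }
    }
}
```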
#Ai #AiBot #UserAgent #RobotsTxt #HtAccess #Website #WebServer #Nginx #WebDev #Backend
-
#Development #Techniques
Blockin’ bots · How to block AI bots effectively with your site’s .htaccess https://ilo.im/15yjxn_____
#Ai #AiModel #GenerativeAI #Bot #WebDev #Frontend #Backend #Server #RobotsTxt #HtAccess
-
Robots.txt, OpenAI’s GPTBot, Common Crawl’s CCBot: How to block AI crawlers from gathering text and images from your website: https://katharinabrunner.de/2023/08/robots-txt-openais-gptbot-common-crawls-ccbot-how-to-block-ai-crawlers-from-gathering-text-and-images-from-your-website/
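The gist of the linked how-to as a robots.txt sketch, using the crawler tokens OpenAI and Common Crawl document:
```txt
# Opt out of OpenAI's and Common Crawl's crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```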
#ai #openAI #crawler #commoncrawl #ccbot #GPTBot #robotstxt #wordpress
-
Sites scramble to block ChatGPT web crawler after instructions emerge
-
#Development #Launches
HTML & CSS code generators · A fabulous collection of code generators for front-end developers https://ilo.im/13azbo_____
#WebDevelopment #WebDev #Frontend #Generator #HTML #CSS #Code #Layout #FormElement #TextElement #Formatter #Filter #Separator #Background #Placeholder #Beautifier #RobotsTxt