AI chatbots and search engines may be making information easy to access, but they are also making it harder for digital publishers to survive. These tools are built on content written by people, but they give almost nothing back to the original sources. Writers spend hours, sometimes days, researching and writing a piece, only for chatbots to summarize it in a few lines. Readers no longer click the article. They just ask a chatbot. And that is a big problem.
Running a news website is not easy. From hosting bills to paying writers, editors, and designers, it all costs money. But revenue depends mostly on visits and ad clicks. If readers stop clicking through to articles and rely on AI-generated summaries instead, traffic drops, sponsorships dry up, and ad clicks fall. Publishers end up with neither revenue nor recognition.
As someone who has watched traffic tank on well-researched stories, I can tell you: this is not just about clicks. It is about survival.
While AI chatbots are taking away readers, Google is not far behind. For years, it has been using publishers’ content to build its own features, including featured snippets and AI Overviews. When someone searches for a question, Google often shows the answer right at the top. That answer comes from a news site or blog. But the user does not visit the site. They just read the answer and move on. That click is gone. The publisher gets nothing. No traffic. No ad revenue.
Google says it is helping users. But let us not forget, it is helping itself too. Every time it keeps a user on its own page, it shows more ads. It makes more money. The publisher who wrote that answer gets nothing in return.
Even Google’s new AI Overviews feature is trained on publisher content. It summarizes answers using information written by others, again without giving any value back. An analysis by Search Engine Land showed that AI Overviews may reduce clicks to websites, and some publishers have reported traffic drops since AI summaries rolled out. TechCrunch also reported how Google’s AI search features are killing traffic to publishers.
Google used to be the source of traffic. Now it is the competition.
Small publishers are the worst hit. They do not have paid subscriptions to fall back on. They depend entirely on traffic. When ChatGPT, Perplexity, and other tools scrape their articles and show the main points, people skip the actual story. As a result, even well-written, deeply researched content fails to perform.
Referral clicks from AI platforms like ChatGPT or Perplexity continue to grow, but remain tiny compared to what publishers lose from search engines. From May 2024 to June 2025, AI referrals climbed 770% to over 1.13 billion visits overall, with ChatGPT accounting for 80% of that traffic. Yet news publishers still lost roughly 64 million search visits, while gaining just 5.5 million from AI referrals in the same period. A growth from under 1 million ChatGPT referrals in early 2024 to 25 million in 2025 is real, but still a drop in the ocean compared to the hundreds of millions of lost search clicks.
In 2024 alone, the media industry shed nearly 15,000 jobs across newsrooms, digital outlets, and broadcast media in the US and UK, including at least 3,875 editorial positions.
This is not just a small publisher problem. Even big media houses are cutting jobs. BuzzFeed News shut down in 2023. Vox Media laid off 7% of its staff. Business Insider cut 8% in 2024. The Los Angeles Times cut 6% of its newsroom in 2025 after losing $50 million in 2024 alone. CNET let go of 15% of union staff amid acquisitions and cost cuts. IGN, part of Ziff Davis, announced layoffs impacting 12% of its editorial guild. Even The Washington Post has seen over 100 journalists exit via voluntary buyouts since late 2024, citing unclear direction under new leadership.
And I think you can easily guess the reasons behind these job cuts. Declining traffic, falling ad revenue, and the rise of platforms that “repurpose” content without credit or payment.
These AI search engines and chatbots are not just browsing websites. They are aggressively scraping them. Tools like Perplexity use automated bots that crawl websites without following the rules set by publishers. Even when websites block crawlers using robots.txt, these bots often ignore those settings. That is not just unethical, it is theft.
In fact, we also reported that Perplexity was scraping content even after being blocked. The Atlantic, Forbes, and others have raised alarms. Yet these AI companies continue to build on others’ work without permission or payment.
Some people are defending AI companies like Perplexity by saying, “Well, it’s just like a human browsing the web on my behalf.” But that’s not how the web works. There’s a big difference between a person visiting a website and an automated bot doing it at scale. Humans visit, read, and maybe click ads or sign up. Bots don’t. They extract the content in seconds, show it somewhere else, and leave. No credit. No traffic. No revenue. Just theft, wrapped in a shiny tech excuse.
If publishers have clearly said “no” by blocking bots using robots.txt or Cloudflare’s tools, then that should be respected. You can’t just bypass those protections and claim it’s for the user. If that logic were allowed, there would be no rules left.
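For readers unfamiliar with how publishers say "no", it happens in a plain-text file called robots.txt at the root of the site. A sketch of what such a file might look like is below; the user-agent names shown are the publicly documented crawler identifiers for these companies, but each publisher's actual file will differ:

```
# Block OpenAI's crawler
User-agent: GPTBot
Disallow: /

# Block Google's AI-training crawler (regular search indexing is unaffected)
User-agent: Google-Extended
Disallow: /

# Block Perplexity's crawler
User-agent: PerplexityBot
Disallow: /

# Everyone else may still crawl normally
User-agent: *
Allow: /
```

The whole point of the complaint above is that this file is a request, not an enforcement mechanism: a bot that chooses to ignore it, or that identifies itself with a fake user agent, sails right past it.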
Here, it is important to note that OpenAI follows the rules and doesn’t fake user agents. So why can’t Perplexity do the same? Why are we letting some AI startups rewrite the rules of the web just because they claim it’s the future?
This constant scraping also increases server load. Publishers pay for bandwidth and server resources, so when bots flood their sites with requests, costs go up. Not only are AI tools taking revenue, they are also adding to operational expenses. Since I own some websites, I can see firsthand how these AI chatbots put extra load on the server. And if you do not want to take my word for it, Business Insider has reported the same.
No one is saying that AI should be stopped. It can be helpful. But it must be fair. Writers and publishers need to be respected. Their work must not be taken without credit or compensation.
The New York Times has sued OpenAI and Microsoft for copyright infringement. Other publishers are preparing similar legal action. But legal battles take time, and many small publishers may not survive that long.
As users, we need to care about where our information comes from. AI-generated content can be helpful, but it should not come at the cost of real human creators. If the current trend continues, we will see fewer independent websites, less original reporting, and more AI-generated content that lacks depth, context, and accuracy.
Writers deserve fair credit for their work because they spend hours researching to produce valuable content. Publishers deserve traffic, visibility, and revenue for their effort. AI companies should start respecting content owners instead of silently scraping it. If we want a healthy and unbiased internet, we need to support ethical AI practices, not ones that exploit creators and leave nothing in return.
It is about protecting the entire ecosystem of publishers, the people, and platforms that still create valuable content for real audiences.
So, what’s the fix? Because just talking about AI eating traffic will not help.
There are real solutions. But they will only work if AI companies, publishers, and even readers take them seriously.
As readers, we can support publishers by visiting their websites directly, subscribing to their services, and sharing content with proper credit. These small actions matter. They help drive traffic, support revenue, and keep independent platforms alive. If we want quality content to exist tomorrow, we need to value the people creating it today.
Revenue-sharing is the most basic one. If tools like ChatGPT, Perplexity, or Google’s AI Overviews are using news content to answer questions, then the people who create that content deserve a share. Spotify pays artists for streams. Why can’t AI tools pay publishers for using their work?
Micro-payments could help. Even if it is just a few cents ($0.01 to $0.10) every time an article powers a chatbot reply, that adds up. Millions of queries could mean real money for newsrooms.
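Some quick back-of-the-envelope math makes the point. The query volume and per-use rates below are illustrative assumptions, not reported figures:

```python
# Back-of-the-envelope estimate of micro-payment revenue for a newsroom
# whose articles power chatbot replies. All inputs are hypothetical.

def monthly_micropayment_revenue(queries_per_month: int, rate_per_use: float) -> float:
    """Revenue in dollars if every qualifying query paid `rate_per_use`."""
    return queries_per_month * rate_per_use

# Assume a mid-size outlet's content powers 5 million chatbot replies a month.
low = monthly_micropayment_revenue(5_000_000, 0.01)   # at 1 cent per use
high = monthly_micropayment_revenue(5_000_000, 0.10)  # at 10 cents per use

print(f"${low:,.0f} - ${high:,.0f} per month")  # $50,000 - $500,000 per month
```

Even at the lowest rate in that range, the result is real money for a newsroom that currently receives nothing.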
A bigger step would be proper licensing deals. Like what Australia did with its News Media Bargaining Code. That forced Google and Meta to pay news outlets. Canada did the same, and it brought in over $100 million every year for publishers. Something like this should happen everywhere.
We also need transparency. AI bots should follow robots.txt rules, give proper credit, and scrape website content only if the publisher has given consent. Publishers should be able to decide if they want to allow AI bots or not.
And we need clear industry rules. AI companies should be held to a code of conduct. They should be required to mention sources, link back clearly, and actually send traffic to the people doing the hard work of reporting. Back in November 2024, we heard that the Indian IT Ministry is finalising a voluntary code of conduct for AI companies. But we are still waiting for anything fruitful.
But it is not just about tech platforms. Publishers also need to work together. They can form groups to stand together and demand fair deals.
It is really important to reiterate that this is not just about losing traffic or missing out on ad revenue. It is about the survival of publishers: the websites, blogs, forums, and independent creators that keep the internet alive. AI tools like ChatGPT, Perplexity, and Google’s AI Overviews cannot create original content. They do not research topics, test products, report from events, or verify facts. They simply pull information that already exists, and that content exists because real people took the time to create it.
If we keep letting these tools profit from publisher content without giving anything back, we are slowly killing the very source they rely on. No new articles, no updated guides, no fresh data, just recycled content with no one left to create more. Eventually, these tools will have nothing new to learn from.
People often forget that AI tools learn from what publishers put online: tech blogs, health sites, news portals, entertainment websites, tutorials, and reviews. Without these sources, AI would be empty. So if the current system keeps draining publishers while feeding tech giants, the entire ecosystem breaks. Once publishers stop creating, AI tools will not be able to replace them. And when that happens, where will users go for real, reliable, and up-to-date information?
If we do not fix this now, AI will keep killing the very thing it depends on. In the end, we all lose.