
🔎 60% of Top Search Engines Are Incorrect

AI Said Do It Yourself


Plus: ❌ AI: Do it for you. ✅ AI: Do it yourself.

Read time: 5 minutes

I know how unpleasant it feels when someone refuses to help you. But if an AI chatbot tells you "no" and suggests you do it yourself, how would that make you feel?

IN PARTNERSHIP WITH THE MOTLEY FOOL

Fortune Favors The Bold

Ever wish you could turn back time and invest in Amazon's early days? Well, buckle up because the AI revolution is offering a second chance.

In The Motley Fool's latest report, dive into the world of AI-powered innovation. Discover why experts are calling it "the rocket fuel of AI" and predicting a market cap 41 times larger than Amazon's.

Don't let past regrets hold you back. Take charge of your future and capitalize on the AI wave with The Motley Fool's exclusive report.

Whether it's AI or Amazon, fortune favors the bold.

AI INSIGHTS

🤖 AI Search Engines Give Incorrect Answers at an Alarming 60% Rate


Generative AI search tools are becoming more popular, but they have a major issue: poor citation of news sources. While tools like ChatGPT and Perplexity provide quick answers, they often fail to properly credit the original sources of news content.

Key Findings

A recent study by the Tow Center for Digital Journalism tested eight AI search tools and found:

  • 60% of responses were incorrect. Some tools, like Grok 3, answered 94% of queries wrong.

  • Premium models like Perplexity Pro were in some ways worse than free versions, giving confidently incorrect answers more often.

  • Misattribution: AI tools often cited syndicated versions of articles or fabricated URLs, denying original publishers proper credit and traffic.

Issues with Crawler Access

  • AI tools ignored publisher restrictions (e.g., robots.txt), using content they were supposed to avoid (see the compliance-check sketch after this list).

  • Fabricated citations: Tools often misattributed articles, damaging the credibility of news sources.
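To make the robots.txt point concrete: a well-behaved crawler checks a publisher's robots.txt before fetching a page and skips anything the publisher has disallowed. The sketch below uses Python's standard urllib.robotparser for that check; the publisher domain, article path, and the GPTBot user-agent string are illustrative assumptions, not details taken from the Tow Center study.

```python
# Minimal sketch of the check a compliant crawler would run before using an article.
# The publisher domain, article path, and user-agent string are illustrative only.
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://news-site.example/robots.txt"           # hypothetical publisher
ARTICLE_URL = "https://news-site.example/2025/03/some-story"  # hypothetical article
CRAWLER_UA = "GPTBot"                                         # example AI crawler user-agent

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse the publisher's robots.txt rules

if parser.can_fetch(CRAWLER_UA, ARTICLE_URL):
    print("Allowed: the publisher permits this crawler to fetch the page.")
else:
    # The Tow Center findings suggest some AI tools skip this check entirely.
    print("Disallowed: the publisher has opted this crawler out; skip the page.")
```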

Why It Matters

  1. Publishers lose control over their content, which is used without proper permission.

  2. Misinformation risk increases as users are misled by wrong citations.

  3. Revenue loss for publishers who miss out on web traffic and ad income.

The Future of AI in News

To be trustworthy, AI search tools need to improve citation practices:

  • Transparency: Users must know where information comes from.

  • Publisher rights: AI must respect publishers' wishes about using their content.

Conclusion

As AI search tools continue to evolve, there's hope for improvement. However, the current state shows serious flaws in how these tools handle news citations. If we are to trust AI with our information, these issues must be addressed. For now, it's essential for users to be cautious about the accuracy of AI-generated information and for publishers to fight for their rights to proper attribution.

Ultimately, AI must evolve to respect the integrity of news content and improve citation practices to ensure that the information we receive is reliable and sourced properly.

🎁 Today's Trivia - Vote, Learn & Win!

Get a 3-month membership at AI Fire Academy (500+ AI Workflows, AI Tutorials, AI Case Studies) just by answering the poll.

Which AI agent reached full autonomy for independent decision-making today?


PRESENTED BY BELAY

Accomplish More. Juggle Less.

When you love what you do, it can be easy to take on more – more tasks, more deadlines, more hours – but before you know it, you don't have time to do what you loved in the beginning. Don't just do more – do more of what you do best.

BELAY's flexible staffing solutions leverage industry experience with AI systems to increase productivity without sacrificing quality. You can accomplish more and juggle less with our exceptional U.S.-based Virtual Assistants, Accounting Professionals, and Marketing Assistants. Learn how with our free ebook, Delegate to Elevate, and leave the more to BELAY.

TODAY IN AI

AI HIGHLIGHTS

👨‍💻 The CEO of Anthropic is worried about espionage, particularly by Chinese spies targeting AI companies in the U.S. He believes these spies are after valuable "algorithmic secrets" that could be worth up to $100 million, hidden in just a few lines of code.

🔒 OpenAI has released a policy proposal describing the Chinese AI lab DeepSeek as "state-subsidized" and "state-controlled," and suggesting that the U.S. government consider banning models from DeepSeek and other similar People's Republic of China (PRC)-supported operations as part of the "AI Action Plan."

🔑 Anthropic has made several updates to the Claude 3.7 Sonnet API to help developers increase throughput and reduce token usage by up to 70%, with early users reporting an average reduction of 14%.

🚀 X user @EHuanglu shared eight cool examples of what people have created with the viral AI agent Manus. These include an Apple-style website and an app that helps you prep for presentations on the go.

🔍 A paper generated by Sakana AI's The AI Scientist-v2 passed the peer-review process at an ICLR 2025 workshop. This marks the first time a fully AI-generated paper has passed the same peer-review process that human scientists go through.

🌐 Perplexity has launched a new Model Context Protocol (MCP) server for its Sonar model, which lets Claude access real-time web search features.

⚖️ The double-edged sword of AI development: while it has the potential to bring great advancements, it also carries significant risks that are hard to control. This has led AI leaders to experience their own "Oppenheimer moment," a reference to a pivotal scene in the Oppenheimer film, where scientists realize they have lost control over their creation.

💰 Daily AI Fundraising: Lila Sciences has raised $200 million in seed funding to develop an AI platform aimed at accelerating life sciences research. Their goal is to create "scientific superintelligence" to design and run experiments faster than traditional methods.

AI SOURCES FROM AI FIRE


NEW EMPOWERED AI TOOLS

  1. 🌐 Same.dev clones websites with pixel-perfect accuracy including Nike and Apple TV

  2. 📊 Eraser creates live auto-updating diagrams from your codebase with AI

  3. 🎬 Marey by Moonvalley is an AI video model for filmmakers, trained on licensed data

  4. 💥 Mirage generates realistic, high-converting ads with animated body language and micro-expressions (UGC-style content)

  5. 🔄 Cuckoo is a real-time AI translator for global sales, marketing, and support

AI QUICK HITS

  1. 🚀 DeepSeek Plans to Release R2 as China Goes All In (Link)

  2. 🤖 Sesame's Maya Takes a Leap with Their New AI Model (Link)

  3. 💻 AI Coding Assistant Refuses to Write Code and Tells You to Learn Programming Instead (Link)

  4. 📊 ChatGPT Now Has Advanced Data Analysis Powered by o1 and o3-mini Models (Link)

  5. 🎨 Gemini 2.0 Flash Can Now Edit Any Image with Just Natural Language (Link)

AI CHART


The 2025 Student Generative AI Survey shows a big jump in the use of AI tools among undergraduate students.

Key Findings:

  • 92% of students are using AI tools now, compared to 66% in 2024. Most students, 88%, have used generative AI for their assessments, a significant rise from 53% last year.

  • Students mainly use AI to save time (51%) and improve the quality of their work (50%). It's clear AI is helping students get tasks done faster and better.

  • 80% of students say their institutions have a clear AI policy, and 42% believe their professors are well-prepared to support AI use, up from just 18% in 2024.

  • However, there's still a digital divide. Men use AI more than women, and wealthier students have better access to premium AI tools.

  • Even though students want more support, only 36% have received training to develop their AI skills, and 53% feel their institutions should provide AI tools.

Conclusion:

With AI becoming a bigger part of education, it's important for schools to review their policies, offer better training, and ensure all students have equal access to these tools to get ready for a future where AI is everywhere.

AI JOBS

  • Robinhood: Senior Software Engineer, AI Infrastructure (Link)

  • Oracle: Principal Applied Scientist (OCI/GEN AI) (Link)

  • Google: Product Manager, Gen AI Agents, Google Cloud (Link)

  • Tenstorrent: FPGA Prototyping Engineer - High-Performance CPUs for AI/ML (Link)

We read your emails, comments, and poll replies daily

How would you rate today's newsletter?

Your feedback helps us create the best newsletter possible


Hit reply and say Hello – we'd love to hear from you!

Like what you're reading? Forward it to friends, and they can sign up here.

Cheers,
The AI Fire Team

*Disclaimer: AI Fire is a news publisher. The opinions in this content are from the authors or paid advertisers. The information shared is for general purposes only and is not financial advice. It should not be seen as an offer to buy or sell investments. AI Fire does not guarantee the accuracy of the information. You should do your own research and talk to a financial adviser before making any investments. The publisher and its affiliates are not responsible for any losses from using this information.
