👩🏻‍🍳 GPT-4.1 Ate Sonnet 3.7 (Left No Crumbs)
Google’s 601 Real Gen AI Use Cases

Read time: 5 minutes
Google just published 601 real AI use cases across every major industry. If you’ve been wondering where AI is actually being used (beyond the hype), this is it. But that’s just one headline: from dolphin-language models to OpenAI dropping a model that can read more than War and Peace in a single prompt, this week’s AI updates are seriously next level!
What’s on FIRE 🔥
AI INSIGHTS
OpenAI just dropped a new family of models called GPT-4.1. Yes, “4.1” - as if the company’s nomenclature wasn’t confusing enough already. The lineup includes GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all available via the OpenAI API (not ChatGPT).
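Since the models are API-only for now, here’s a minimal sketch of what calling GPT-4.1 looks like with OpenAI’s official Python SDK. The model names are from OpenAI’s announcement; the prompt and setup (an OPENAI_API_KEY environment variable) are just placeholder assumptions:

```python
# Minimal sketch: calling GPT-4.1 via the OpenAI API (not ChatGPT).
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # or "gpt-4.1-mini" / "gpt-4.1-nano" for cheaper tiers
    messages=[{"role": "user", "content": "Review this function for bugs: ..."}],
)
print(response.choices[0].message.content)
```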
These models come with a 1 million token context window — that’s roughly 750,000 words in one go. To put it in perspective, that’s longer than “War and Peace”. And surprisingly, it just cooked: GPT-4.1 outperformed Claude Sonnet 3.7 in head-to-head code reviews.
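Where does the 750,000-word figure come from? A quick back-of-the-envelope check, using the common rule of thumb of roughly 0.75 English words per token (the exact ratio varies by language and text):

```python
# Rough sanity check of the "1M tokens ≈ 750,000 words" claim.
WORDS_PER_TOKEN = 0.75  # common heuristic for English; varies by text

context_tokens = 1_000_000
approx_words = int(context_tokens * WORDS_PER_TOKEN)

print(f"~{approx_words:,} words")  # ~750,000 words
# "War and Peace" runs roughly 560,000-590,000 words in English translation,
# so the whole novel fits in one prompt with room to spare.
```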
What’s Actually Quite Interesting Here?
GPT-4.1 lands just as the AI coding race heats up. Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s V3 have all posted strong scores on coding benchmarks, and Gemini 2.5 Pro also supports a 1 million-token context window.
CFO Sarah Friar said OpenAI’s long-term vision is to create an “agentic software engineer” that can program entire apps end-to-end.
Highlights You Don’t Want to Miss:
Coding beast mode unlocked: GPT-4.1 hits 54.6% on SWE-bench Verified, up from GPT-4o’s 33.2% and even beats GPT-4.5.
Inference-first, agent-ready: GPT-4.1’s new mini and nano models aren’t just lighter - mini is 83% cheaper than GPT-4o with nearly half the latency, and nano is the fastest, cheapest option of the lineup.
Visual reasoning got a glow-up: It scores 72% on Video-MME (long-form video without subtitles), 75% on MMMU, and 57% on chart-heavy CharXiv tasks. That’s big for anyone working with charts, diagrams, or scientific data.
Instruction following finally feels natural: GPT-4.1 outperforms GPT-4o and understands “do this, not that” instructions way better.
Latency? Solved: Nano returns its first token in under 5 seconds on 128k-token prompts. That’s basically instant. And the prompt caching discount is now 75%.
Why It Matters: Honestly, GPT-4.1 handles structure well, doesn’t lose the plot in long chats, and with the right prompts, it writes in a pretty natural flow. It’s not perfect - you’ll still hit filters if you push it - but for casual creative work or building agents, it just works well.
Do you think Google will drop a new model this week too? Rumor has it Google might respond this week with Gemini 2.5 Flash.
TODAY IN AI
AI HIGHLIGHTS
🐬 Google DeepMind is training an AI called DolphinGemma to talk to dolphins and decipher their language.
📊 Google Cloud has compiled 601 real-world generative AI use cases across 11 major industries using Vertex AI, Gemini, and Google Workspace AI agents - basically unlimited business ideas.
🧠 Meta will resume using public content from adult users in the 27 EU countries to train its AI models - that includes your public posts, comments, and interactions with Meta AI.
🏭 Nvidia is now mass-producing AI supercomputers entirely in the U.S., a move expected to create hundreds of thousands of AI jobs. Blackwell chips are in production, with manufacturing coming soon to Texas.
🐭 Stanford and NVIDIA just created an AI model to generate fully animated, original Tom & Jerry-style videos up to 1 minute long from just a single line of text. But “is it shitty?”
🦯 Scientists built an AI-powered wearable system that helps blind people navigate their surroundings - a partial replacement for the eyes, designed to stay as unobtrusive as possible.
💰 AI Daily Fundraising: Alphabet and Nvidia just backed SSI, the AI startup co-founded by OpenAI’s ex-chief scientist, at a $32B valuation. Interestingly, SSI mainly uses Google’s TPUs rather than Nvidia’s GPUs to build its advanced AI systems.
AI SOURCES FROM AI FIRE
IN PARTNERSHIP WITH SPACEBAR STUDIOS
GTM Acceleration for B2B SaaS & AI
If you’re a B2B SaaS or AI company with $2M–$50M ARR focused on enterprise deals, Spacebar Studios is your GTM Acceleration partner to help you scale.
What We Do:
Custom Growth Strategies – We don’t do cookie-cutter. Every plan is built from the ground up.
Hands-On Execution – We act as your in-house team to tackle demand gen, marketing ops, and more.
Month-to-Month Flexibility – No long-term lock-ins. We move quickly and adapt as you grow.
Quarterly KPIs – Clear benchmarks, real accountability.
Ready to accelerate?
Book a call and discover how our GTM Acceleration approach can power your next stage of growth.

NEW EMPOWERED AI TOOLS
🦾 Forget Assistants. moxby* Agents Actually Do the Work. Get Early Access.
👨‍💻 Agentforce Wrote 20% of Salesforce’s Code in the Last 30 Days.
🔊 Nova Sonic Understands How You Speak & Responds in Real-Time.
🧠 ALTAR Organizes Everything You Save into a Smart Knowledge Base.
🚀 Nily AI for Marketers with 20+ AI Models for Dynamic Social Media Features.
* indicates a promoted tool, if any

AI QUICK HITS
🐉 DeepSeek’s low-cost AI model is shaking up China’s tech scene.
💡 An AI accelerator that runs on light instead of electricity could change AI’s future.
🛑 Stop humanizing AI: even advanced models like GPT-4.5 and Claude are not truly intelligent.
💸 Amazon and Alphabet are going all in on AI, with a combined $175 billion in planned 2025 spending.
🎮 The Gemini vs. Claude debate over AI benchmarking has now reached Pokémon.
AI CHART
AI is booming, but no one really knows how much it's costing our planet. Major AI companies like OpenAI, Anthropic, xAI, and Google have not disclosed specific data on the energy consumption of their models during training and inference phases.
AI Uses Significant Amounts of Energy and Water:
AI systems consume substantial amounts of electricity and water, both during the initial training of models and their ongoing use (inference).
=> A typical AI data center consumes as much electricity as 100,000 homes. The carbon emissions associated with training AI models have also escalated over time:
Training GPT-4 produced an estimated 5,184 tons of CO₂.
Training Llama 3.1 405B resulted in approximately 8,930 tons of CO₂ emissions.
Historically, the energy used to train AI models exceeded that used during inference. However, as AI becomes more integrated into daily applications, the energy consumed during inference is growing rapidly and may surpass training energy consumption.
But without hard numbers, it's tough to tell if we're building the future or just shifting the damage.
Are we flying blind on AI’s environmental cost?
AI JOBS
Grammarly: Software Engineer, Machine Learning
Hyundai America Technical Center: ADS Machine Learning Data Scientist
Snap: Principal Machine Learning Engineer, Generative AI for Ads
We read your emails, comments, and poll replies daily
How would you rate today’s newsletter? Your feedback helps us create the best newsletter possible.
Hit reply and say Hello – we'd love to hear from you!
Like what you're reading? Forward it to friends, and they can sign up here.
Cheers,
The AI Fire Team