
Google Bard Is Actually Good at One Thing

I’ve been spending a lot of time with Bard over the last couple of weeks, and I have to say — my initial take was wrong.

When Google first opened up access, I did what most people probably did: I threw marketing copy at it. “Write me an executive summary.” “Draft a product description.” “Give me a LinkedIn post about X.” And honestly? It was underwhelming. The output felt flat, generic, and weirdly corporate in that way only Google can manage. I walked away thinking this was a swing and a miss.

But then I started using it differently. And that changed everything.

Bard as a Research Tool Is Surprisingly Strong

Here’s what I’ve landed on after a few weeks of real use: Bard is pretty darn good at identifying accurate facts and data. Ask it a specific question — about a market, a regulation, a historical event, a technical concept — and you get solid, well-sourced answers. It’s not hallucinating wildly the way some models do. It’s pulling from what Google does best: organizing and retrieving the world’s information.

That shouldn’t be surprising, right? Google has been answering our questions for 25 years. But somehow I expected their AI product to be something OTHER than that — something more creative, more generative. And when it wasn’t, I dismissed it. That was the wrong move.

The right framing is this: Bard is a research assistant. A GOOD one. If you need to quickly validate a claim, pull together background on a topic, or get a clear explanation of something technical, it delivers. I’ve been asking it questions I’d normally spend 20 minutes Googling, and I’m getting better answers faster.

Where It Falls Short

Let’s be honest about the gaps, though. Bard is NOT good at generating executive summaries or marketing communications. I tested this pretty extensively, and the output just doesn’t have the punch or personality you need. It reads like... a very well-organized Wikipedia entry. Accurate? Sure. Compelling? Not really.

This is the opposite problem from what we see with GPT-4. OpenAI’s model is a GREAT writer — creative, adaptable, surprisingly good at matching tone and voice. It can draft emails, write copy, brainstorm positioning, even code. GPT-4 is good at being your wingman. It takes your rough idea and makes it better, sharper, more polished.

Bard takes your question and gives you the facts. Different tools, different strengths.

Why This Matters More Than People Think

I wrote a few weeks back about how AI chatbots are about to break the internet’s business model. This Bard development fits right into that thesis. Google isn’t trying to out-create OpenAI — they’re trying to protect their core business. Search. Answers. Information retrieval.

And if Bard gets good enough at answering questions directly — with accurate, sourced data — then the traditional Google search results page starts to feel redundant. Why click through ten blue links when the AI just TOLD you the answer?

This is Google playing defense and offense simultaneously. They’re not going to win the “write me a blog post” war. They don’t need to. They need to win the “give me the right answer” war. And from what I’ve seen, they’re making real progress.

The Two-Tool Workflow

Here’s what I’m actually doing in practice: I use Bard for research and fact-finding, and GPT-4 for writing and creative work. It sounds simple, but the combination is pretty powerful.

Need to understand a market before writing about it? Bard. Need to turn that understanding into a compelling piece of content? GPT-4. Need to validate a claim before publishing? Back to Bard.
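If you wanted to script that loop instead of bouncing between tabs, it would look roughly like the sketch below. To be clear, this is just an illustration of the split, not a real integration: ask_bard() and ask_gpt4() are hypothetical placeholders for however you actually reach each model.

```python
# A minimal sketch of the two-tool split described above. The helpers
# ask_bard() and ask_gpt4() are hypothetical stand-ins, not real client
# library calls; swap in whatever access you actually have to each model.

def ask_bard(question: str) -> str:
    """Hypothetical: send a research or fact-checking question to Bard."""
    return f"[Bard's sourced answer to: {question}]"  # stub for illustration

def ask_gpt4(prompt: str) -> str:
    """Hypothetical: send a writing or creative prompt to GPT-4."""
    return f"[GPT-4's draft for: {prompt}]"  # stub for illustration

def two_tool_workflow(topic: str) -> tuple[str, str]:
    # 1. Research: pull factual background with Bard.
    background = ask_bard(f"Give me a factual, sourced overview of {topic}.")

    # 2. Write: turn that background into a draft with GPT-4.
    draft = ask_gpt4(
        f"Using this background, write a compelling short piece on {topic}:\n{background}"
    )

    # 3. Validate: run the draft's claims back through Bard before publishing.
    fact_check = ask_bard(f"Check the factual claims in this draft:\n{draft}")
    return draft, fact_check

if __name__ == "__main__":
    draft, fact_check = two_tool_workflow("the enterprise AI infrastructure market")
    print(draft)
    print(fact_check)
```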

It’s early days, and I know this landscape is moving fast. We just saw the GTC keynote this week with Jensen Huang and Ilya Sutskever from OpenAI — the conversation about GPT-3 and GPT-4’s capabilities was genuinely fascinating. The pace of improvement on BOTH sides is staggering.

The Takeaway

Stop evaluating these tools against each other on the same criteria. That’s the mistake I made initially. Bard isn’t a worse GPT-4. It’s a different tool with a different sweet spot. Google built a research engine, not a creative engine — and honestly, that tracks perfectly with who they are as a company.

If you’ve written off Bard after a bad first experience with content generation, go back and try asking it questions instead. Real questions. Specific questions. I think you’ll be surprised at how good the answers are.

The AI landscape isn’t going to be winner-take-all. It’s going to be a toolkit. And knowing which tool to reach for? That’s the skill that’s actually going to matter.


Robertson Price

Serial entrepreneur who has built and exited multiple internet companies over 25 years — from search (iWon.com, $750M acquisition) to content networks (32M monthly visitors) to e-commerce (Rebates.com). He now builds enterprise AI infrastructure at Ragu.AI.