The 5 Hidden Rules Behind Successful AI Products | Chris Pedregal
The secrets behind a consumer AI app with 70%+ weekly retention that users can't stop raving about
Dear subscribers,
Today, I'm excited to share a new episode with Chris Pedregal.
Chris is the co-founder and CEO of Granola, a $20M AI meeting notes app that I can’t live without. Granola’s 70%+ weekly user retention is almost unheard of for consumer AI. Chris shared his five hidden rules for building successful AI products in our interview. I think it’s a must-watch for any product builder.
By the way, if you never want to take meeting notes again, you owe it to yourself to give Granola a try (it’s free for the first 25 meetings):
For my paid subscribers, Chris is also generously offering three months of unlimited meetings with Granola (I’ll share the coupon in the next few days). Upgrade to paid today to unlock this offer, my AI prompt library, and $100 off my new AI course.
Watch now on YouTube, Apple, and Spotify.
Chris and I talked about:
(00:00) The #1 mistake you can make when building AI products
(01:35) The difference between big tech PM and startup founder
(06:22) Why Chris decided to use AI to solve meetings
(09:26) Don't solve problems that won't be problems soon
(12:44) How can you predict what LLMs can do in the future
(19:09) Why context is king for great AI products
(23:56) How to give your AI product a soul
(28:43) When to listen to user feedback and when to trust your gut
(31:39) Closing advice for people who want to build AI apps
How Granola achieved 70%+ weekly retention, which is almost unheard of for AI products
Welcome Chris! Since you’re both a founder and former Google PM, what do you think is the difference between the two jobs?
They’re fundamentally different sports.
At Google, a PM's work spans project management, planning, leadership, and design.
But the reality is that product design was an extremely small percentage of my time as a Google PM due to all the meetings and constraints.
I remember my days at Google being filled with back-to-back meetings. If I wanted to do deep thinking, I had to do it on weekends or evenings. It's sad when a PM has to do actual product work outside regular hours, but that's the reality of larger companies.
So why did you decide to tackle the meeting problem after leaving Google?
We were initially reluctant because there are so many established products in the meeting space (e.g., Zoom, Google Meet). But two things convinced us:
Meetings are a natural hook for user engagement. Meeting notifications are welcomed instead of seen as spam. Meetings are also perfect for building habits since they are already part of people's workdays.
AI excels at streamlining meetings. When we showed people a tool that could transform meeting transcripts into concise, useful notes using LLMs, their eyes lit up. Among LLMs' key strengths – code generation, search, and summarization – this ability to distill long transcripts resonated most powerfully with users.
Your 70% retention is impressive for an AI product. What makes Granola so sticky?
Yes, we’re proud that 70% of people return a week after installation.
The main reason is that Granola is simple and convenient to use.
There's no bot joining your meeting or special UI to open—just an app on your computer that looks like a notepad, like Apple Notes. You can open it when you want and close it when you don't. It integrates seamlessly into people's lives.
It sounds simple, but it took a lot of work. We had to build many features and cut them out to determine what was truly core to the product.
5 hidden rules for building successful AI products
Can you share some of your rules for building successful AI apps?
With AI, the underlying technology (the models) is evolving quickly. So:
Rule #1: Don't solve problems that won't be problems soon.
Your product has two types of problems: those that will be solved automatically by the next model release and those that will remain challenges no matter how smart the models become.
The easiest mistake is to solve problems that users are screaming about but that future models will naturally solve. For example, when we started Granola, the product could only handle short meetings because of model context window limitations.
Users constantly asked why it didn't work for longer meetings.
As a product person, it goes against every instinct to deny users something they're actively requesting. But, in AI, sometimes the best strategy is to focus on problems that will still matter even as the tech evolves.
Building complex chunking and reconciliation features would have been a waste of effort since newer models would handle longer meetings natively. The same thing happened with multi-language support, another problem that newer models solved.
How can you predict what future models will be capable of?
Look at what the state-of-the-art models can do today and assume they will soon be cheap and accessible. For example, everyone should prepare for a world where feeding live video into an LLM is cost-effective.
Trying to predict beyond that gets into sci-fi territory. It's much harder to imagine what the next generation of cutting-edge models will do.
Can you share another rule for building AI products?
Rule #2: Go narrow, go deep.
General-purpose tools like Claude and ChatGPT are surprisingly good at many tasks, so if you are building a startup, it needs to solve a problem 10x better. The only way to achieve that is by choosing a narrow use case and making that experience exceptional.
Interestingly, making a narrow use case great often involves work unrelated to AI.
For Granola, we had to build our own echo cancellation system to handle users with and without headphones. It had nothing to do with note-taking, but it was crucial for making the product feel seamless and effortless.
How did you build feedback loops to ensure you pick the right use case?
We ran a closed beta for a year, starting with just three users and growing to about a hundred people we were building with. Once you put the product in front of real people, it becomes obvious what basic things you need to do for them to trust AI to take their notes. The beta was focused on getting continuous feedback rather than mapping too far into the future.
Can you give us a peek under the hood at how Granola works?
When we started writing prompts for Granola, they were very instruction-based—"If this happens, write notes like this." We quickly realized that the real world is too nuanced and complex for binary instructions. You might ask it to "Include this type of detail” and “Don’t make the meeting notes too long,” but the model won’t easily resolve these conflicts.
So our mental model shifted to thinking of the LLM as an intern on their first day – a smart person with no context about how you do things.
Much of our work focuses on giving this "intern" the right context to do a great job. This includes things like:
Who you're meeting with
Their companies and roles
What they're likely optimizing for
Who you are and what you need
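The "intern with context" idea can be sketched as a simple prompt builder. This is purely an illustrative assumption on my part, not Granola's actual implementation; every field name and instruction below is hypothetical. The point is that the prompt supplies context (who's meeting, what the user cares about) rather than rigid if-then rules:

```python
# Hypothetical sketch of context-first prompting (not Granola's real code).
# Instead of binary instructions ("if X, write Y"), we hand the model the
# context an intern would need: attendees, the user's role, and their notes.

def build_notes_prompt(attendees, user_role, user_notes, transcript):
    """Assemble a context-rich prompt for generating meeting notes."""
    attendee_lines = "\n".join(
        f"- {a['name']}, {a['role']} at {a['company']}" for a in attendees
    )
    return (
        "You are helping write meeting notes.\n\n"
        f"Attendees:\n{attendee_lines}\n\n"
        f"The user is a {user_role}. Keep the details someone in that "
        "role would care about.\n\n"
        f"The user's own rough notes (anchor on these):\n{user_notes}\n\n"
        f"Transcript:\n{transcript}\n\n"
        "Write concise notes that expand on the user's rough notes."
    )

prompt = build_notes_prompt(
    attendees=[{"name": "Ada", "role": "CTO", "company": "Acme"}],
    user_role="venture investor deciding whether to invest",
    user_notes="- strong team\n- unclear moat",
    transcript="(full meeting transcript here)",
)
```

The design choice this sketch illustrates: the prompt describes what the reader is optimizing for, and leaves the model to decide what to write, rather than enumerating rules that conflict in edge cases.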
I think this relates to your 3rd rule.
Rule #3: Context is king.
We realized that context can be specific to certain people and meetings. Take VCs evaluating startup pitches—there are specific things investors need to capture in their notes to make investment decisions. We don't tell the model exactly what to write, but we articulate the context: “They need to make an investment decision, so these details are important.” It's about providing the context of what's valuable for those people without being overly prescriptive.
What's your day-to-day process for prompting and evaluation? Do you have a formal evaluation process or rely on user feedback?
We have a manual evaluation process that we're working to systematize. We're taking the same approach to evaluation as we do with Granola — making it easier for humans to evaluate rather than fully automating the process.
Evaluating meeting notes requires tremendous nuance because it involves stack-ranking information by importance. This is a very difficult problem, so our internal tooling is human-centered and focused on making our evaluators more efficient.
Let’s cover your 4th rule next. What do you mean by “Your marginal cost is my opportunity”?
Rule #4: Your marginal cost is my opportunity.
Since the early Internet, the amazing thing has been that you can create a website and have millions of people visit it with minimal additional cost. The marginal cost of serving each new user was nearly zero. Good products could scale incredibly well, and big companies like Google could easily scale to millions of users.
AI is different because these models are still expensive to run. At Granola, we pay for every set of meeting notes we generate—our costs scale linearly with users. This creates an opportunity: As a small startup with fewer users, we can use cutting-edge models that would be financially impossible for big companies to deploy at scale.
For example, if you're working on Google Drive, which has a massive user base, rolling out advanced AI features to all users isn't feasible from a financial or computing perspective.
In the best-case scenario, a startup's user base grows exponentially. While people say models haven't gotten cheaper, running GPT-3-level capability today costs far less than three years ago if you keep the intelligence level fixed. So, hopefully, as you scale exponentially, your inference costs decrease exponentially, and the math works out long-term.
Finally, you also talk about building products with a "soul" - what does that mean?
Rule #5: Build products that have a soul.
Soul is about giving users a sense of cohesiveness. When something is Frankenstein-ed together by different teams, you don't get a clear sense of its essence.
We interact with products similarly to how we interact with people - we attribute characteristics and form relationships with them.
Sometimes, when using a product, you can feel the people who designed it - their intentions, what they wanted you to feel. Other times, that feeling is completely absent. Early Mac products had this quality - you could feel the team in Cupertino pouring themselves into the work. Early versions of Snapchat also had a strong point of view on the world.
How do you balance customer feedback with product intuition?
There are two extremes:
The "I'm an artist" approach, where you design purely from intuition
The "customer is always right" approach, where you just build what people want
To have a soul, products need cohesion and a consistent worldview, which comes from intuition and instinct. However, putting yourself in others' shoes is extremely hard, so you need constant context about users' thoughts.
At Granola, rather than making lists of feature requests, we immerse ourselves in user feedback. I try to have a user call daily. We have screens showing real-time feedback, and we get regular digests. Everyone on the team should be swimming in customer context. But when we design, we work from first principles. It's like prompting an LLM: you need rich context to put yourself in the customer's shoes.
Our brains are good at filtering information, so when you're constantly immersed in user feedback, you develop an emotional sense of what matters rather than just analyzing metrics.
The future of meetings with AI and Granola
What's next for Granola? I love how it combines my notes with AI notes. Can users upload their templates for formatting?
Yes, you can create custom templates, though it's hard to find now. After a meeting, you can select from existing templates or create your own. We're working on making this more discoverable.
What's the broader vision for where Granola is headed?
Currently, Granola focuses on giving you good notes that feel like your own. What differentiates us from other AI note-taking apps is that we're a text editor first – you can take your notes, and when the meeting ends, we flesh them out while anchoring on what you’ve written.
But the bigger picture isn't just about taking good notes. It's about what happens after the meeting and all the work that follows it. Given the context of the meeting and the series of meetings leading up to it, Granola should be able to help with many repetitive post-meeting tasks.
We believe in giving people superpowers rather than fully automating everything.
Take follow-up emails – there are strategic decisions you want humans making, but an AI can handle the specifics of what was agreed upon in the meeting.
How do you see AI tools shaping how we work?
There's a famous saying:
"We shape our tools, and thereafter, our tools shape us."
With AI, the potential for tools to shape our thinking is exponentially higher. While this has positive and negative implications, I see a future where AI handles the boring details so people can think more about what matters.
Any closing advice for people who want to build AI apps?
It's an incredibly exciting time to build. When my co-founder Sam and I started, I was pinching myself. It feels similar to the early computing pioneers of the 1950s and 1960s, like Engelbart and Alan Kay, except they had to be super visionary to imagine everyone having a computer. Now, we can imagine AI doing something; six months later, it can do it.
My advice would be to adapt quickly because the world is moving fast.
It's hard to predict what will happen, so there's a lot of value in playing with the latest technology like a kid with toys and seeing what you discover.
That mindset can be hard to maintain when you have a busy job with specific goals.
Thank you, Chris! If you loved this interview, please check out Granola and remember you can get 3 months of free meeting transcriptions as a paid subscriber (upgrade today, and I’ll share the link in a few days).