How Top Experts Use AI to Write, Code, and Run Their Business | Dan Shipper (Every)
Lessons from interviewing 25+ AI experts and why “managers of AI models” might be a new career path
Dear subscribers,
Today, I want to share a new episode with Dan Shipper.
Dan co-founded Every and hosts a podcast called AI & I, where he has interviewed 25+ AI experts. He also recently launched Spiral, a new AI tool to help writers and creators save time and get more done.
We had a great chat about how Dan uses AI to write, how top AI experts use these tools to get more done, and why “managers of AI models” could be a new career path.
Watch now on YouTube, Apple, and Spotify.
Dan and I talked about:
(00:00) Using AI to turn rambling thoughts into writing
(05:12) How AI helps in each step of the writing process
(11:27) AI's role in podcast production
(15:23) The traits that top AI experts have in common
(16:36) Tips to run your business with AI
(18:37) Creating your personal AI coach
(22:04) Using AI to build apps without code
(29:10) Managers of AI models as a new career
(31:20) 3 steps for beginners to get started with AI
Keep reading for the interview takeaways.
This episode is brought to you by…Miro
70M+ users and 99 of the Fortune 100 companies use Miro, which lets you use AI to:
One-click convert brainstorm stickies into draft docs
Generate roadmaps, retros, and other templates from simple text
Get feedback from virtual AI product and marketing experts
How to use AI to write
Welcome, Dan! I’d love to understand how you wrote new articles before these AI tools existed.
Writing involves gathering information from the web, books, and live conversations → typing a coherent narrative → editing.
Before AI tools, I had to manually synthesize all my notes about a topic into a first draft, which I would then edit before publishing.
Did you write a first draft and then edit or edit along the way?
I edit as I go, which I don't necessarily recommend. If I notice I'm over-editing, I'll force myself just to write something terrible first.
So you have a quality bar but try not to be a perfectionist.
I try not to be. I publish weekly, so I’d go crazy if I tried to be a perfectionist each time.
It goes back and forth, though. Some pieces are close to my heart. Those can be the hardest to get out because I’ve been thinking about them for so long. You feel you have this brilliant idea, but once it's on the page, you think it sucks.
How has this process changed now that you’re an AI power user?
I use AI tools in every part of my process:
Ideation and research: I'm constantly looking things up in Claude. I'll take a picture of what I'm reading and ask AI to help me understand or go deeper on specific points. To have good output as a writer, you need good input. AI is useful for maximizing what you get out of your inputs.
First draft: I often go on a walk and turn on my voice memo app to create a first draft. I'll talk out loud about whatever’s on my mind and use AI to transcribe it. When I get home, I’ll ask AI to “summarize what I said and pull out any interesting insights.” AI is great at pulling insights from my ramblings.
Outline: I’ll often have a big document full of notes on a piece I’ve been working on for a long time. I’ll throw it into Claude and ask it to turn it into an outline. AI is great at figuring out how to take my unstructured blob of notes and find a simple structure to help me get started.
Editing: For editing, AI is excellent at tasks like summarizing a piece of content in the context of your writing, finding the right metaphor or simile, and helping you understand what’s not working about a piece.
Finally:
AI gives me more confidence as a writer.
I recently published a piece about how language models work. It covers deeper technical topics in which I am not an expert.
Previously, I would have hesitated to publish a piece like this. I'd have to find an expert friend and convince them to read it. With ChatGPT and Claude, I can just ask them to look through what I wrote and see if there are any problems.
It's not perfect, but it does find stuff and gives me confidence that it's pretty much right. That dramatically expands the number of things I can write about.
What prompts do you use to talk to Claude or ChatGPT for the steps above?
I always start by talking to it in the simplest way possible. If it gets it right, great. If not, I'll add more details to get better results.
These details include asking AI to:
Simulate an expert (e.g., a professional editor or an expert in CS).
Use an example of one of my raw drafts and its edited version, then edit a new draft in that style.
Analyze my examples, write a style book, and then put it into the prompt.
Critique its work and revise its output based on its critique.
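Taken together, these tactics can be sketched as a simple prompt-template builder. This is an illustrative sketch, not Dan's actual prompts; the function name and wording are my own assumptions.

```python
def build_prompt(persona, examples, task, self_critique=True):
    """Assemble a prompt using the tactics above: simulate an expert
    persona, include raw/edited example pairs, and optionally ask the
    model to critique and revise its own output."""
    parts = [f"You are {persona}."]
    for raw, edited in examples:
        parts.append(
            f"Example raw draft:\n{raw}\n"
            f"Example edited version:\n{edited}"
        )
    parts.append(task)
    if self_critique:
        parts.append(
            "After your first attempt, critique your own output "
            "and revise it based on that critique before answering."
        )
    return "\n\n".join(parts)

# Hypothetical usage: the assembled string would be sent to Claude
# or ChatGPT as a single message.
prompt = build_prompt(
    persona="a professional editor",
    examples=[("my rambling notes...", "my polished article...")],
    task="Edit the following new draft in the same style.",
)
```

Starting simple and only adding these layers when the plain request falls short mirrors Dan's approach: each argument is an escalation you reach for when the previous prompt wasn't good enough.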
How to use AI to run your business, get professional advice, and learn how to code
You’ve also interviewed everyone from Reid Hoffman to David Perell for your podcast AI & I. What are some common themes in their use of AI?
People who use AI well are always curious and constantly experimenting.
They don't have a knee-jerk reaction when it doesn't work. They're investing in learning how to use this new tool and find it fun and exciting, even when it fails.
That spirit of open-mindedness, curiosity, and exploration allows people to get the most out of these tools, which are incredibly powerful but also early in their life cycle.
I’m especially curious about how founders and creators use AI to run their businesses. Do you have any examples from your personal experience?
Totally. I think it helps in 3 ways:
It’s a crazy power-up for every employee. It lets us all write and code much faster. Some people on my team, for whom English is a second language and who couldn’t code, now send polished emails and build apps.
It’s great for decision-making. I frequently use it to help me make a strategic decision after giving it the right context.
It’s been great for me. I tend to be like a puppy chasing shiny new things, which can be chaotic when running a company. AI knows this about me and helps me evaluate whether a new opportunity matches my long-term goals.
It's very good at saying: "Hey, remember you have a bit of shiny object syndrome sometimes. Think about how this connects to your goals." And I'm like, "Yeah, you're right. I should just not do this right now."
I have a similar GPT that I created to be my coach. I give a long prompt, including my goals and current situation, and then ask for advice. It’s a great listener—arguably more patient than my spouse and close friends!
I know, right? It's very patient and also knows everything. I have a therapist who knows nothing about running a media business. He can listen but lacks context to help me make business decisions.
It's nice to have someone who's consistently empathetic and has the entire knowledge of the internet in their brain to help you figure out what to do.
I also want to ask about using AI to code. Do you think AI is at a point where a non-technical person can build an app to make money?
Spiral has made six figures, and Lucas Crespo, our creative lead, built the 1st version.
We needed a tool to summarize our articles into a Sunday digest. He thought he could build something for that and used ChatGPT to create an app: you could paste in an article, and it would produce a summary in our style using AI.
He deployed it to Heroku without really knowing how it worked, and we used it for a year. I wouldn’t say it’s polished software, but we use it constantly.
Can AI teach you how to code, too?
AI can help you stay motivated because it lets you ship quickly.
I teach a course taking people from 0 to shipping an app with AI. On the first day of the course, you build something with AI, and you're like, "Holy shit, I can't believe this works!" Then, you're motivated to understand how it works and improve it. We often say, "Here's how you can use AI to figure out what's happening in your code."
This is so much better than when I was learning programming without AI. It was six months of if statements, while loops, and variables—stuff I had no idea how it would level up into anything practical. It was very taxing to get through that.
AI means you immediately get to something that looks cool, motivating you to understand the underlying stuff. The people who use AI and understand how it works will move faster.
That’s the key step: you can use AI to generate some toy programs, but you have to take the next step and say, "Okay, now explain how this works."
When I tried using AI to build an app, debugging was a pain. I’d say, "Hey, go fix this specific part," and it’d generate the whole code base again. I ended up in this loop where I kept pasting its code and hoping it would work.
Yeah, definitely. You can get into that mode with AI where you're like, "I don't want to think about this. I want you to do it."
Sometimes, being less lazy and doing it yourself is helpful.
Building Spiral to automate 80% of your repeat writing, thinking, and creative tasks
Let’s talk about Spiral. You built the 1st version with ChatGPT in 2 days, and it’s now a top product on Product Hunt. What does it do?
At Every, we have a lot of repetitive tasks that involve summarizing content from one form to another. Examples include:
Turning an article intro into a headline
Turning a podcast transcript into a tweet to promote the episode
It was time-consuming to do all these manually, even with AI tools like ChatGPT and Claude. So what Spiral does is:
It writes a guide on converting your content based on your examples.
It then asks you to paste in your draft that it’ll convert to the other content format based on the guide (e.g., from a transcript to a tweet).
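Spiral's two-step workflow (derive a conversion guide from examples, then apply it to new drafts) could be sketched as two prompt-building steps. The function names and prompt wording here are illustrative assumptions, not Spiral's actual implementation.

```python
def guide_prompt(examples):
    """Step 1: ask the model to write a conversion guide from
    before/after example pairs (e.g., transcript -> tweet)."""
    pairs = "\n\n".join(
        f"SOURCE:\n{src}\nRESULT:\n{out}" for src, out in examples
    )
    return (
        "Study these before/after examples and write a short guide "
        "describing how to convert a source into a result in the "
        f"same style and voice:\n\n{pairs}"
    )

def convert_prompt(guide, draft):
    """Step 2: apply the generated guide to a newly pasted draft."""
    return (
        f"Follow this conversion guide:\n{guide}\n\n"
        f"Now convert this draft accordingly:\n{draft}"
    )

# Each prompt would be sent to an LLM of your choice; only the
# prompt assembly is shown here.
p1 = guide_prompt([("podcast transcript...", "promotional tweet...")])
p2 = convert_prompt("hypothetical guide text", "new transcript...")
```

The design choice worth noting is the intermediate guide: by distilling examples into explicit instructions once, every later conversion reuses the same style rules instead of re-deriving them from scratch.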
This has been an incredible time saver for my entire team.
It sounds like a tool that gets you 80% of the way there. It’s much easier to start with something than from a blank page.
Exactly. It’s excellent at creating a consistent tone, voice, and decision-making style across the company.
Have you tried using it to generate podcast episode titles and packaging?
Yeah, we use it for that. We'll give it a transcript and ask what headlines we should use. We have examples of YouTube or other headlines that we think work or headlines from Every that have done well.
AI’s implications for society and why “managers of AI models” might be a new role
Let’s wrap up by discussing AI's implications for society. You know, I’m always surprised by how few people are using AI tools.
For example, I worked with someone who was too wordy in their writing. Before, I would give them specific advice on what to do. Now I just say, "Why don't you copy and paste your stuff into AI and ask it to make it more concise?"
I agree—AI adoption is lower than you'd think. The problem is that today's top AI products are just general-purpose chat boxes. You have to think about where to fit it into your life. Staring at a blank text box is intimidating.
I think everyone needs to make this decision when they’re stressed and trying to get something done quickly:
Do you just do it without AI the way you already know how to? Or do you spend more time trying to make it work with AI, which might save you tons of time later?
What I've tried to do at Every is carve out time and space for people to explore AI without feeling like it will make them fall behind on day-to-day tasks. That way, they can get comfortable enough to use it well.
Another thing companies can do is let the people who are naturally excited about AI and will play with it anyway (the tip-of-the-spear people) distill what they've learned into a process that others can use. Then they can teach that process to those who don't want to explore and just want to get their job done. I think that's a very effective way to spread AI use across an organization.
So, do you encourage your employees to share AI best practices and tools?
Yeah, I explicitly encourage the early adopters to find the best practices and then share them with others who will follow instructions but don't want to do the discovery work themselves.
How do you think the AI landscape will evolve? There's OpenAI, Anthropic, Google, xAI, Microsoft, and more. What's going to happen in the next six months?
I don't know. Everyone was surprised that Google and Anthropic caught up to GPT-4 when they did. To be fair, GPT-4 came out a year or more ago, so they're about a year behind. But OpenAI was so dominant that people thought they might have had a longer lead time.
I think we'll see GPT-5 in the next 9-12 months. I'm pretty confident about that without any inside information, just reading the tea leaves. I think it will be substantially better. There are questions about whether we're reaching diminishing returns, but I don't think that's true.
The open-source stuff continues to be interesting. We're finding that you can make smaller models much sparser while retaining most of their capability. The big models are relatively inefficient, and we can run surprisingly powerful models on smaller computers, even just your laptop. We'll continue to find that over time, and having more personal AI available is a good thing.
Right now, there isn't a personal AI that remembers everything. That's what I want: an AI that remembers my context without me having to write long prompts.
I've been testing this app called Granola over the last day or two, and it's fantastic. It's your classic meeting recorder, but it's the best-executed one I've seen. It:
Hooks into your calendar and sits on your desktop
Helps you take notes and produces a transcript
After the meeting, it’ll take your chicken-scratch notes and use the transcript to flesh them out into an actual, well-formatted document.
It's very well done.
I’ll have to check it out! How do you think AI will impact jobs? You wrote about a new job called "Managers of AI Models." What will this person do?
We're moving to a world where much IC (individual contributor) work becomes managing AI models.
AI models will do much of the preliminary work (e.g., first drafts). The skills that will be valuable for this new “managers of AI models” job are:
Still having the basic skills to do the work yourself
Adding the skills of a manager, like choosing the right AI agent for each task and giving it specific instructions to follow
If the metaphor for programming used to be a sculptor, where you make every little chip yourself, we're moving to thinking more like a gardener: you're creating the conditions for the plant to grow by adjusting the soil, the water, and so on.
That's more of what will be happening, with the caveat that you can still go in and do the sculpting yourself when needed. You can get the best of both worlds, which is cool.
It's like having a bunch of interns: if they're new to the role, you need to give them specific instructions, check in, and correct their work.
Yeah, it's very intern-like right now. I hope it will become more like an experienced person in the next couple of years.
If I'm just a regular person who's not as into this as you and me, what are three things I can do to start using AI tools to improve my life?
You're lobbing me a softball here. You should listen to my podcast, AI & I, and read everything we publish on Every. Those are hopefully two good resources.
But the broader thing is just to be curious about this stuff.
The beauty of ChatGPT and Claude is that anyone can use them. You don't have to pay to use the best model available, which is incredible.
Just say, "Hey, how do I use you?" Say, "This is the kind of person I am; these are my tasks. Interview me to help figure out where I can use you." Just spend time with it and play around. Cultivating that spirit of curiosity and playfulness will make the difference, rather than any skill or resource.
That makes sense. When my parents asked me how to use AI, I said, "Just talk to it like you would a person. That's how you use it. It's not complicated; just send DMs to it."
Exactly!
Thanks so much, Dan! If you enjoyed this interview, please follow Dan on X, check out Every, and listen to his AI & I podcast.
Confidentiality is the biggest barrier to AI use in the private sector. Most of my clients are banks, and I can't include any identifiable information from their meetings, emails, documents, or ideas in AI prompts without risking immediate termination. AI note-taking or summarizing is outright banned.
Under GDPR, I can't use personally-identifying information in AI prompts without violating the law, which is why AI vendors are scarce in Europe. In the USA, even without GDPR, using inside information in AI prompts raises serious concerns. In health and medicine, using patient data in prompts would be a clear HIPAA violation.
These issues persist because there isn't a reliable way to prevent data from being sent back to the vendor, and even running models in-house doesn't fully solve the problem. Consequently, my use of LLM-based generative AI is very limited. Stripping out all identifying information and verifying LLM outputs is so time-consuming that it's often faster not to use AI at all. The lax privacy protections in the USA only add to the complexity.