How Creators Can Get Started with AI
How does AI work anyway, and how can you start playing with AI tools now?
Dear subscribers,
I believe that AI will dramatically lower the cost of creation in the next few years.
If you’re a creator, consider giving AI tools a try to learn how they can help you create better content. To guide you through this, I’ll expand on my AI art post to explore:
How does AI work anyway?
How is AI lowering the cost of creation?
How can creators get started with AI?
How does AI work anyway?
I think every creator should have a basic understanding of how AI works. I’m far from an expert, but let’s start with a few definitions:
AI is when machines demonstrate intelligence at the human level and beyond.
Machine learning (ML) is how machines learn to be intelligent by identifying patterns in data. ML models have two key parts:
Label: The output that the model tries to predict (e.g., video watch time).
Features: The inputs used to predict that output (e.g., video views and likes).
At a high level, machine learning works in 3 steps:
Prepare data: Machines need lots of quality data to learn. For example, to convert text to images, an ML model needs to learn from millions of images with text labels. ML engineers typically spend 80% of their time cleaning this data and shaping it into useful inputs, a process called feature engineering.
Train model: Next, ML engineers split the data into a training set and a test set. The machine uses the training set to build the model and the held-out test set to measure the model’s accuracy (see the sketch after this list). The model’s algorithms can be:
Simple like a linear regression (e.g., a person’s weight = 80 + 2 * height). The model adjusts the feature weights (e.g., changing 2 to 1.5) through repeated iterations to predict the label (output) more accurately.
Complex like a neural network. The model not only assigns weights to features but also creates new features automatically. Most models that work with images or natural language use neural networks.
Build user experience: After training the model, the team needs to build a UX where people can supply inputs and get their desired output. How the model reaches its output is a black box even to ML engineers, so the user experience needs to be clear, believable, and actionable.
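To make the train-and-test step concrete, here’s a minimal sketch of the linear regression example above using scikit-learn (my choice of library, and the height/weight numbers are made up for illustration):

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Features (inputs) and label (the output the model tries to predict).
heights = np.array([[150], [160], [165], [170], [175], [180], [185], [190]])
weights = np.array([50, 58, 62, 66, 70, 74, 78, 82])

# Split the data so we can measure accuracy on examples the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    heights, weights, test_size=0.25, random_state=0
)

model = LinearRegression()
model.fit(X_train, y_train)  # Learn the feature weight and intercept.

print(f"weight ≈ {model.intercept_:.1f} + {model.coef_[0]:.2f} * height")
print("Accuracy (R²) on the held-out test set:", model.score(X_test, y_test))
```

The same prepare → split → train → evaluate loop applies whether the model is a one-feature regression like this or a giant neural network.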
Ok, that’s a mouthful. If you remember one thing, it’s this:
Machines need large quality data sets to learn. They get better by identifying patterns and predicting outcomes through repeat iterations.
If you want to learn more, I highly recommend Google’s ML crash course. You don’t need to know how to code to understand the basics.
How is AI lowering the cost of creation?
Many creative pursuits have large quality data sets for AI to learn from:
Writing
OpenAI’s GPT-3 is trained on 1 trillion words from Common Crawl (a non-profit that scrapes billions of webpages monthly), books, and Wikipedia. Since the average book has 100K words, GPT-3 has essentially read ten million books.
It’s no wonder that GPT-3 can easily generate essays from prompts like:
“Write a scary essay about how my dog ate my homework.”
Art and Digital Media
Stable Diffusion is trained on 5 billion image-text pairs (images that have HTML alt-text attributes). Unlike most other AI models, Stable Diffusion was open sourced to the public. Since then, there has been a Cambrian explosion of innovation built on top of the model. It has been used for:
Text to video (Meta also announced their closed version recently)
Apparel and shoe design
This pace of innovation shows no signs of slowing down.
Music and Sound
Unlike text and images, music is usually under copyright. That’s why nobody has released a “Stable Diffusion for music” yet, but I think it’s only a matter of time.
Meanwhile, researchers have focused on sound:
OpenAI’s Whisper is a “sound to text” speech recognition AI trained on 680K hours of voice data from the web.
AudioGen is a “text to sound” AI. You can type “whistling and wind blowing” and the AI will generate the corresponding sounds.
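Whisper is open source, so you can try it yourself today. Here’s a minimal sketch using OpenAI’s whisper Python package (it needs ffmpeg installed on your machine, and “interview.mp3” is a placeholder for your own audio file):

```python
# pip install openai-whisper  (also requires ffmpeg on your system)
import whisper

# Download a pretrained model; "base" is small enough to run on a laptop.
model = whisper.load_model("base")

# Transcribe an audio file — "interview.mp3" is a placeholder path.
result = model.transcribe("interview.mp3")
print(result["text"])
```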

How can creators get started with AI?
As I wrote in the creator’s hierarchy of needs, a creator’s workflow is:
Create → Grow → Monetize
Grow is already dominated by AI in the form of social media algorithms. Now we’re seeing Create get disrupted.
But let’s not get ahead of ourselves.
AI tools are magical but not perfect. I gave GPT-3 this prompt…
Write a thoughtful essay that covers the following topics:
1. How does AI work anyway
2. How AI is lowering the cost of creation
3. How creators can use AI
…and this is what it spit out:
This output is decent, but hopefully not as good as what you’ve read in my post so far.
So instead of ignoring or worrying about AI, I think creators should embrace it as a tool to help them create better content.
Ignoring AI is like sticking to Microsoft Paint as a digital artist when Photoshop is available. At a minimum, you should explore the following:
Try Stable Diffusion or DALL·E 2 via Playground AI (images) and use Lexica for prompt ideas. You can also install Stable Diffusion locally (see this Reddit post, or the first sketch below).
Sign up for OpenAI’s GPT-3 (text) and check their prompt examples (see the second sketch below). You might have to wait on a waitlist for access.
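If you’d rather run Stable Diffusion from code instead of a web UI, here’s a minimal sketch using Hugging Face’s diffusers library (my choice of tooling, not the only option; it assumes you have a GPU and have downloaded the model weights from the Hugging Face hub):

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Download the open-sourced Stable Diffusion weights from the Hugging Face hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # Assumes an NVIDIA GPU is available.

# The prompt is yours to refine — expect to iterate several times.
image = pipe("a cozy cabin in the woods, digital art").images[0]
image.save("cabin.png")
```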
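And once you have GPT-3 access, here’s a minimal sketch of calling it from OpenAI’s Python library (pre-1.0 versions; the model name and prompt are illustrative, and you’ll need an API key from your account):

```python
# pip install openai
import openai

openai.api_key = "YOUR_API_KEY"  # Placeholder — use your own key.

# text-davinci-002 was GPT-3's flagship completion model at the time of writing.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Write a scary essay about how my dog ate my homework.",
    max_tokens=300,
)
print(response.choices[0].text)
```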
Based on my tests so far, AI tools have never given me exactly what I want. Instead, I have to run multiple queries and refine my prompts to get something that I can work with. At best, the AI outputs are inspiration for me to craft my own content.
Of course, AI models are always learning and improving. One thing is clear:
Humans have always been defined by our tools, and AI tools are simply too good for creators to ignore.