How I Built a 21-Course Platform with AI in 24 Hours
In the morning I had nothing. No website, no product, no code. By evening I had a complete learning platform with 21 courses, 140+ lessons, authentication, payments, progress tracking, and 25 blog posts. Everything running in production on a custom domain. And I never saw a single line of backend code.
This is not a tutorial. I have a course for that 😜 (and yes, Claude wrote that course too — praktickai.app/en/courses/build-webapp-ai). This is the story of what actually happened — including every mistake and every moment where I thought 'this should never have shipped.'
Iteration one: 'I will do consulting'
I started with a simple plan — a consulting website for AI training. Next.js frontend, Vercel hosting, no backend. Just a few pages: who I am, what I offer, a contact form. Claude Code had it done in a few hours. Clean, responsive, Czech and English. Deploy to Vercel, done.
And then I thought: what if I added courses?
'Damn, I need a backend'
Courses need user accounts, payments, progress tracking, enrollment management. All of that requires a backend. Then came the pivotal moment: I said 'let me buy a server and try this.'
I chose Django because I know it — well, I used to know it. But in reality? I never saw a single line of code that Claude Code wrote. I defined API endpoints, models, and business logic in plain language. Claude Code implemented it, wrote tests, set up the CI/CD pipeline. I could have used Rails or FastAPI just as well — it did not matter, because I never read the code.
This is the most important thing I learned: with AI, you do not need to understand the implementation. You need to understand the problem you are solving. The rest is delegation.
A Hetzner VPS for EUR 3.49
The backend needs to run somewhere. No AWS, no managed Kubernetes costing hundreds per month. I bought the cheapest Hetzner VPS for EUR 3.49 per month, installed k3s (lightweight Kubernetes), and let Claude Code set up the entire deployment pipeline — Docker image, GitHub Actions CI, k3s manifests, cert-manager for HTTPS, automatic deploy on push to main.
I saw it running, I saw healthy health checks, I saw tests passing. But I did not see the code.
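What 'healthy health checks' meant in practice was a smoke check like the sketch below, hitting the deployed service over HTTPS. This is my own simplification, and the endpoint path is hypothetical — the real route depends on how Claude Code wired up Django.

```python
import urllib.request


def healthy(url: str, timeout: float = 5.0) -> bool:
    """Smoke check: does the deployed service answer 200 on its health route?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # DNS failure, TLS error, timeout, 5xx -- all count as unhealthy.
        return False


# Hypothetical health route -- the actual path is whatever the agent set up:
# healthy("https://praktickai.app/api/health/")
```

A check like this is the whole extent of what I verified by hand: the service answers, certificates are valid, nothing more.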
Agent teams: not just parallel, but coordinated
This was the single biggest productivity unlock. Claude Code has a beta Teams feature — instead of a single agent, you spawn a team with named roles (architect, developer, test-writer, reviewer, committer), a shared task list, and automatic coordination. Each agent gets its own tmux pane, you see exactly who is doing what, and agents hand off work to each other via messages.
I ran 3-4 agents simultaneously. One writing courses, another creating blog posts, a third refactoring the frontend. Minimal conflicts because each worked on different files.
My typical setup: four tmux panes running simultaneously. One generating content — courses, lessons, blog posts. Another running periodic e2e tests, watching for breakage. A third fixing bugs those tests find. A fourth adding new features — payments, newsletter, referral program. And me switching between them, reviewing outputs.
Actual time for 21 courses with 140+ lessons? About 4 hours with parallel teams — one afternoon. Sequentially it would have taken a day and a half.
Me as CTO, AI as the entire team
I operated more like a CTO than a developer. I set the direction — courses for developers and individuals, Czech and English content, payments and progress tracking. AI designed the course structure, decided how many lessons, what difficulty, what pricing.
I reviewed outputs, course-corrected, and gave feedback. Product decisions emerged iteratively — AI proposed, I approved or adjusted. This is the key difference from 'vibe coding.' I did not dictate every detail, but I also did not just say 'build me something.' I defined the vision. AI designed the product.
Every prompt ended with a verification step. 'Run the build.' 'Validate the JSON.' 'Check that all slugs are unique.' Without this, AI has no idea whether the result works.
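The slug-uniqueness check, for instance, is a few lines an agent can run after every content pass. A minimal sketch of that kind of check — the course data shape here is a simplified assumption, not the real schema:

```python
def find_duplicate_slugs(courses):
    """Return slugs that appear more than once in a list of course records."""
    seen, dupes = set(), set()
    for course in courses:
        slug = course["slug"]
        if slug in seen:
            dupes.add(slug)
        seen.add(slug)
    return sorted(dupes)


courses = [
    {"slug": "build-webapp-ai"},
    {"slug": "ai-for-developers"},
    {"slug": "build-webapp-ai"},  # duplicate the verification step must catch
]
print(find_duplicate_slugs(courses))  # ['build-webapp-ai']
```

The point is not the code — it is that the agent runs it and reports the result, so 'done' means 'verified', not 'generated'.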
What went wrong
It would be dishonest to write only about success.
Empty video placeholders in production
An agent was supposed to add YouTube videos to lessons. Instead it created video-placeholder blocks with made-up YouTube IDs. Lessons in production with broken players. Build passed, tests passed. A content problem you can only catch by opening the page in a browser.
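A check that would have caught this cannot be format-only — a made-up ID can still be eleven perfectly legal characters. One way (my after-the-fact sketch, not part of the original pipeline) is to resolve each ID against YouTube's public oEmbed endpoint, which only answers 200 for videos that actually exist:

```python
import re
import urllib.request

# YouTube video IDs are 11 characters from this alphabet.
YOUTUBE_ID = re.compile(r"^[A-Za-z0-9_-]{11}$")


def video_exists(video_id: str, timeout: float = 5.0) -> bool:
    """True only if the ID is well-formed AND oEmbed resolves it to a real video."""
    if not YOUTUBE_ID.match(video_id):
        return False
    url = (
        "https://www.youtube.com/oembed"
        f"?url=https://www.youtube.com/watch?v={video_id}&format=json"
    )
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # 404 for nonexistent IDs, network errors, timeouts -> treat as missing.
        return False
```

The format check alone would have passed the made-up IDs; only the network round trip (or a human opening the page) exposes them.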
Duplicate blog posts
Two parallel agents created posts on the same topic. Different titles, different content, same theme. Parallelization works — but you need clear territory assignments.
Diacritics — total chaos
AI generated Czech text with randomly missing diacritics. 'mel' instead of 'měl', 'ktery' instead of 'který', 'neni' instead of 'není'. In every other word. Fixing 25 blog posts and dozens of lessons took a whole team of agents — three parallel agents, each on 8 posts, reading the text as Czech and fixing contextually. Not find-replace, because words like 'ze' can be correct or wrong depending on context.
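To see why blind find-replace fails, here is a minimal sketch of the triage logic. The mini-lexicon is illustrative, not the real tool: unambiguous words can be fixed mechanically, while ambiguous ones like 'ze' get escalated to an agent that reads the surrounding sentence.

```python
# Hypothetical mini-lexicon: maps an ASCII-stripped form to its only valid
# Czech spelling, or to None when both spellings are legitimate words.
FIXES = {
    "mel": "měl",    # unambiguous: "mel" is not a Czech word
    "ktery": "který",
    "neni": "není",
    "ze": None,      # ambiguous: "ze" (from) and "že" (that) both exist
}


def triage(words):
    """Split words into an auto-fixable mapping and a needs-context list."""
    auto, manual = {}, []
    for w in words:
        fix = FIXES.get(w.lower())
        if fix:
            auto[w] = fix
        elif w.lower() in FIXES:  # present in the lexicon but ambiguous
            manual.append(w)
    return auto, manual


auto, manual = triage(["mel", "ze", "neni"])
print(auto)    # {'mel': 'měl', 'neni': 'není'}
print(manual)  # ['ze'] -> escalate to an agent that reads the sentence
```

The `manual` bucket is exactly what the three parallel agents handled: reading the text as Czech and deciding from context.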
Agents that work while I sleep
This might be the most interesting part. Once the foundation was built, I set up periodic agents — skills that run in the background every 10 minutes and take care of the project on their own.
- Content checker — every 10 minutes it validates content consistency: pricing, translations, quizzes, diacritics. If something is off, it reports it.
- PR reviewer — monitors open pull requests, runs code review, fixes issues, and merges when CI passes.
- Improver — reads the improvement plan, picks the highest priority item, and implements it through the appropriate pipeline.
While I browse the site checking how things look, agents in the background find 6 empty quizzes, fix diacritics in 25 blog posts, build a newsletter subscriber system, and merge 3 pull requests.
I do not have to do everything myself and I do not even have to manage every step. I set the rules, define the skills, and agents handle the rest. I make product decisions and review outputs.
The meta moment: a course about how I did it
Sometime during the afternoon, something clicked. I had just built a complete web platform with AI in one day. What if I packaged the exact process as a course?
So I told Claude Code: 'Create a course called Build a Site with AI — 8 lessons, from project to deploy, exactly what I did today, but as a guide for others.' Within an hour the course was done. 8 lessons, exercises, tips, code blocks. A course about how to build a course. Meta.
Cost breakdown
How much does it all cost?
- Vercel (frontend hosting): $0/month (free tier, more than enough)
- Hetzner VPS (backend, k3s): EUR 3.49/month (CPX11, 2 vCPU, 2 GB RAM)
- Domain (praktickai.app): $15/year
- Claude Max (Opus 4.6 + Claude Code): $200/month — unlimited access to the most capable model, essential for parallel teams and periodic self-healing skills
- Resend (emails): $0/month (free tier, up to 100 emails per day)
- Total infra: under EUR 10 per month. Total with AI: ~$200/month
Yes, the most expensive item is the AI. $200 for Claude costs more than all infrastructure combined. But when one agent does a week's work in an afternoon, the ROI is hard to argue with.
Numbers of the day
Here is what was built from zero in one day:
- 21 courses with unique content (3 difficulty levels, for developers and individuals)
- 140+ lessons (each 800-1500 words, with code blocks, exercises, quizzes, tips)
- 25 blog posts (1000-2000 words, in Czech and English)
- 4 course bundles with 45-68% discounts
- Complete Django backend (API, models, 211 tests, CI/CD pipeline)
- k3s cluster on Hetzner VPS with automatic deploy on push
- Social login (Google + LinkedIn OAuth)
- Payment gateway with mock and production modes
- Progress tracking (visited + completed + backend sync)
- Newsletter with 3-email drip sequence via Resend (welcome, quiz day 3, promo day 7)
- Referral program (share code, conversion tracking, reward after 3 referrals)
- Interactive quizzes in lessons + AI readiness quiz with course recommendations
- Contact form with auto-reply email + Cal.com calendar for booking consultations
- i18n (Czech + English, entire site)
- SEO (sitemap, JSON-LD, OG images, hreflang, RSS feed)
- Dark mode with persistence
- 86+ Playwright e2e tests
- Custom domain praktickai.app with HTTPS
- Periodic self-healing agents (content checker, PR reviewer, improver)
- Email automation via Resend (welcome, drip, contact, newsletter)
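As one concrete example from the list above, the welcome/day-3/day-7 drip timing boils down to a small scheduling function. This is an illustrative sketch, not the shipped code — actual sending would go through Resend's API:

```python
from datetime import date

# Drip schedule from the launch: welcome immediately, quiz day 3, promo day 7.
DRIP = {0: "welcome", 3: "quiz", 7: "promo"}


def emails_due(signed_up: date, today: date) -> list[str]:
    """Which drip emails should have gone out by `today` for one subscriber."""
    days = (today - signed_up).days
    return [name for offset, name in sorted(DRIP.items()) if days >= offset]
```

A periodic job compares this list against what was already sent and dispatches the difference — the same report-and-fix pattern as the content checker.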
What I learned
- You do not need to understand the code AI writes. You need to understand the problem you are solving.
- Parallel agent teams work, but they need clearly defined territory and a shared task list.
- Visual QA is king. A passing build will not catch pages that look wrong — only a browser will.
- AI is great at volume, but not at uniqueness. Manual editing of the most interesting parts is a necessity.
- Periodic self-healing agents are a game changer — the project fixes and improves itself.
- Smaller infra = faster iteration. A EUR 3.49 VPS deploys faster than any managed service.
One day is not the end
Building a platform in one day does not mean it is finished. Content needs revision, some translations need polishing, and the payment flow is still waiting for production credentials. But the foundation is there — and that is what AI changes. You do not build for months before you have something to show. You build for a day, then iterate.
But more importantly — I am already thinking about how to abstract all of this into a framework. A set of skills, agent definitions, CLAUDE.md templates, pipeline configurations. Something that would let me take a completely different idea — a SaaS for fitness trainers, a marketplace for freelancers, an internal tool for logistics — and have a working MVP in a few days. Full end-to-end: frontend, backend, auth, payments, deploy, e2e tests. Just describe the product and set the parameters.
And maybe the most exciting part: what if I turned this into a tool that lets anyone do this? A platform where you describe what you want to build, and agents create it for you. The payment gateway is the only thing you have to wait for — everything else can be done in a day.
A year ago, PraktickAI alone would have taken me three months. Now I am thinking about how to repeat this process on ten projects simultaneously. And that is the most important message — AI does not just change how fast you write code. It changes how fast you can validate any idea. And in business, that is what matters most.
Want to do the same? I packaged this exact process into the 'Build a Site with AI' course — 8 lessons from idea to deploy, all with AI tools. Not a story, but a practical step-by-step guide.
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.
LinkedIn →