If you've ever run cold email campaigns, you know the drill. You need multiple domain variations to protect your primary domain's reputation. Maybe you start with yourcompany.com, but you also want getyourcompany.com, yourcompany.io, yourcompany-hq.com, and ten others. The manual process is mind-numbing: type a domain into your registrar's search bar, wait for the spinner, see it's taken, repeat 40 more times, copy results into a spreadsheet. I've done this enough times to know there had to be a better way.
So when we kicked off our "52 Products in 52 Weeks" challenge at Precode, this was the obvious first build. Not because it's revolutionary technology, but because it's a real problem we face ourselves, simple enough to ship in five days, and a perfect demonstration of the rapid development methodology we preach to clients. This article is the full story of building the Domain Variation Checker—every technical decision, what we deliberately didn't build, and what four hours of focused development actually looks like.
Why Build 52 Products in 52 Weeks?
Before we dive into the tool itself, let me explain the challenge. Precode has spent years helping clients validate product ideas and ship MVPs quickly. Our process typically takes 8-12 weeks from concept to launched product, but clients often don't believe that speed is possible until they see it. We wanted to practice what we preach—publicly.
The idea is simple: build and ship one product every week for a year. Not prototypes or demos. Real, working products that solve actual problems. Some will be tools like this one. Others will be SaaS products, productivity apps, or experimental ideas we've had kicking around. The constraint is brutal: five working days per product, then move on regardless of how "done" it feels.
This isn't about building unicorns or chasing venture capital. It's about demonstrating that most products don't need six months of development before they provide value. It's about ruthless scope discipline. It's about learning in public and showing the real craft behind rapid product development—failures, pivots, and all.
Week One needed to set the tone. We needed something we could actually finish in five days, something genuinely useful, and something that didn't require complex infrastructure or months of iteration to validate. The domain checker ticked every box.
Why This Product First
The selection criteria for Week One were deliberate. We needed a product that would demonstrate our methodology without setting ourselves up for failure. Here's how the domain checker met every requirement:
We needed it ourselves. Precode runs cold outreach campaigns. We've sat through the tedious process of checking domain variations manually. This wasn't a theoretical problem we thought might exist—it was something actively annoying us. When you build something you need, you inherently understand the user better than any research document could tell you.
Clear scope boundaries. The core problem is simple: generate domain variations and check if they're available. That's it. No user accounts, no payment processing, no complex integrations. We could define "done" precisely: enter a brand name, see which domains are available, export the results. If we started adding features like price comparison or automated purchasing, we'd blow the timeline.
No complex backend required. Domain availability checks can be done client-side using RDAP (Registration Data Access Protocol). No database, no authentication system, no API keys to manage. This meant we could focus entirely on the user experience and core functionality without drowning in infrastructure setup.
Immediate value. Unlike products that need network effects or time to demonstrate value, this tool is useful the moment it's built. You type in a brand name, you get results. There's no ambiguity about whether it works or whether anyone wants it.
These criteria became our filter for the entire 52-week challenge. If a product idea doesn't meet at least three of these four criteria, it probably doesn't belong in a five-day sprint.
The Build: 4 Hours Across 5 Days
Here's the reality of rapid development: the actual coding was four hours spread across five days. The rest was thinking, making decisions, and deliberately choosing what not to build. Let me break down what happened each day.
Day One: Foundation and Framework
I started with a blank Next.js project. TypeScript from the start—non-negotiable for anything we plan to maintain. Added Tailwind CSS because fighting with CSS files isn't a good use of limited time. Set up shadcn/ui components for buttons, inputs, and tables. No custom design phase. No Figma mockups. The AI suggested a clean, functional layout, and we went with it.
Total time: 45 minutes. The entire foundation—project structure, dependencies installed, basic layout rendered—was done before lunch. This is the power of modern frameworks. You're not configuring webpack or debating CSS architecture. You're building features.
Day Two: Core Logic
This is where the actual product functionality got built. I needed two things: a variation generator and a domain checker.
The variation generator was straightforward. Take a base brand name (like "precode") and combine it with common prefixes and suffixes. I defined arrays of prefixes (get-, use-, try-, go-, my-) and suffixes (-hq, -team, -app, -studio). Generate all combinations, filter out anything nonsensical, randomise the order, and return up to 50 variations. About 30 lines of code.
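The generator described above can be sketched in a few lines of TypeScript. The prefix and suffix lists and the 50-variation cap come straight from this write-up; the function name and the exact filtering are illustrative, not the production code:

```typescript
// Illustrative sketch of the variation generator described above.
const PREFIXES = ["get", "use", "try", "go", "my"];
const SUFFIXES = ["hq", "team", "app", "studio"];

function generateVariations(base: string, limit = 50): string[] {
  // Normalise the brand name to something domain-safe.
  const name = base.toLowerCase().replace(/[^a-z0-9-]/g, "");
  const variations = new Set<string>([name]);
  for (const p of PREFIXES) variations.add(`${p}${name}`);
  for (const s of SUFFIXES) variations.add(`${name}${s}`);
  for (const s of SUFFIXES) variations.add(`${name}-${s}`);
  // Shuffle (Fisher-Yates) so the output order varies between runs.
  const list = [...variations];
  for (let i = list.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [list[i], list[j]] = [list[j], list[i]];
  }
  return list.slice(0, limit);
}
```

The `Set` deduplicates for free, and capping with `slice` keeps the downstream checker bounded no matter how many affixes you add.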
The domain checker was more interesting. Most tools use WHOIS, which requires backend servers and often gets rate-limited. RDAP is newer, faster, and can be queried directly from the browser. Each TLD has its own RDAP server—Verisign for .com and .net, Nominet for .uk domains, Google Registry for .app. I built a lookup function that routes each domain check to the correct RDAP endpoint.
The challenge was concurrency. Checking 50 domains sequentially would take forever. Checking all 50 simultaneously would hammer the RDAP servers and likely get throttled. I settled on 10 concurrent checks with a promise pool. Results stream back progressively—you see domains marked as available or taken as the checks complete, not all at once after a long wait.
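A promise pool along these lines is only a dozen lines. This sketch caps in-flight work at `limit` and reports each result through a callback as it completes, which is what lets the UI stream; the names are illustrative:

```typescript
// Promise pool: at most `limit` workers in flight; results stream back
// via the optional onResult callback as each check completes.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
  onResult?: (item: T, result: R) => void,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function runner(): Promise<void> {
    // Each runner pulls the next unclaimed index until the list is drained.
    while (next < items.length) {
      const i = next++;
      const r = await worker(items[i]);
      results[i] = r;
      onResult?.(items[i], r);
    }
  }
  const runners = Array.from({ length: Math.min(limit, items.length) }, runner);
  await Promise.all(runners);
  return results;
}
```

Because JavaScript is single-threaded, the shared `next` counter needs no locking: the read-and-increment happens atomically between awaits.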
Total time: 1.5 hours. The variation logic was trivial. The RDAP integration required reading documentation and handling different response formats from different registries.
Day Three: Results and Export
Now we had domains checking, but the results just appeared in a console log. Day Three was about making the data useful. I built a sortable, filterable results table using shadcn's table components. Click a column header to sort by domain name, status, or TLD. Type in the search box to filter results.
Export functionality came next. Three options: download all results as CSV, copy available domains to clipboard, or copy all results with status to clipboard. The CSV export was straightforward—convert the results array to comma-separated values and trigger a download. Clipboard functionality used the Clipboard API with proper fallbacks for older browsers.
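The CSV path really is just string assembly plus a Blob download. A sketch under the assumption that results look like `{ domain, status }` pairs (the actual shape in the tool may differ):

```typescript
// Assumed result shape, based on the columns described above.
type CheckResult = { domain: string; status: "available" | "taken" | "simulated" };

function toCsv(results: CheckResult[]): string {
  const header = "domain,status";
  const rows = results.map((r) => `${r.domain},${r.status}`);
  return [header, ...rows].join("\n");
}

// Browser-only: wrap the CSV in a Blob and click a temporary link.
function downloadCsv(results: CheckResult[], filename = "domains.csv"): void {
  const blob = new Blob([toCsv(results)], { type: "text/csv" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}
```

Keeping `toCsv` pure (no DOM access) means the serialisation logic is unit-testable even though the download itself only runs in a browser.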
I also added a 5-minute cache. If you check the same domain twice within five minutes, the tool returns the cached result instead of hitting RDAP again. This prevents abuse and speeds up repeat checks.
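A TTL cache like this is a `Map` with expiry timestamps. In this sketch the clock is injected so expiry is testable without waiting five minutes; the class name is mine:

```typescript
// Minimal 5-minute TTL cache, keyed by domain.
const TTL_MS = 5 * 60 * 1000;

type CacheEntry<T> = { value: T; expires: number };

class TtlCache<T> {
  private entries = new Map<string, CacheEntry<T>>();
  // Inject the clock for testability; defaults to real time.
  constructor(private now: () => number = Date.now) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expires) {
      this.entries.delete(key); // expired: evict lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expires: this.now() + TTL_MS });
  }
}
```

Lazy eviction on read avoids any background timer, which matters for a stateless client-side tool: there's no process to clean up after.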
Total time: 1.5 hours. Most of this was UI polish—making sure the table looked good on mobile, handling empty states, ensuring export buttons only appeared when they made sense.
Day Four: Error Handling and Polish
This is the day most people skip, and it shows in their products. Error handling isn't glamorous, but it's the difference between something you'd actually use and something you'd abandon after one broken interaction.
I added proper error states for network failures, RDAP timeouts, and malformed domain names. If RDAP checks fail (which happens—CORS restrictions, rate limiting, server downtime), the tool falls back to a simulation mode that still provides useful output. It's clearly marked as simulated, but you can still generate variations and export them for manual checking.
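The fallback pattern is a try/catch wrapper that tags every result with whether it was real or simulated, so the UI can label it honestly. A sketch with an injected check function standing in for the real RDAP lookup (the simulated "guess" here is a deliberately dumb placeholder, not the tool's actual heuristic):

```typescript
// Every result carries a `simulated` flag so the UI can label it.
type DomainStatus = { domain: string; available: boolean; simulated: boolean };

async function checkWithFallback(
  domain: string,
  checkFn: (d: string) => Promise<boolean>,
): Promise<DomainStatus> {
  try {
    const available = await checkFn(domain);
    return { domain, available, simulated: false };
  } catch {
    // RDAP failed (CORS, timeout, rate limit): return a clearly-marked
    // simulated result so the user can still generate and export the list.
    const guess = domain.length % 2 === 0; // placeholder heuristic
    return { domain, available: guess, simulated: true };
  }
}
```

The key design point is that failure never surfaces as a broken UI state: the catch branch degrades to "here's the list, verify these manually" rather than an error toast.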
Dark mode support came next. Tailwind makes this trivial—add dark: variants to your classes, respect the user's system preference. About 20 minutes of work for something that makes the tool feel professional.
Responsive design refinement rounded out the day. The tool works on mobile, but I adjusted spacing and font sizes to make it genuinely usable on small screens. The results table switches to a card layout below 768px width. The slider for controlling variation count gets bigger touch targets.
Total time: 45 minutes. Error handling took most of it. Dark mode and responsive tweaks were quick.
Day Five: Deploy and Launch
Vercel deployment is almost comically simple. Push to GitHub, connect the repository in Vercel, wait two minutes, done. The entire deployment process—including setting up the custom domain—took less than ten minutes.
The rest of Day Five was writing documentation. The README you're reading grew out of this day. I wanted anyone who looked at the code to understand not just how it works, but why certain decisions were made. What's the RDAP fallback strategy? Why these specific prefixes and suffixes? What's the concurrency limit and why?
Total time: 15 minutes for deployment, 30 minutes for documentation.
Total actual coding time: 4 hours. The rest was decision-making, which is arguably more important than the code itself.
Technical Decisions That Mattered
Every technical choice was made through the lens of "can we ship this in five days?" Here are the decisions that shaped the product:
RDAP over WHOIS. WHOIS is the traditional way to check domain availability, but it requires backend servers to send queries (browsers can't do raw TCP connections). RDAP is HTTP-based, which means we can query it directly from the browser. It's also faster and returns structured JSON instead of inconsistent text parsing. The downside? Not all TLDs have RDAP servers, and some block CORS requests. We handle this with a simulation fallback, clearly marked to the user.
No database or authentication. Every feature you add is another thing to build, test, and maintain. User accounts would require authentication, password reset flows, data storage, privacy policies, GDPR compliance. For what? So users can save search history? The value doesn't justify the complexity. The tool works perfectly well as a stateless application. Come back tomorrow, run another search. No friction, no logins, no database costs.
Next.js and Vercel. I could have built this as a static HTML file with vanilla JavaScript. But Next.js gives us TypeScript support, React component architecture, and trivial deployment on Vercel. The developer experience is so much better that it's worth the tiny overhead of a framework. Plus, if we want to add backend features later (like scheduled checks or email alerts), the infrastructure is already there.
Concurrent checking with limits. Ten simultaneous checks is our sweet spot. More than that and we risk rate limiting from RDAP servers. Fewer and the checks feel slow. I tested various concurrency levels and ten gave consistent results without triggering throttling.

Total codebase: 287 lines. This includes comments and whitespace. The variation generator is 35 lines. The RDAP checker is 120 lines. The UI components and page structure are another 130 lines. That's it. Most "simple" tools have thousands of lines because they're solving problems that don't exist yet. We built exactly what was needed, nothing more.
What We Learned (The Honest Version)
The first week of a 52-week challenge is supposed to be easy, right? Set the tone, build momentum, celebrate a quick win. In reality, it was harder than expected—not technically, but mentally.
Scope discipline is everything. Every single day I thought of features to add. What if we tracked historical availability and could notify users when a domain became available? What if we integrated with registrar APIs to show prices? What if users could save their searches and share them with team members? Every single one of these ideas would have blown the timeline. The hardest part wasn't building the features we included—it was not building the features we excluded.
"Good enough" feels uncomfortable. The variation generator uses a fixed list of prefixes and suffixes. I wanted to add custom prefix/suffix input, maybe even AI-powered variation suggestions. But the fixed list solves 95% of use cases. Shipping with the fixed list and moving on felt wrong, like I was abandoning the product before it was "done." This is the mental trap that kills most projects. Done is better than perfect, but accepting "done" takes practice.
RDAP is fantastic but imperfect. About 15% of checks hit CORS errors depending on the browser and TLD. The fallback to simulation mode works, but it's not ideal. I could solve this by adding a thin backend proxy, but that would violate the "no complex backend" constraint. So we live with the 15% simulation rate and clearly communicate it to users. Not every problem needs solving immediately.
Documentation is product too. The README took longer to write than some of the features. But a tool without documentation is a tool nobody uses. Explaining why certain decisions were made—why no auth, why these TLDs, why this concurrency limit—turns the project into something other developers can learn from. That's more valuable than a polished UI.
Four hours of focused work > 20 hours of distracted work. The total coding time was four hours, but those were four hours of deep focus. No Slack, no email, no context switching. Just problem, code, test, ship. Most developers get maybe two hours of focused work per day because of meetings, interruptions, and multitasking. If you can protect those hours, you can ship faster than teams ten times your size.
Features We Deliberately Excluded (And Why)
Here's everything we considered and cut. This list is as important as the features we built, because it shows the discipline required to ship in five days.
User accounts and saved searches. Why we wanted it: convenience for repeat users, ability to track checks over time. Why we cut it: authentication adds days of work (signup, login, password reset, email verification) and ongoing maintenance (database costs, security updates, data privacy). The tool works fine without accounts. Users can export to CSV if they want to save results.
Historical availability tracking. Why we wanted it: notify users when a desired domain becomes available. Why we cut it: requires background jobs, database storage, notification infrastructure. This is a v2+ feature if we ever validate that users want it.
Price comparison across registrars. Why we wanted it: help users find the cheapest option. Why we cut it: every registrar has different APIs (some have none), pricing changes frequently, you'd need to maintain integration code for dozens of providers. This is a separate product, not a feature.
Automated purchasing. Why we wanted it: one-click domain procurement. Why we cut it: payment processing, registrar API integrations, legal liability if something goes wrong. This transforms a simple tool into a complicated business.
Custom prefix/suffix creation. Why we wanted it: more flexibility for users with unique branding. Why we cut it: the fixed list covers most cases, and custom input adds validation complexity (what if users enter offensive terms? profanity filters? moderation?). Not worth it for v1.
Bulk CSV upload. Why we wanted it: let users check 100+ domains at once. Why we cut it: file parsing, validation, rate limiting concerns, user flow complexity. The slider goes up to 50 variations, which handles most needs.
Domain name suggestion AI. Why we wanted it: use GPT to generate creative variations beyond the prefix/suffix formula. Why we cut it: API costs, prompt engineering time, unpredictable results, need to handle inappropriate suggestions. Cool feature, wrong product phase.
Looking at this list, you can see how easy it would be to turn a four-hour tool into a six-month project. Every feature sounds reasonable in isolation. Together, they're scope creep that prevents shipping. The art of rapid development is knowing which features move the core value needle and which are nice-to-haves that can wait.
Connection to Precode's Methodology
This isn't just a side project—it's a demonstration of the exact methodology we use with clients. When a company comes to Precode wanting to validate a product idea, we follow the same process you've just read about.
Start with the core problem. What's the one thing this product must do to provide value? For the domain checker, it's "show me available domain variations quickly." Everything else is secondary. We spend the first week of client engagements identifying this core problem and ruthlessly cutting anything that doesn't serve it.
Time-box development. Eight to twelve weeks for most client MVPs, five days for this challenge. The constraint forces focus. Without a deadline, teams endlessly debate features and perfect details that don't matter. With a deadline, you make decisions and ship. Some decisions are wrong, but you can fix them after you've validated whether anyone cares.
Ship incomplete products. The domain checker has obvious gaps. No user accounts, limited TLD coverage, simulation fallbacks. We shipped it anyway because it solves the core problem. Client MVPs work the same way—build the minimum feature set that validates the hypothesis, then iterate based on real user feedback, not theoretical concerns.
Practice what we preach. We tell clients they can ship in weeks, not months. Now we're proving it publicly, 52 times. If we fail to ship a product in a given week, everyone sees it. If we cut corners and ship junk, everyone sees it. This accountability makes the methodology real in a way that client testimonials never could.
The difference between consulting and building your own products is skin in the game. With clients, we have deadlines and contracts, but the ultimate risk is theirs. With the 52 Products challenge, we're betting our reputation that rapid development actually works. It's terrifying and clarifying in equal measure.
Try It Yourself
The Domain Variation Checker is live here. It's completely free, no signup required, no tracking beyond basic analytics. Type in your brand name, pick your TLDs, and see what's available.
If you're running cold email campaigns, this will save you hours of manual checking. If you're just curious about how we built something useful in four hours, the full source code is on GitHub (linked from the tool itself). Read the code, copy the patterns, build your own variation.
And if you're thinking "this is interesting but I need someone to build my actual product idea"—that's exactly what Precode does. We've spent 20+ years building software for startups, scale-ups, and enterprises. The same rapid development approach we're demonstrating here is what we apply to client projects. Eight to twelve weeks from concept to launched MVP, same discipline around scope, same focus on solving the core problem first.
The difference is we do it for products that actually matter to your business. This domain checker is useful, but it's not going to transform anyone's company. Your product idea might. If you've been stuck in planning mode for months, talking about features instead of shipping, let's have a conversation about how rapid validation could work for you.
This is Week One of 52. Next week we're building something completely different—possibly a productivity tool, possibly a dev utility, possibly something suggested by someone reading this. I haven't decided yet because planning too far ahead defeats the purpose of the challenge.
If you want to follow along, bookmark Precode Insights where we'll post updates every week. Real builds, real timelines, real lessons—wins and failures both. No fake metrics, no inflated claims, just the craft of building products quickly without sacrificing quality.
See you next week with Product #2.
