How strategic guidance enabled a journalist with zero coding experience to build and deploy a production-ready React web application in one month. This case study demonstrates the power of AI-assisted development when combined with expert mentorship on prompting methodology and strategic tool selection.
The Challenge: Building Without Technical Background
In early January 2026, Ollie Peart, a journalist and presenter for The Modern Mann podcast, faced an interesting challenge. As part of the podcast’s monthly “Zeitgeist” segment, where he explores emerging trends, a listener had challenged him to try “vibe coding”—using natural language prompts with AI tools to build functional applications. The concept was straightforward: could someone with zero programming experience create a working web application using only AI assistance?
Ollie’s goal was ambitious yet practical: build a “Mannbassador Locator” app that would help the podcast’s community find their nearest Mannbassador (the show’s term for dedicated supporters who represent specific locations). The application needed to determine a user’s location, search a database of existing Mannbassadors, calculate distances, and provide appropriate actions based on whether a location was already claimed or available. Each Mannbassador is a listener who has earned recognition by supporting the show, associated with both a specific location and the episode in which they were featured.
The challenge extended beyond simply building an app. Ollie needed to complete this within approximately one month, document the process for discussion on the podcast, and create something genuinely useful for the community—not just a proof of concept, but a production-ready tool that could be deployed and used by real people. Most critically, he needed to accomplish all of this without writing a single line of code himself, relying entirely on AI tools and whatever guidance he could find.
The Approach: Strategic Guidance Over Code
When Ollie reached out to me on 7th January 2026 through The Modern Mann’s Discord server, I recognised an opportunity to demonstrate a key principle of modern technology leadership: strategic guidance often delivers more value than hands-on implementation. Rather than building the application for him or even pair-programming, I chose to act as a strategic advisor, teaching him how to effectively leverage AI tools to achieve his goal.
My approach centred on three core principles that align with how I work with businesses on AI adoption and engineering leadership development. First, tool selection matters less than understanding how to use tools effectively. Second, the right prompting methodology dramatically amplifies AI tool capabilities. Third, teaching problem-solving approaches creates long-term value beyond any single project.
This methodology reflects the reality of modern software development. We’re witnessing a fundamental shift from developers as “code authors” to developers as “software editors”—professionals who guide AI systems to generate code, then review, refine, and validate the output. For businesses considering AI adoption or developers seeking to remain relevant, understanding this transition is critical. The question isn’t whether AI will change development practices, but how quickly organisations can adapt their approaches.
ℹ️ Note
Throughout this engagement, I provided strategic guidance and methodology advice but never wrote code directly. Ollie built the entire application through his own prompts to AI tools, with my role limited to teaching effective approaches to problem-solving and tool usage. This distinction is important: the success came from transferring knowledge about how to work with AI tools, not from my technical implementation.
Tool Selection: Finding the Right Fit
Our first strategic decision involved selecting the appropriate AI tool. The landscape of AI-assisted development tools has expanded rapidly, with major options including Claude (by Anthropic), ChatGPT (by OpenAI), and Google AI Studio (using the Gemini model). For professional developers, I typically recommend Claude Code, which integrates directly into development workflows and can interact with local filesystems and terminals. However, Ollie’s situation required different considerations.
The critical constraint wasn’t capability—all major AI platforms can generate functional code. The barrier was deployment complexity. Most AI coding assistants produce code that requires local development environments: installing Node.js, managing dependencies, running build processes, and deploying to hosting services. For someone without development experience, these ancillary tasks could easily consume more time than building the application itself.
Google AI Studio emerged as the optimal choice for three compelling reasons. First, it includes an integrated execution environment that can build and run React applications directly in the browser, eliminating all local setup requirements. Second, it provides immediate visual feedback, allowing Ollie to see changes in real time without understanding deployment pipelines. Third, it uses the Gemini model and offers a generous free tier; as Ollie already held a subscription, the project was effectively zero-cost beyond his existing commitment.
This decision exemplifies technology-agnostic strategic thinking. The “best” tool isn’t determined by technical superiority but by fitness for purpose, user constraints, and project requirements. A more powerful development environment would have created barriers rather than removing them. Strategic technology leadership involves matching tools to contexts, not prescribing universal solutions.
❓ What Is Vibe Coding?
Vibe coding refers to using natural language prompts with AI systems to generate functional applications without writing code manually. Rather than typing syntax, developers describe what they want in conversational language, and AI tools generate the implementation. The term “vibe” reflects the informal, intuitive nature of the interaction—you convey the “vibe” of what you want, and the AI interprets and implements it.
Whilst this approach has limitations (code quality varies, complex logic requires iteration, and understanding output remains important), it dramatically lowers barriers to entry for application development. Combined with proper guidance, vibe coding enables non-programmers to build functional tools that previously would have required hiring developers.
The Process: Teaching Effective Prompting
With tools selected, the next challenge involved teaching Ollie how to communicate effectively with AI systems. This proved more nuanced than simply typing requests. Effective prompting requires understanding how to provide context, specify constraints, break down complex problems, and iteratively refine outputs. These skills separate successful AI-assisted development from frustrating experiences with hallucinated or unusable code.
I began by helping Ollie articulate the project requirements clearly. Rather than starting with “build me an app,” we broke down the specification into discrete, testable components:
- Obtain user’s geographical location using browser APIs
- Load Mannbassador data from an existing spreadsheet
- Calculate distances between user location and all Mannbassadors
- Display results sorted by proximity
- Show appropriate actions (claim territory or view existing Mannbassador details)
- Link through to relevant podcast episodes
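To make the distance requirement concrete, here’s a minimal sketch of the great-circle (Haversine) calculation and proximity sort that such an app needs. The type and function names are illustrative rather than taken from Ollie’s actual code; only the twenty-kilometre radius comes from the original brief.

```typescript
// Illustrative sketch only: interface and function names are not from Ollie's actual code.
interface Mannbassador {
  name: string;
  latitude: number;
  longitude: number;
  episode: string; // e.g. "7/5"
}

// Great-circle distance between two points in kilometres (Haversine formula).
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const earthRadiusKm = 6371;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * earthRadiusKm * Math.asin(Math.sqrt(a));
}

// Sort Mannbassadors by proximity; the 20 km radius comes from the original brief.
function sortByProximity(user: { lat: number; lon: number }, all: Mannbassador[]) {
  return all
    .map((m) => ({ ...m, km: distanceKm(user.lat, user.lon, m.latitude, m.longitude) }))
    .sort((a, b) => a.km - b.km)
    .map((m) => ({ ...m, withinClaimedRadius: m.km <= 20 }));
}
```

The same radius check underpins whether a location is later presented as claimed or available.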
This decomposition provided the foundation for initial prompts. Ollie’s first interaction with Google AI Studio followed this structured approach:
“I want to build a web app that can tell a user, based on their location, how near they are to people in this spreadsheet. If they’re outside of a twenty-kilometre radius from any person in the spreadsheet, I want you to tell them how they can apply to become a Mannbassador.”
Within approximately four minutes, Google AI Studio generated a functional prototype. This initial version demonstrated the core concept but required refinement—images weren’t loading correctly, colours needed adjustment, and the episode linking functionality was missing. This outcome was exactly what I’d expected and, indeed, hoped for: a working foundation that needed iterative improvement rather than a perfect solution requiring no engagement.
Problem-Solving in Action: Images and Episode Links
The real learning occurred during the refinement process. On 4th February 2026, Ollie encountered his first significant obstacle: the podcast’s branding image wasn’t displaying in the application. He’d specified that the app should include The Modern Mann logo, but the generated code referenced a broken or inaccessible image path. This represented a perfect teaching moment about debugging and iterative problem-solving with AI tools.
Rather than providing a direct solution, I introduced Ollie to a powerful meta-prompting technique: asking the AI what he should tell it to solve the problem. This approach might seem paradoxical, but it leverages a crucial insight about AI systems: they often understand what information they need to provide better solutions, but users don’t always provide that information in initial prompts. By explicitly asking the AI to identify gaps in its understanding, you can surface requirements and constraints that might otherwise remain hidden.
Ollie applied this technique, prompting Google AI Studio with: “What questions would you ask me to help fix the image display issue?” The AI responded by requesting specific details: the image URL, preferred hosting method, desired dimensions, and fallback behaviour if the image failed to load. Armed with this guidance, Ollie provided more precise instructions, and the AI generated corrected code that properly referenced and displayed the logo.
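For readers curious what that kind of fix looks like in practice, a corrected logo component might look something like the sketch below; the URL, dimensions, and fallback text are placeholders rather than the values Ollie supplied.

```tsx
// Illustrative only: the logo URL, dimensions, and fallback text are placeholders,
// not the values from Ollie's app.
import { useState } from "react";

const LOGO_URL = "https://example.com/modern-mann-logo.png"; // placeholder URL

function Logo() {
  const [failed, setFailed] = useState(false);

  // Fallback behaviour: show plain text if the image cannot be loaded.
  if (failed) {
    return <span>The Modern Mann</span>;
  }

  return (
    <img
      src={LOGO_URL}
      alt="The Modern Mann logo"
      width={200}
      height={200}
      onError={() => setFailed(true)}
    />
  );
}
```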

The completed Mannbassadors app interface, showing location-based results with distances and episode links
The episode linking functionality presented a more complex challenge. Ollie wanted users to be able to click through to the specific podcast episode where each Mannbassador was featured. However, the podcast website doesn’t maintain a structured API or database that could be queried programmatically. This constraint required creative problem-solving that went beyond straightforward AI prompting.
Through exploratory investigation, we discovered that the podcast website included a search function. By examining how the search worked (visiting the search page and observing URL patterns), I identified that searches could be performed by appending query parameters to the URL. For example, searching for episode “7/5” could be accomplished by constructing the URL https://www.modernmann.co.uk/search?q=7%2F5.
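In code, the discovery reduces to URL-encoding the episode number and appending it as a query parameter. A minimal sketch, with an illustrative function name:

```typescript
// The episode number "7/5" must be URL-encoded ("7%2F5") before it can be used
// as a query parameter. The function name here is illustrative.
function episodeSearchUrl(episodeNumber: string): string {
  return `https://www.modernmann.co.uk/search?q=${encodeURIComponent(episodeNumber)}`;
}

episodeSearchUrl("7/5"); // => "https://www.modernmann.co.uk/search?q=7%2F5"
```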
I shared this discovery with Ollie, explaining the concept of HTTP query parameters and URL encoding. Rather than building this functionality for him, I provided a prompt template he could adapt:
“Take the episode number, for example 7/5, and search The Modern Mann website using the search page. You can get to the search page by heading to https://www.modernmann.co.uk/search, and if you add ‘?q=’ then the episode number, you’ll get a list of episodes that match. Find the correct link to the episode from that search page and link directly to it.”
This guidance demonstrated another crucial aspect of AI-assisted development: sometimes you need to perform investigative work before prompting. AI tools can generate code, but they can’t magically understand undocumented systems or discover implementation details that aren’t publicly specified. Strategic problem-solving involves knowing when to research, when to prompt, and how to combine both approaches effectively.
ℹ️ Note
When working with AI coding tools, there’s a constant balance between providing too much detail (which can confuse the AI or lead to over-complicated solutions) and providing too little (which results in generic code that doesn’t meet specific requirements). Effective prompting finds the middle ground: clear objectives, relevant constraints, and enough context for the AI to make informed implementation decisions.
The Result: From Zero to Production
By 10th February 2026, Ollie had completed a fully functional web application that exceeded the original challenge requirements. The Mannbassadors app, built entirely through AI-assisted development by someone with zero coding experience, demonstrates several impressive capabilities:
Location-Based Functionality: The app requests the user’s location through the browser’s Geolocation API, then calculates distances to all Mannbassadors in the database. Results display sorted by proximity, showing the nearest community members within range.
Responsive Design: The interface adapts cleanly to different screen sizes, working equally well on desktop and mobile devices. The visual design uses card-based layouts that present information clearly without overwhelming users.
Dynamic Data Integration: Rather than requiring manual updates, the app pulls Mannbassador information directly from a Google Sheets document. This approach means the community can grow without requiring code changes or redeployment.
Episode Discovery: Each Mannbassador card includes a “Find Episode” button that links through to the relevant podcast episode using the search functionality we identified, allowing users to listen to the episode where that Mannbassador was featured.
Territory Claiming: For users in locations without existing Mannbassadors, the app displays an application form, encouraging community growth whilst providing appropriate calls-to-action.
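To illustrate how the first and third of these capabilities might fit together, here’s a hedged sketch of requesting the user’s position and loading the Mannbassador data. The Google Sheets CSV-export URL pattern and every name in it are assumptions for illustration, not confirmed details of the deployed app.

```typescript
// Illustrative sketch only: names and the sheet URL are placeholders.

// Wrap the callback-based Geolocation API in a Promise.
function getUserLocation(): Promise<{ lat: number; lon: number }> {
  return new Promise((resolve, reject) => {
    if (!("geolocation" in navigator)) {
      reject(new Error("Geolocation is not supported by this browser"));
      return;
    }
    navigator.geolocation.getCurrentPosition(
      (pos) => resolve({ lat: pos.coords.latitude, lon: pos.coords.longitude }),
      (err) => reject(err)
    );
  });
}

// One common no-backend approach: publish the Google Sheet to the web and fetch it as CSV.
// The sheet ID below is a placeholder.
const SHEET_CSV_URL =
  "https://docs.google.com/spreadsheets/d/SHEET_ID/export?format=csv";

async function loadMannbassadorRows(): Promise<string[][]> {
  const response = await fetch(SHEET_CSV_URL);
  if (!response.ok) {
    throw new Error(`Failed to load sheet: ${response.status}`);
  }
  const csv = await response.text();
  // Naive CSV split; a real app would use a proper parser for quoted fields.
  return csv.trim().split("\n").map((row) => row.split(","));
}
```

Because the data is fetched at runtime, adding a row to the spreadsheet is enough for a new Mannbassador to appear in the app.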
The complete development timeline spanned approximately one month from initial contact to deployment, with total active development time estimated at under ten hours. This efficiency came not from Ollie’s coding ability (which remained zero throughout) but from strategic tool selection, effective prompting methodology, and iterative problem-solving guidance.

Google AI Studio’s integrated development environment, showing how the app was built through natural language prompts
The Broader Implications: Industry Transformation
The success of this project illuminates broader trends transforming the software development industry. During our conversations, Ollie asked perceptive questions about the impact of AI-assisted development on professional developers and the industry at large. These questions deserve substantive answers, as they affect anyone involved in technology, from developers to business leaders making hiring decisions.
For experienced developers like myself (with over 20 years in the industry), AI tools represent opportunity rather than threat—but only for those who invest time in learning how to use them effectively. I’ve spent four years seriously studying AI-assisted development, including conversations with researchers at Microsoft, OpenAI, and Anthropic. This investment has fundamentally changed how I work. Before August 2025, I wrote code manually, using AI tools primarily for brainstorming and problem decomposition. After August 2025, where permitted by my clients, I orchestrate multiple AI agents working on different aspects of projects simultaneously, reviewing their output and providing strategic guidance rather than typing implementation details.
The real danger exists for two groups. First, experienced developers who refuse to adapt or lack time to learn new methodologies risk finding themselves increasingly unmarketable. The industry is rapidly moving towards valuing developers who can effectively guide AI systems over those who can type syntax quickly. Second, junior developers entering the field face a paradox: AI tools can generate code, but without foundational understanding, they cannot evaluate whether that code is correct, secure, or maintainable.
This creates an emerging professional role that I describe as “Editor-in-Chief” rather than “Author.” Pre-AI development primarily involved writing code; post-AI development increasingly involves prompting AI systems, reviewing generated code, identifying improvements, and iteratively refining outputs. The skill set shifts from syntax knowledge to strategic thinking: understanding system architecture, recognising code quality issues, and knowing how to guide AI tools towards better solutions.
❓ What About Code Ownership and Quality?
Two critical concerns arise with AI-generated code: intellectual property ownership and code quality. Some jurisdictions maintain ambiguous positions on whether code generated by AI tools can be copyrighted or who owns such code. This creates potential issues for startups built on AI-generated codebases when attempting to sell or raise investment.
Additionally, AI-generated code quality varies significantly based on prompt quality and problem complexity. A cottage industry has emerged (companies like Ulam Labs) specifically to refactor and improve AI-generated applications before they can pass technical due diligence reviews. For hobby projects, this matters little; for business-critical applications or startups seeking acquisition, it becomes essential.
Strategic Value for Businesses
From a business perspective, this case study demonstrates several strategic insights relevant to organisations considering AI adoption or evaluating technology leadership needs. The value I provided wasn’t implementation; I never wrote code or directly interacted with the application. Instead, I delivered strategic guidance that enabled Ollie to achieve his goal independently: tool selection, methodology teaching, and problem-solving approaches.
This engagement model mirrors how fractional CTO services deliver value to growing businesses. Rather than becoming a permanent overhead or creating dependency, strategic technology leadership builds internal capability. By teaching effective approaches rather than providing hands-on implementation, organisations develop lasting skills that continue delivering value long after the engagement ends.
For businesses exploring AI adoption, this project offers a practical template. The barriers to entry for AI-assisted development have dropped dramatically, but success still requires understanding how to prompt effectively, when to use different tools, and how to break down problems into manageable components. Strategic guidance accelerates this learning curve whilst avoiding costly false starts or abandoned prototypes.
The timeline and cost structure are particularly relevant. One month from concept to production, with effectively zero incremental cost (Ollie used an existing subscription) and minimal time investment, demonstrates that AI-assisted development can deliver rapid value without requiring enterprise budgets. However, this efficiency came from strategic guidance that helped Ollie avoid common pitfalls: selecting inappropriate tools, writing unclear prompts, or abandoning the project when encountering obstacles.
Looking Forward: The Future of Development
During our final conversation, Ollie asked whether the next major application or startup would be built through vibe coding. My answer: almost certainly, at least partially. We’re already seeing examples like OpenClaw (an AI agent platform that has gained significant attention in technology circles) that appear to have been substantially built through AI-assisted development, though security concerns around its default configuration have limited adoption.
The hiring landscape is already shifting. I’m seeing clients begin to hire developers specifically to oversee AI agents rather than write code directly. Training programmes are emerging to help organisations transition existing developers into “Editor-in-Chief” roles that emphasise AI guidance over manual coding. Some of my clients explicitly require approval before I use AI tools (though most now encourage it), reflecting ongoing uncertainty about intellectual property and code ownership questions.
For Ollie’s immediate needs, he asked about deploying the application for public use. The answer demonstrated how AI-assisted development integrates with existing platforms: he could ask Google AI Studio for instructions on embedding the React application into his existing podcast website. The AI understands deployment patterns and can generate step-by-step guidance customised to specific hosting environments. This capability transforms what would traditionally require consulting a developer into a self-service task achievable through appropriate prompting.
The episode featuring this challenge aired on 10th February 2026, with Ollie and host Olly Mann discussing the implications for web development and professional coding. Their conversation captured both excitement about democratised development capabilities and appropriate caution about quality, security, and professional expertise. This balanced perspective reflects the current industry reality: AI tools are genuinely transformative, but they augment rather than replace human judgment, particularly for production systems where reliability and security matter.
Key Takeaways for Technology Leaders
This case study offers several lessons for businesses evaluating AI adoption or seeking strategic technology guidance:
Strategic guidance accelerates learning: Teaching methodology and approach delivers more lasting value than providing implementation. Ollie can now build other applications using the same techniques, whereas if I’d built this for him, he’d remain dependent on external expertise for future projects.
Technology choices matter less than approach: The success came not from selecting the “best” AI tool but from matching tool capabilities to user needs and constraints. Strategic technology leadership involves understanding contexts, not promoting universal solutions.
AI tools democratise development but don’t eliminate expertise needs: Non-developers can build functional applications, but production-quality systems still benefit from professional review. The cottage industry for AI code cleanup exists because generating code and generating maintainable, secure, business-ready code remain different challenges.
The industry is transforming rapidly: Organisations that adapt hiring practices, develop AI-assisted development capabilities, and invest in training will gain competitive advantages over those clinging to traditional development models. However, this transition requires strategic planning rather than reactive adoption.
Effective prompting is a learnable skill: The difference between frustrating AI interactions and productive ones often comes down to understanding how to provide context, break down problems, and iterate on solutions. This skill can be taught, making it valuable to include in professional development programmes.
For businesses facing similar challenges—whether building new capabilities, evaluating technology investments, or developing team skills—strategic technology consultation can accelerate success whilst avoiding costly missteps. The investment in strategic guidance often delivers returns far exceeding the direct cost, as demonstrated by Ollie’s ability to build a production application without any development background.
This case study represents actual work completed in January-February 2026. The application remains live and in use by The Modern Mann podcast community. If your organisation is exploring AI adoption, developing internal capabilities, or seeking strategic technology guidance, please visit our services page to learn how we can help accelerate your success.
