AI-Driven Development Meets Web Components: A Pragmatic Look at LLM Integration in Software Engineering

Jorge Casar took an unusual approach to conference slides: instead of a deck, he presented an RPG game, built initially with Google's AI Studio. The project began with "vibe coding," in which an LLM translated natural-language prompts into React code. Casar, a proponent of Web Components with 11 years of experience, later migrated the React codebase to Web Components using Antigravity, citing a preference for the standards-based approach. This initial phase demonstrated AI's strength at rapid prototyping and code generation, and its potential to jumpstart development workflows.
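The standards-based approach Casar migrated to can be sketched as a minimal custom element. The `<game-dialog>` tag name and its rendering are illustrative assumptions, not the talk's actual code:

```javascript
// Minimal custom element sketch of the standards-based approach.
// The <game-dialog> element and its markup are illustrative assumptions.
// The HTMLElement guard only keeps this sketch evaluable outside a browser.
const Base = typeof HTMLElement === 'undefined' ? class {} : HTMLElement;

class GameDialog extends Base {
  // Attributes listed here trigger attributeChangedCallback when they change.
  static get observedAttributes() {
    return ['text'];
  }

  connectedCallback() {
    // Shadow DOM isolates the component's markup and styles from the page.
    this.attachShadow({ mode: 'open' });
    this.render();
  }

  attributeChangedCallback() {
    if (this.shadowRoot) this.render();
  }

  render() {
    this.shadowRoot.innerHTML = `<p>${this.getAttribute('text') ?? ''}</p>`;
  }
}

// Register the element so <game-dialog text="..."> works in any HTML page,
// with no framework runtime required.
if (typeof customElements !== 'undefined') {
  customElements.define('game-dialog', GameDialog);
}
```

In a browser, `<game-dialog text="Hello, adventurer"></game-dialog>` then renders its text inside an isolated shadow root; this framework-independence is the usual argument for migrating from React components to the web standard.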

However, the AI-driven workflow quickly revealed significant challenges. Casar noted that while the AI could generate code, it struggled with refactoring, often breaking existing functionality and deviating from architectural principles despite explicit prompts for clean code. To mitigate these issues, he introduced robust testing and explicitly defined the project's architecture, and observed that the test count declined as the architecture matured. Even with explicit instructions, continuous human validation (the "human in the loop") remained critical to prevent the AI from introducing regressions or functional inconsistencies.

Casar also explored advanced browser capabilities, integrating a local AI model with the Web Speech API for real-time speech-to-text and text-to-speech interactions within the game.

His key takeaways: give the AI clear context, test diligently, define the architecture explicitly, and keep reviewing and understanding the generated code. AI currently augments, rather than replaces, human software engineers.
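The browser speech integration mentioned above can be sketched with the standard Web Speech API: `speechSynthesis` for text-to-speech and `SpeechRecognition` (webkit-prefixed in Chromium browsers) for speech-to-text. The wiring and the command format below are illustrative assumptions, not the talk's actual code:

```javascript
// Text-to-speech: queue an utterance on the browser's synthesis engine.
function speak(text, lang = 'en-US') {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang;
  speechSynthesis.speak(utterance);
}

// Speech-to-text: run one recognition pass and pass the transcript to a callback.
function listen(onTranscript, lang = 'en-US') {
  const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.lang = lang;
  recognition.interimResults = false; // deliver only the final transcript
  recognition.onresult = (event) => onTranscript(event.results[0][0].transcript);
  recognition.start();
}

// Pure helper (an assumption about the game's command format): normalize
// a transcript into a command token, e.g. ' Attack Goblin ' -> 'attack_goblin'.
function toCommand(transcript) {
  return transcript.trim().toLowerCase().replace(/\s+/g, '_');
}
```

A game loop might then call `listen((t) => dispatch(toCommand(t)))` to turn spoken input into game commands, and `speak(npcLine)` to voice responses; the local AI model would sit between the transcript and the reply.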