What is the difference between causal inference and mere association between variables? I LLM. How are AI and Generative AI used in detecting early signs of heart rejection in heart transplant cases? Can we detect such signals from an endomyocardial biopsy (EMB)? I LLM. What is a whole-slide histopathological image? I LLM. How is a vehicle-actuated control (VAC) system used in adaptive traffic signalling? I LLM. I fall short of LLM-ing for the following: 1) Given my current state, do I need to use the washroom? 2) I am running two hours late. My wife is angry at the repeat offence. Which area must I blame for congestion to sound believable? 3) My boss is calling. Should I take the call or pretend the phone was in silent mode? The scenarios above involve me seeking answers from LLMs. I am not yet referring to the scenarios where LLMs proactively send me relevant notifications, even before I know what I need to search for or, to be precise in machine learning terms, what I need to generate.

LLMs are omnipresent, omniscient, and hence, omnipotent. Developing numerous applications that utilise LLMs to assist us in fulfilling our tasks and requirements is the apparent path forward. And who helps us in creating those applications, from ideation to code generation? LLMs. The activity of using an LLM-powered editor to write code based on our prompts or instructions has become known as vibe coding. We cannot simply provide a high-level requirement and expect to see the final product built end to end. Vibe coding requires us to define our starting point, break down high-level requirements into subtasks, iterate to address errors and misses that the LLM might have introduced, and continuously guide the LLM towards the final product.
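The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a real integration: the `ask_llm` function and its canned responses are hypothetical stand-ins for an actual LLM API call, and the two-step refinement is invented for the example.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    The canned responses below are illustrative only; a real
    implementation would call an actual model endpoint.
    """
    canned = {
        "Write a Python function that reverses a string.":
            "def reverse(s): return s[::-1]",
        "The function fails on None input. Please handle it.":
            "def reverse(s): return s[::-1] if s is not None else ''",
    }
    return canned.get(prompt, "# no answer")

def vibe_code(subtasks: list[str]) -> list[str]:
    """One specific prompt per subtask; each draft refines the last."""
    drafts = []
    for prompt in subtasks:
        drafts.append(ask_llm(prompt))  # guide the LLM step by step
    return drafts

# Break the high-level requirement into subtasks, then iterate on
# the errors the LLM introduced along the way:
drafts = vibe_code([
    "Write a Python function that reverses a string.",
    "The function fails on None input. Please handle it.",
])
final = drafts[-1]  # the guided, refined version
```

The point of the sketch is the shape of the workflow, not the code it produces: the human decomposes the requirement, inspects each draft, and feeds the observed failure back as the next prompt.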

Don’t be misled by my use of “final product”. More often than not, the output we get will be suitable for an excellent proof-of-concept demonstration. Building production-ready applications will take more than a little vibe coding, at least for now. We don’t mind; let’s focus instead on whether the requirements are on track to be fulfilled. The dilemma: given an excellent coder who lacks vibe coding skills and an average coder with excellent vibe coding skills, whom do we prefer? Don’t tell me we don’t have preferences in the corporate world.

A few suggestions that might help you with vibe coding: 1) Ask for one specific thing at a time. 2) Don’t be sold every time the LLM claims it has identified the issue. 3) The LLM might be the engine of the car, but you are the one driving it. 4) Don’t hesitate to push the boundaries of what vibe coding can do, as LLMs are becoming smarter by the week. 5) Making changes to a user interface screen through vibe coding is more straightforward than making complex functional changes. 6) Vibe coding is “vide coding”. Always request references and additional information.

This is not LLMs vs. humans. This is LLMs and humans. LLMs are not replacements but collaborators, reshaping developers into orchestra conductors of code. The iterative refinement mirrors an evolving partnership, one where the average coder with vibe coding prowess might outshine the rigid expert, not through raw skill, but through adaptability in harnessing the vast potential of generative AI. While today’s LLMs craft proof-of-concept marvels rather than production-ready titans, they democratise innovation, turning vague ideas into tangible prototypes with startling speed. Yet this power demands vigilance. Every hallucination check and boundary-pushing prompt reminds us that AI is not the destination; it is a co-passenger. The future belongs to us all. As vibe coding evolves from niche practice to mainstream craft, it challenges us to rethink expertise itself. As I have always said, mastery lies not in knowing all the answers but in asking the right questions. Bon app.

Disclaimer

Views expressed above are the author's own.
