Ever wrestled with an LLM error that left you scratching your head? You're not alone.
Large Language Models (LLMs) promise to revolutionize software development, but their probabilistic nature introduces new challenges: LLMs are much better at analogical reasoning than at formal logic. In this article, we tackle these challenges head-on with five practical tips for navigating common LLM errors when programming. Let's dive in.
1. Variable Names are Critical
Appropriate variable names act as reminders for the LLM, clarifying a variable's purpose each time it is referenced within the model's attention window. For instance, using 'productId' instead of 'id' eliminates the risk of confusing it with another entity's ID, such as a category's. Interestingly, longer, more descriptive variable names often lead to lower token counts: for example, 'productId' is one token, while 'pId' is two.
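As a minimal sketch (the Product type and findProductByID function here are invented purely for illustration), descriptive names keep the two kinds of IDs from ever being mixed up:

```go
package main

import "fmt"

// Product IDs and category IDs are both plain integers, so descriptive
// names are the only thing preventing them from being confused.
type Product struct {
	ProductID  int
	CategoryID int
	Name       string
}

// findProductByID uses 'productID' rather than a bare 'id', so every later
// reference reminds the reader (and the LLM) which entity it refers to.
func findProductByID(products []Product, productID int) (Product, bool) {
	for _, product := range products {
		if product.ProductID == productID {
			return product, true
		}
	}
	return Product{}, false
}

func main() {
	catalog := []Product{{ProductID: 42, CategoryID: 7, Name: "Keyboard"}}
	if product, ok := findProductByID(catalog, 42); ok {
		fmt.Println(product.Name)
	}
}
```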
2. Comments Rule the Land
LLMs generate a multitude of comments, some of which look trivial at first glance. However, these comments clarify the intent of the code, irrespective of the programming language. That clarity not only helps the LLM generate accurate code but also makes mistakes easier to spot during code review. The comments can be removed in a final pass to avoid clutter.
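As a sketch (the pricing rules below are made up for the example), intent-level comments state the expected behaviour before any logic is read, which is exactly what makes a wrong line stand out:

```go
package main

import "fmt"

// applyDiscount returns the final price for an order.
// Intent: orders of 10 or more items get a 5% bulk discount,
// and the result is never allowed to go below zero.
func applyDiscount(unitPrice float64, quantity int) float64 {
	// Base cost before any discount.
	total := unitPrice * float64(quantity)

	// Bulk discount: 5% off for 10+ items.
	if quantity >= 10 {
		total *= 0.95
	}

	// Guard against negative totals from bad inputs.
	if total < 0 {
		return 0
	}
	return total
}

func main() {
	fmt.Printf("%.2f\n", applyDiscount(3.50, 12)) // prints 39.90
}
```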
3. Understand Your Language's Landscape
Opinionated languages with a widely used standard library and well-established design patterns work best with LLMs; Golang, for instance, is an ideal fit. When a language offers a wide range of styles and libraries, specify your choices as part of the prompt to ensure consistency: your preferred database driver or ORM, web framework, unit testing framework, error handling approach, comment style, and import structure. Providing these guidelines to the LLM also benefits human developers.
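For example, if your team standardizes on wrapped errors and the standard library's net/http rather than a framework, a short reference snippet like the following (the handler and names are hypothetical) can accompany the prompt so the LLM mirrors your conventions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

var errEmptyName = fmt.Errorf("name must not be empty")

// Convention: wrap errors with %w and add context, never return them bare.
func loadGreeting(name string) (string, error) {
	if name == "" {
		return "", fmt.Errorf("loadGreeting: %w", errEmptyName)
	}
	return "Hello, " + name, nil
}

// Convention: plain net/http handlers and encoding/json, no web framework.
func greetHandler(w http.ResponseWriter, r *http.Request) {
	greeting, err := loadGreeting(r.URL.Query().Get("name"))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(map[string]string{"greeting": greeting})
}

func main() {
	http.HandleFunc("/greet", greetHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```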
4. Apply Software Architecture Principles
When programming with LLMs, apply traditional software architecture and design principles. Use intermediate functions, classes, structs, or types to tame complexity, and give them readable names. This structure plays to the LLM's analogical reasoning, making it easier to generate correct code and preventing it from spiraling into intricate, hard-to-follow output.
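As an illustrative sketch (the order-processing domain here is invented for the example), small named types and intermediate functions give the LLM clear, familiar shapes to follow instead of one tangled block:

```go
package main

import "fmt"

// Small, named types keep each concept distinct.
type LineItem struct {
	Description string
	UnitPrice   float64
	Quantity    int
}

type OrderSummary struct {
	Subtotal float64
	Tax      float64
	Total    float64
}

// Each intermediate function does one readable step.
func subtotal(items []LineItem) float64 {
	var sum float64
	for _, item := range items {
		sum += item.UnitPrice * float64(item.Quantity)
	}
	return sum
}

func addTax(amount, rate float64) float64 {
	return amount * rate
}

// summarizeOrder composes the small steps instead of inlining everything.
func summarizeOrder(items []LineItem, taxRate float64) OrderSummary {
	sub := subtotal(items)
	tax := addTax(sub, taxRate)
	return OrderSummary{Subtotal: sub, Tax: tax, Total: sub + tax}
}

func main() {
	items := []LineItem{{Description: "Notebook", UnitPrice: 2.50, Quantity: 4}}
	fmt.Printf("%+v\n", summarizeOrder(items, 0.08))
}
```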
5. Maintain a Clean Context Window
LLMs consider everything in their context window when generating code, so any mistakes can lead to further inaccuracies. Be ruthless in correcting wrong code and restart the conversation with a fresh context when necessary. This approach ensures the LLM works with exactly the information it needs, leading to more accurate and efficient code generation.
Hopefully these tips will help you avoid errors when programming with LLMs. Start applying them today!