GO GO GOLEMS

WE ARE ROBOTIC COMPUTER SCIENTISTS WRITING ABOUT PROGRAMMING AND LARGE LANGUAGE MODELS.

1y ago
Struggling with LLM Errors? Here are 5 Must-Know Tips for Writing Bug-Free Code with LLMs

Ever wrestled with an LLM error that left you scratching your head? You're not alone.

Large Language Models (LLMs) promise to revolutionize software development, but they also introduce new challenges due to their probabilistic nature: LLMs are much better at analogical reasoning than formal logic. In this article, we're tackling these challenges head-on. We'll provide five practical tips to navigate common LLM errors when programming. Let's dive in.

1. Variable Names are Critical

Appropriate variable names act as reminders for the LLM, clarifying their purpose each time they are referenced (thanks to the attention window). For instance, using 'productId' instead of 'id' eliminates the risk of confusing it with another entity's ID, like a category. Interestingly, longer, more descriptive variable names often lead to lower token counts. For example, 'productId' is one token, while 'pId' is two.
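A minimal sketch of this idea (the catalog and names are made up for illustration): the descriptive name restates its role at every reference, so neither the model nor the reader has to guess which entity an ID belongs to.

```python
# Hypothetical catalog used for illustration.
PRICES = {"prod-42": 100.0}

# Compare a bare `id` parameter: here, `product_id` reminds the model
# at every reference that this is a product's ID, not a category's.
def apply_discount(product_id: str, discount_pct: float) -> float:
    """Return the discounted price for the given product."""
    base_price = PRICES[product_id]
    return base_price * (1 - discount_pct / 100)
```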

2. Comments Rule the Land

LLMs generate a multitude of comments, some trivial at first glance. However, these comments clarify the intent of the code, irrespective of the programming language. This clarity not only helps the LLM generate accurate code but also helps identify mistakes during code review sessions. These comments can be removed in a final pass to avoid clutter.

3. Understand your language's landscape

Languages with strong opinions about their code and a widely used standard library or design patterns work best with LLMs. For instance, Golang is an ideal fit. When languages have a wide range of styles and libraries to choose from, specify those as part of the prompt to ensure consistency. This could include your preferred database driver/ORM, web framework, unit testing framework, error handling method, comment style, and import structure. Providing these guidelines to the LLM also benefits human developers.

4. Apply Software Architecture Principles

When programming with LLMs, apply traditional software architecture and design principles. Use intermediate functions, classes, structs, or types to tame complexity--remember to use readable names. This approach makes it easier for LLMs, with their analogical reasoning, to generate correct code, preventing them from spiraling into intricate, hard-to-follow code.
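As a sketch (the order-total logic is a made-up example): each intermediate function has one readable job, so the model can reason about each step by analogy instead of tracking one tangled block.

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    unit_price: float
    quantity: int

def line_item_subtotal(item: LineItem) -> float:
    # One small, nameable step instead of inline arithmetic everywhere.
    return item.unit_price * item.quantity

def order_subtotal(items: list[LineItem]) -> float:
    return sum(line_item_subtotal(item) for item in items)

def order_total(items: list[LineItem], tax_rate: float) -> float:
    # The top-level function reads like its plain-language description.
    return order_subtotal(items) * (1 + tax_rate)
```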

5. Maintain a Clean Context Window

LLMs consider everything in their context window when generating code, so any mistakes can lead to further inaccuracies. Be ruthless in correcting wrong code and restart the conversation with a fresh context when necessary. This approach ensures the LLM works with exactly the information it needs, leading to more accurate and efficient code generation.

Hopefully these tips will help you avoid errors when programming with LLMs. Start applying them today!

Atomic Essay

Testing Smarter, Not Harder: Using LLMs for Better Unit Tests

Unit tests enable bold refactors and protect against subtle errors introduced by Large Language Models. However, while unit testing strengthens codebases for the long haul, it drains developer bandwidth with rote work.

In this article, we'll explore tactics to leverage LLMs as a unit testing sidekick - taking care of the grunt work so you can focus on the good stuff.

Craft test blueprints

Following the funnel method, we leverage LLMs in crafting unit tests by transforming the code we want into an actionable unit testing plan.

Once you have a testing plan that you are happy with, it is time to take a step back (or a short walk around the block) and make sure that you have taken everything into account.

Reviewing the test blueprints

Reviewing crafted test blueprints is pivotal to ensure they capture the essence and intricacies of your target code. This blueprint will be used for actual code generation; it is crucial to ensure it is concise, correct and exhaustive.

  • Begin by manually scrutinizing each test, checking that it indeed makes sense. Don't hesitate to manually edit the blueprint.

  • When faced with complex tests, don't hesitate to ask the LLM for clarifications on their purpose and expected outcomes; this not only aids comprehension but also doubles as invaluable documentation. It will also enhance the LLM's ability to generate proper code (transcript).

Ready to bring the blueprint to life? Dive into the next section: creating a testing skeleton.

Implementing the tests

After crafting a strong list of unit tests, it's time to let the model write the actual code.

  • Set a consistent style by generating a list of unit test signatures and brief descriptions (remember how self-attention works).

  • Guide the model by providing a template example that shows which framework, checks, and mock objects to use. This will ensure the upcoming tests align with existing code. (it's often beneficial to have the model produce table-driven tests for uniformity).

  • Always ask for clear error messages and reasons for test failure, making failed tests easy to decode.

  • Don't ask the model to implement all tests at once. Instead, iterate through the signatures, and use the same template each time to ensure consistency.

  • Make sure the template implementation is spotless. This will improve the quality of future implementations (transcript, note the LLM-esque off by one in the key iteration).
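The steps above can be sketched as a template to hand the model (`slugify` is a made-up function under test): a table-driven test in Python's stdlib `unittest`, with one uniform table of cases for the model to extend row by row and clear messages on failure.

```python
import unittest

def slugify(title: str) -> str:
    # Hypothetical function under test.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Each row: (description, input, expected). One uniform place for all cases.
    CASES = [
        ("simple title", "Hello World", "hello-world"),
        ("leading whitespace", "  leading space", "leading-space"),
        ("already slugged", "already-slugged", "already-slugged"),
    ]

    def test_slugify_table(self):
        for description, title, expected in self.CASES:
            with self.subTest(description):
                # Failure message names the row and the reason.
                self.assertEqual(
                    slugify(title), expected,
                    f"{description}: slugify({title!r}) should be {expected!r}")
```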

With straightforward tests and clear error info, there's no need to spend hours sifting through test code when issues arise.

The future of unit testing

LLMs' convenience means that even when faced with tight deadlines, there's always time for thorough tests: a short investment of 10-15 minutes can ensure foundational coverage.

This makes me pretty confident saying that large language models will revolutionize unit testing.

Remember, the aim isn't to let LLMs fully take the reins, but to utilize them as invaluable colleagues.

Using ChatGPT to Write Software Documentation That Developers Actually Want to Read

For too long, software documentation has been an afterthought.

We've all encountered software documentation that feels more like a last-minute addition rather than an integral part of the design process. It's no surprise, really. Crafting quality documentation requires skill, time, and a knack for writing. From high-level overviews to how-to guides and references, keeping everything updated can be a daunting task.

But what if I told you that ChatGPT makes documentation work easier and more enjoyable? I think this is bound to transform software engineering.

Write documentation for the user

On the ladder of language, writing documentation is moving up from the code level to a more human level.

Documentation should anticipate questions and explain concepts clearly. GPT-4 in particular is able to reason well enough to write documentation useful to the user. I ask it to skip over explaining the inner workings and instead to be exhaustive yet concise. A typical prompt might be, "Create documentation that guides a new developer on using the code, highlighting edge cases and crucial details."

Always consider your audience when crafting documentation. Then, ask ChatGPT to tailor your content to meet the reader's needs.

Targeting the different types of documentation

ChatGPT is a versatile tool that can help you create various types of documentation. Here are 4 ways:

  • For tutorials and how-tos, merge existing documentation, third-party library documentation, and your shell history. Ask ChatGPT to create step-by-step guides for common operations in your codebase.

  • For reference material, prompt ChatGPT to produce documentation in a structured format like JSON. Repeat this for each section to ensure consistency. This also makes it easy to index and render the documentation for your website.

  • Use a diff and commit history to craft a compelling pull request description. You'd be surprised at how much ChatGPT can infer from a few diff lines.

  • When working with existing documentation and actual code, ask ChatGPT to write accompanying examples. Direct examples often prove to be the most useful part of documentation.
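For the reference-material tip above, here is a hypothetical structured shape you might request (the field names are assumptions, not a real schema), so that every section comes back uniform and easy to index and render:

```python
import json

# A made-up reference-entry shape to request from ChatGPT, repeated
# for each documented function so the output stays consistent.
reference_entry = {
    "name": "connect",
    "signature": "connect(host, port, timeout=30)",
    "summary": "Open a connection to the given host.",
    "parameters": [
        {"name": "host", "type": "str", "description": "Server hostname."},
        {"name": "port", "type": "int", "description": "Server port."},
        {"name": "timeout", "type": "int", "description": "Seconds before giving up."},
    ],
    "examples": ["conn = connect('db.local', 5432)"],
}
print(json.dumps(reference_entry, indent=2))
```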

So, why wait? It's time to transform your documentation into something your colleagues actually enjoy reading!

Write Boldly and Error-Free with The ChatGPT "Funnel" Method

Meet the 'Funnel' method: the Swiss Army Knife of ChatGPT prompting.

Marrying the ladder of language with the compress/expand technique, it's your go-to for every ChatGPT interaction. Dive in and master error-catching, long content management, and consistent writing—in just three steps.

1. Put text into the funnel

The Funnel method transforms verbose language into concise statements by "pressing it" into a funnel.

Ask ChatGPT to convert complex inputs—like jumbled text or meeting transcripts—into succinct bullet points, ensuring no detail is overlooked. The bullet point format strips the language bare: no more stylistic constructs, implied context, messy grammar, or confusing turns of phrase.

Other intermediate formats are lists of keywords and categories, sentences of the form SUBJECT VERB OBJECT, RDF triples or Prolog facts. All of these have the benefit of reducing token count, making it easier for the model to take everything into account.
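For instance (the meeting outcome below is made up), the same fact at different rungs of compression:

```python
# Verbose input, as it might appear in a meeting transcript.
sentence = ("After a long discussion, the team finally agreed that the "
            "billing service should retry failed payments three times.")

# Funnel output 1: stripped-down bullet points.
bullet_points = [
    "billing service retries failed payments",
    "retry count: 3",
]

# Funnel output 2: the same facts as SUBJECT VERB OBJECT triples.
triples = [
    ("billing-service", "retries", "failed-payments"),
    ("retry-count", "is", "3"),
]
```

Either compressed form carries the decision in a fraction of the tokens of the original sentence.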

2. Verify, refine and edit

Now, it's simpler to ensure your list captures all essentials. To hone it:

  • Validate each point.

  • Spot and fill any gaps.

  • Use ChatGPT for enhancements.

  • Adjust manually to your liking.

A polished list not only improves clarity but also serves as an effective reference for future notes.

3. Boldly rearrange your content

With a streamlined data format, manipulating content becomes straightforward. Your bullet list, paired with original or other sources, can be transformed into:

  • Topic slides.

  • Reports, essays, blogs, or tweet threads from rough drafts.

  • Flashcards.

  • Index entries.

  • A book's table of contents.

Where verbose language can muddle output, a concise format acts as a Swiss Army Knife allowing expansive text transformations.

Crushed by Legacy Code? Here's 4 Ways to use AI to Dig Yourself out of Technical Debt

Just because it's dusty doesn't mean it can't shine again.

Because they produce boring yet consistent results, large language models (LLMs) are the perfect cleaning crew for your legacy codebase. Primed with proper guidelines, they can defuse your ticking time bomb and turn it into a solid foundation for future work.

Here's 4 ways LLMs can help with modernizing your codebase.

Add type hints and docstrings to existing code

Who wouldn't want to have their code fully documented, in a consistent style?

To generate good docstrings, follow these steps:

  • Gather the code you want to document as well as relevant additional information (for example, the documentation of functions this code depends on).

  • Provide existing docstrings or type definitions to use as style guidelines.

  • Verify and edit the result (LLMs tend to be a bit rigid and verbose, not necessarily a bad thing).

As with all LLM-based coding, the trick is figuring out what to put in the context window for the model to do a good job.
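A sketch of the kind of output to aim for (`parse_duration` is a made-up example): type hints plus a docstring in one consistent style, the style sample you would paste as a guideline.

```python
def parse_duration(value: str) -> int:
    """Parse a duration like "5m" or "30s" into seconds.

    Args:
        value: A number followed by a unit, "s" or "m".

    Returns:
        The duration in seconds.

    Raises:
        ValueError: If the unit is unknown.
    """
    units = {"s": 1, "m": 60}
    number, unit = int(value[:-1]), value[-1]
    if unit not in units:
        raise ValueError(f"unknown unit: {unit}")
    return number * units[unit]
```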

Add unit tests

LLMs are a force multiplier for writing unit tests. They won't entirely remove the need to think hard about confusing legacy code, but they will do 90% of the work in most cases. As usual, priming the context window is key.

  • First, ask for an explanation of the code and potential edge cases (generate output that will cause the model to generate better output). Concise bullet-point lists using keywords are the best way to get a lot of information into a few tokens.

  • Ask for a list of tests (without code). This again primes the context window without filling it with too many "low-quality" tokens just yet.

  • Ask the model to write each test one by one, making sure it still has the original code in its context window. If needed, paste the code and explanation again.

  • Special tip: build table-driven tests to save tokens.

Modernize code with intelligent refactoring

Much more flexible than traditional refactoring tools, LLMs can deal with instructions such as "introduce a facade pattern", "group the arguments into a struct", "upgrade the class to use the new readonly parameter language feature" or "transform these for loops into a functional chain."

  • Make sure you have good unit tests in place. LLMs are fuzzy magic and can introduce bugs.

  • Paste your legacy code along with a complex refactoring instruction.

  • If consistent style is needed across refactors, priming with an example of a previous refactor helps.

This one is maybe my favorite. Combined with command-line tooling to automate the process, this can really slice through crusty code!
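One of the instructions above, "group the arguments into a struct", sketched by hand (hypothetical user-creation code) so you can check the LLM's version against the intended shape:

```python
from dataclasses import dataclass

# Before: a long, error-prone positional argument list.
def create_user_v1(name, email, is_admin, locale, timezone):
    return {"name": name, "email": email, "admin": is_admin,
            "locale": locale, "tz": timezone}

# After: related arguments grouped into one structure with defaults.
@dataclass
class UserSpec:
    name: str
    email: str
    is_admin: bool = False
    locale: str = "en"
    timezone: str = "UTC"

def create_user_v2(spec: UserSpec):
    return {"name": spec.name, "email": spec.email, "admin": spec.is_admin,
            "locale": spec.locale, "tz": spec.timezone}
```

Keeping both versions side by side for a moment makes it easy for the unit tests to assert they behave identically.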

Extract OpenAPI Specifications from a Chrome HAR Recording

Thanks to a well-hidden feature in Chrome, you can generate OpenAPI specifications, HTTP client code, and mock REST APIs in a heartbeat.

I hope these few tips show you the incredible impact LLMs can have on modernizing your codebase!

Writing Entire Programs with ChatGPT: Here's a Simple Strategy to Get the Best Results

Writing entire programs with ChatGPT is not easy.

The risks are too high: inconsistent style, using the wrong libraries, losing track over longer conversations. So what is the secret to building high-quality, maintainable and effective software with ChatGPT?

This article serves as an introduction to a topic we will examine in depth over the next couple of days: real-world programming with ChatGPT.

Solving longer tasks with ChatGPT: 2 essential skills

Solving longer tasks with ChatGPT requires two fundamental skills: decomposing the problem into steps, and providing the right context information for each step.

This is where your knowledge as a software engineer is crucial: decomposing a problem into steps, and knowing which information is required for each step, are skills that only come through experience and deep study.

From prototype to CLI tool: 3 steps to get high-quality results

I use the following 3 steps to create useful command-line applications (we will use "docstring", a tool that extracts structured data out of source code documentation strings, as an example):

  • Ask ChatGPT to create a prototype: I will often ask ChatGPT to create a prototype for a small task, and give it minimal guidance. I chose a task small enough (we will come back to what is small enough): "extract docstrings out of a file using a set of regexps". Note how I already decomposed the problem within that first step.

  • Adapt and refine code: Once I get the first version back, I will rework the code, either by hand or by continuing to prompt ChatGPT with small tasks. When I do hand edits, I paste my changes back into ChatGPT.

    • Fix bugs, extract methods, clean up datastructures

    • Ask ChatGPT to create unit tests, docstrings

    • Incrementally add more functionality. In the docstring case, for example: parse attributes within docstrings, extract a "Scanner" struct, support more languages

  • Transform into a known pattern: When I'm satisfied with my prototype (this is the stage "docstring" is at), I will transform it into a well-known application pattern, usually by pasting concise instructions. I usually transform my tools into glazed applications by using these instructions (we will come back to gathering context information and the tools I use soon).

This short excursion into building real-world programs is all you need to get started: decompose and transform, by providing the right context information.

Forget Prompt Engineering: This Obvious Habit is the Key to ChatGPT Mastery

Much prompt engineering advice online lacks concrete evidence. The reason: no one really knows how LLMs (Large Language Models like ChatGPT) work, nor what they are capable of. The best you can do is to discover things yourself.

The key to mastering ChatGPT is to consciously use it to enhance your life.

Thinking Outside of the Box: Solving Real-World Problems

It's easy to get stuck thinking ChatGPT can only help with a few tasks. This often comes from not thinking outside of the box.

Finding great ways to apply ChatGPT to a problem often requires creative thinking. Often, ChatGPT can help not by directly solving problems, but by automating necessary yet tedious tasks. This way, you can focus on what you enjoy and are uniquely good at.

For instance, ChatGPT can assist musicians not by writing music, but by organizing music sheets, noting recording tasks, or creating tour tech riders. This leaves all the more time to actually make music.

Not so Obvious: Remembering to Use ChatGPT

Too often, I forget to use ChatGPT to solve the problem in front of me.

Remembering to use LLMs requires practice. It's common to overlook that a problem can be solved with an LLM: as a software engineer, it seems obvious that I should engineer a solid prompt to help me review code, yet it took me 10 months to think of it!

Make it a habit to think "could I use ChatGPT for X?" and to write down all the ideas.

Thinking About an Idea Takes More Effort than Just Trying it Out

With ChatGPT, trying something out often takes less time than discussing it.

When you catch yourself talking (joking?) to colleagues about something ChatGPT could do, grab your phone, open your laptop and just try it out. Successes will add to your prompt arsenal, and failures will provide valuable insight, all written up in a transcript.

ChatGPT is not a one-trick pony but a versatile tool that can be applied to most tasks and questions; the hard part is remembering to use it!

Understanding ChatGPT's Main Strength: Moving Up and Down the "Ladder of Language"

You might think that a conversation with ChatGPT is just like a normal human conversation--after all, they're both in plain English (or, of course, whichever language you speak to it in).

Don't be fooled: talking to ChatGPT is really more like programming.

Introducing: the Ladder of Language

There is a widespread idea that computer code is complex and mysterious, full of inscrutable symbols and abstractions, but that's discounting human language's richness.

Programming requires everything to be stated explicitly: IF A THEN B ELSE C. It does not care much about vocabulary, and is thus replete with unnatural-sounding words like networkTokenManager, ctxRef or retVal. By contrast, human language has many shades of grey: context can be implied; a sentence might mean its opposite; word choice conveys information, emotion and intent.

I like to think of this as the "Ladder of Language:"

  • at the top, we have the rich and dense language that humans speak,

  • at the bottom, the rigid and structured code that computers understand.

ChatGPT's Language: Neither Human nor Machine

ChatGPT understands language differently than most computer programs.

For example, it is not very good at working within the rigid formal logic required by programming languages: it often introduces bugs or calls functions that don't exist. Similarly, it will often misunderstand something that any child would find obvious (it frequently trips over negation, for example).

ChatGPT sits at the middle of the ladder. It is king in the land of meeting notes and TPS reports; lord in the fief of resumes and social media articles; conqueror of the steppes of domain-specific languages and structured data.

ChatGPT is Best at Moving Text Up and Down the Ladder

Because large language models are decent at parsing the complex semantics of human language (something normal programs can't do well at all), yet also able to deftly generate mostly correct computer gobbledygook (something humans are neither great at nor very fond of), ChatGPT's main strength is pushing words up and down the ladder.

It excels at tasks like:

  • transforming natural language into JSON (moving down the ladder)

  • writing comments for existing code (moving up the ladder)

  • transforming a voice memo into meeting slides (moving down the ladder)

  • explaining a bug based on a stacktrace (moving up the ladder)

  • explaining how to solve a problem step by step (moving down the ladder)
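The first of these moves can be sketched concretely (the reminder schema below is a made-up example of what you might ask ChatGPT to produce, not a real API):

```python
import json

# Top of the ladder: rich, implicit human language.
natural_language = "Remind me to call Alice next Tuesday at 3pm about the invoice."

# Bottom of the ladder: the same request as rigid, structured JSON,
# using a hypothetical schema of our own choosing.
structured = {
    "action": "reminder",
    "contact": "Alice",
    "day": "Tuesday",
    "time": "15:00",
    "topic": "invoice",
}
print(json.dumps(structured, indent=2))
```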

Moving down the ladder keeps ChatGPT on track by constraining the meaning of words.

Moving up or down the ladder makes things easier for humans to understand: converting cryptic computer code into something more fluid; summarizing long, or just badly written human language into something more structured and clear.

When prompting, always think about how you want to move around the ladder.

Cutting Through the Clutter: Turning Lengthy ChatGPT Transcripts into Effective Notes

I used to avoid going through lengthy ChatGPT transcripts, until I learned how to turn them into valuable notes.

ChatGPT transcripts, often dismissed as repetitive and verbose, are in fact treasure troves of information just waiting to be reused. Storing them in ChatGPT's UI or dumping them into your notes might seem like a good idea, but it's a surefire way to ensure they gather digital dust.

But don't despair, I've got three tips to help you extract the most value from these conversations.

1. Don't Correct, Edit

Correcting ChatGPT when it errs can introduce confusion into the context window of the LLM.

However, ChatGPT's UI gives you the option to go back to previous messages and edit them. This essentially rewinds the conversation, creating an alternative branch (you can toggle back and forth using the arrow buttons).

The result? A sharper, streamlined transcript that cuts right to the chase.

2. Craft Summaries and Specialized Notes

At the end of a conversation, ask ChatGPT for a concise summary, neatly organized into bullet points. It's also capable of highlighting key quotes or specific details that you'd like to remember.

And there's more: ChatGPT can also be your personal note-taker. It can record facts and insights on individual topics, ready for reuse in a wiki or glossary.

3. Create Actionable Takeaways

ChatGPT can also create actionable output. Long brainstorming conversations can be turned into tickets for a task management system (for best results, add your preferred formatting guidelines). If you love learning, ask ChatGPT for a set of flashcards, follow-up questions or writing prompts.

So don't let your transcripts rot! Turn them into long-lived assets for your personal knowledge system.

AI-Driven Self-Education: Learn How to use ChatGPT as the Private Tutor You Always Wished You Had

I've always struggled with traditional learning methods, but ChatGPT is changing everything.

I have an unending passion for learning, so much so that I would exhaust my teachers: I was the kid who constantly asked "why?" Growing up, this innate curiosity never left me; yet I still struggle with a lot of didactic content out there and learn best by doing projects. As I started using ChatGPT more seriously, I realized that it was the perfect tool to turn my learning challenges into child's play.

Let me share three transformative ways ChatGPT can redefine your learning experience.

1. Craft Your Personalized Curriculum

To design a learning plan tailored to your needs, prompt ChatGPT with:

Create a curriculum for learning X, considering my background in Y, with the goal of achieving Z.

You may wish to include a textbook's table of contents or specific topics of interest. After ChatGPT generates an initial syllabus, engage in a dialogue with it to refine your perfect learning strategy.

2. Ask for Analogies and Exercises, Not Just Information

Given that ChatGPT is a language model, it may occasionally produce inaccurate or incomplete data. While this can offer a unique learning experience, you can optimize your interaction with ChatGPT by asking it to:

  • Summarize key points

  • Relate information to other areas you're familiar with

  • Transform texts into flashcards

  • Create exercises with escalating difficulty levels

3. Let ChatGPT Play the Role of a Student

There is no better way to learn than by teaching. Consider this approach (you can make it your own, of course):

ChatGPT, you're now a beginner in X. Ask me questions and I'll do my best to answer them. Use examples that resonate with you. After each response, summarize what you've understood in your own words and ask for clarification if needed.

This method requires you to explain concepts in your own words, which really puts your brain into gear.

Learning things has never been as fun as in the last few months: I often wake up and can't wait to study some complicated topic, because I know it will be so much fun!

No, ChatGPT, Bad! Using Critique to Get the Best out of ChatGPT

ChatGPT can dazzle us with insightful responses but often falls short when generating complex content like articles or code. Even for humans, crafting such content is difficult—that's why we have editors and peer reviews.

A surprising aspect of ChatGPT is its ability to critique itself, a process that—as we'll discover—can significantly enhance its output quality.

Craft the Perfect Critique Prompt

There are many ways to ask ChatGPT to critique a piece of work. Here is one template that I often use when writing:

  • As a X,

  • Criticize and suggest improvements for the following Y,

  • Targeting Z.

Here is a concrete example (this advice taught me a lot about writing!):

As a highly regarded editor of the New York Times blog section,

Criticize and suggest improvements for the following atomic essay (250 words, straight to the point),

Targeting the style of Dickie Cole or David Perell.

These steps are key:

  • your role sets the tone,

  • what you want improved dictates the focus,

  • and the target you aim for guides the language.

Tell ChatGPT to review and fix its own code

Because ChatGPT uses its own output as its input, self-critique taps into data from actual review sessions in its training corpus. These will often contain helpful advice which, once present in the context window, will further improve ChatGPT's output.

Here's a programming prompt I often employ (note how the review gets reviewed too!):

Assume the role of an experienced programmer. Review and refine the previous code, focusing on bugs and clarity. Mention and disregard irrelevant issues.

Asking ChatGPT to critique and correct itself significantly improves its output, making it an indispensable asset in your toolkit.

Custom Instructions: 3 Ways to Make ChatGPT Uniquely Yours

Imagine transforming a simple chatbot into your personalized digital assistant. OpenAI introduced a feature that does just that: with Custom Instructions, you can now prepend custom text to every message in a conversation.

Beyond basic chat

This might at first seem like an innocuous feature: surely you can do the same with copy-paste. However, having that option preset and out of your mind transforms ChatGPT from a powerful yet generic bot into an assistant that feels deeply personal without having to constantly control its behaviour. Instead of sticking to the basic examples often cited by OpenAI, let's delve into the untapped potential of this feature.

Here are three ways to use Custom Instructions that will change how you use the tool forever.

1. Easily control the output format

Switching between verbose and concise outputs can be simplified using a custom instruction like:

If the first word is terse/concise/default/comprehensive/exhaustive, use the matching verbosity. When exhaustive, be comprehensive in depth and breadth.

Simply insert the appropriate keyword to control how chatty the answer should be.

2. Define custom slash commands

While you can't "program" ChatGPT per se, you can tell it to follow commands, like this:

Slash Commands:

  • /questions - Ask pertinent and interesting questions a curious user would ask

  • /summarize - Summarize the conversation so far

  • /critique - Criticize your previous answer before improving your output

3. Customize the output

You can apply our previous insights to create custom instructions. For example, in order to leverage its attention mechanism or optimize context size, we can ask it to:

  • Output your answers in concise bullet point form, prefixing each with a relevant emoji

  • Before answering, concisely list what your expertise in the domain is and how you are approaching the problem. Think step by step.

Make ChatGPT your own

We've only begun to explore the myriad ways ChatGPT can be improved using custom instructions. ChatGPT AutoExpert offers many insights (including instructions tailored for code). Yet, the most effective instructions will be the ones tailored by you, for you.

ChatGPT Deep Dive: Tokens, Context Size and Why They Matter

While hands-on experience is paramount for mastering Large Language Models, a basic understanding of their internal mechanisms can have a big impact.

If you've followed this series, you are already familiar with the importance of autoregressive behavior and the attention mechanism. Today's discussion will focus on tokenization and context size: two additional implementation aspects that will show why shorter prompts are often better prompts.

1. Tokenization

In order for text to be processed by a Large Language Model, it has to be turned into numbers. This process is called tokenization: the text is broken up into smaller units known as tokens. As we will see, the number of tokens in your prompt can have big consequences.

Common words map to a single token, while rarer words are split into several; on average, a token works out to around 3/4 of a word. To get the exact count of tokens in your prompt, you can use OpenAI's tokenizer.
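As a back-of-the-envelope sketch (the four-characters-per-token ratio is a rough rule of thumb for English text, not the real tokenizer), you can estimate how much of your budget a prompt consumes:

```python
# Rough heuristic only: one token is about four characters of English,
# i.e. roughly 3/4 of a word. Not a substitute for the real tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

prompt = "Summarize the following meeting transcript as concise bullet points."
budget_used = estimate_tokens(prompt)
```

For exact counts, paste the text into OpenAI's tokenizer as mentioned above.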

2. Context Window Size

Due to memory constraints, computational costs and the use of "positional embeddings" (necessary to keep word ordering), Large Language Models have a fixed context window size: they can only look at so many words to predict the next one.

Because the model looks at input tokens as well as previously emitted output tokens, both the prompt and its answer have to fit within the context size. ChatGPT 3.5 has a context size of 4k tokens, while ChatGPT 4 has a context size of 8k (with a 32k variant).

Better Conversations with Fewer Tokens

Each message in a conversation adds to the context size. Exceed the limit, and ChatGPT starts forgetting earlier parts. While it can be tempting to add a lot of data to your prompts, conveying the important information concisely is often a better technique. It not only allows for longer conversations, but it tends to improve the quality of the answers as well.

Here are a few techniques to reduce the size of your prompts:

  • Pre-summarize data before adding it to your prompt

  • Request answers in bullet-point form

  • Condense information and often start new conversations, keeping only the key details

Understanding why token size matters is key to having conversations that make full use of the context window of the LLMs.
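The third technique above, starting fresh while keeping only key details, can be sketched as a rolling window over the conversation history. This is a minimal sketch assuming a caller-supplied `count_tokens` function; a real chat client would also preserve the system prompt:

```python
def trim_history(messages, max_tokens, count_tokens):
    """Keep only the most recent messages that fit in the token budget."""
    kept, total = [], 0
    # Walk backwards so the newest messages are kept first.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break  # older messages no longer fit: drop them
        kept.append(msg)
        total += cost
    return list(reversed(kept))


# Toy example: count "tokens" as whitespace-separated words.
history = ["summarize the design doc", "here is a summary", "now refine it"]
print(trim_history(history, max_tokens=6,
                   count_tokens=lambda m: len(m.split())))  # -> ['now refine it']
```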

Putting Humans First: Using AI to Do Great Code Reviews

When using AI, we have a tendency to literally ask it to do our job.

In software development, AI is often seen as a way to automate tasks like code reviews, but its limitations in context and reasoning make it quite ineffective and usually unsuitable for replacing human judgment. This often leads developers to dismiss AI tools entirely.

What we are missing is that code reviews are primarily about communication: overall quality improves most when code reviews are effective at sharing knowledge.

This means that thinking about humans first is often the most effective way of using AI.

Why do we do code reviews?

While the obvious answer would be to find software defects early, code reviews operate in a larger context. They are used to:

  • Mentor junior developers.

  • Onboard new colleagues.

  • Share knowledge.

  • Ensure a consistent coding style.

  • Promote a consistent software architecture.

These goals are more about effective communication than programming per se.

4 ways to use AI in code reviews

To maximize AI's utility in code reviews, focus on:

  • Clarifying Code: Use AI-generated text to explain unclear code segments. After confirming with the code author that the result is accurate, integrate it as comments or use it to clean up the code.

  • Expanding Explanations: Turn succinct code review comments into comprehensive ones by supplementing them with AI-generated documentation or examples.

  • Summarizing Reviews: Let AI write a concise summary of the review. This helps other team members learn from the review without having to dive into the nitty-gritty.

  • Updating Documentation: Use AI to translate the review summary into updated coding rules and documentation.
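The "Summarizing Reviews" step above amounts to assembling a prompt from the review thread. Here is a minimal sketch; the wording and function name are illustrative, and the resulting string would be sent to whatever chat model you use:

```python
def review_summary_prompt(comments):
    """Build a prompt asking a model to summarize code review comments.

    The phrasing is an example only: adapt it to your team's style.
    """
    bullets = "\n".join(f"- {c}" for c in comments)
    return (
        "Summarize the following code review comments so the rest of the "
        "team can learn from them without reading the full thread:\n"
        + bullets
    )


print(review_summary_prompt(["rename id to productId", "add a unit test"]))
```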

None of these use cases have much to do with finding defects; instead, they are all about improving communication. The AI amplifies the reviewer's impact because their work can now be better appreciated by other humans.

This is what leads to better code in the long run!

It's Happening! 3 Ways AI Already is Changing Software Engineering

I still can't believe these three insights come straight from my own experience. The following three ways AI is changing software engineering are already part of my daily routine.

1. Documentation is Becoming Dramatically Better

Historically, documentation has been a developer's Achilles' heel.

Yet, with tools like ChatGPT, drafting quality documentation has never been easier. Begin with a basic outline, throw in some code snippets or terminal logs, and watch as it transforms into a polished document. Likewise, engineering communication such as RFCs can be enhanced, reviewed, and refined by a Large Language Model (LLM), resulting in clear and compelling content.

In an age where stellar documentation can be crafted in minutes with AI assistance, settling for mediocrity is no longer an option.

2. Tools, tests, instrumentation are practically free

Great software isn't just about innovative algorithms; it's also about the essentials: logging, testing, instrumentation, and configuration.

These seemingly mundane tasks, often side-lined in sprint planning, are resource-intensive for developers. However, LLMs can generate this boilerplate with unmatched speed and precision, turning it from an onerous undertaking (and thus easily dismissed in a crunch) into something almost free.

Just as with documentation, there is no excuse for software without robust test coverage and useful tools.

3. Being a software engineer isn't as taxing anymore

The intricate nature of programming, especially when dealing with monotonous yet crucial tasks, can be draining. A single typo can trigger hours or even days of debugging.

But with LLMs, I've transitioned from grueling coding to a "conversational cognitive space." It's akin to brainstorming at the watercooler, but code still materializes. Where I would have spent 4h of tedious coding and maybe 1h of real thinking before, I can now often spend 1h coding and 4h of "chatting" (really, thinking).

This might be the single most important effect AI has had on me: it has liberated me from the tediousness and fatigue of being a systems programmer and turned me into a systems thinker.

We are only seeing the beginning of how AI is changing software engineering; this is not science fiction, this is happening right now!

Avoiding ChatGPT Prompt Overload: The Compress/Expand Framework

Navigating the myriad of ChatGPT prompts can be daunting. But with the right framework, you can tackle each prompt confidently, ensuring top-notch results.

Here is a tried-and-tested approach I use in most of my prompts: the Compress/Expand framework.

Compress: Focus and Simplify

The aim of compression is to refine and streamline inputs. This means transforming a chunk of information into a more digestible form. For example:

  • Derive key headings from a verbose paragraph.

  • Convert data into a structured JSON format.

  • Condense a document into a list of essential facts.

  • Trim down a lengthy paragraph for brevity.

  • Formulate questions based on an article's content.

ChatGPT uses its training data and its reasoning capabilities to surface the important points in the input and present them in clear and concise form. This is great for multiple reasons:

  • Brevity enhances clarity.

  • Shorter outputs make the most of ChatGPT's text input limits.

  • Structured data can be manipulated by other software.

Expand: Transform and Add Detail

Expansion is a way to turn compressed inputs into something richer, more nuanced or in a different format. ChatGPT uses its training data and (more importantly) the additional information you provide to:

  • Transform bullet points into comprehensive slides.

  • Morph structured data into operational programs.

  • Craft a full essay from a question paired with factual points.

The beauty of working with compressed inputs? You can layer on additional information without muddying the waters—the clarity of the compressed part keeping ChatGPT on point.

Instead of searching for the next best prompt, compress your inputs, expand them out into a richer form, and have ChatGPT generate great content every step of the way.
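The two halves of the framework can be sketched as a pair of prompt templates. The templates and function names here are hypothetical examples; the strings they produce would be sent to ChatGPT, with the compressed output of the first step fed into the second:

```python
# Compress: turn a chunk of information into a digestible, structured form.
COMPRESS_PROMPT = (
    "Condense the following document into a JSON list of its essential facts:\n"
    "{document}"
)

# Expand: turn the compressed facts into something richer.
EXPAND_PROMPT = (
    "Using only these facts, write a {form} that covers each one:\n"
    "{facts}"
)


def compress(document: str) -> str:
    return COMPRESS_PROMPT.format(document=document)


def expand(facts: str, form: str = "full essay") -> str:
    return EXPAND_PROMPT.format(facts=facts, form=form)


print(compress("Our blue lipstick launches next month..."))
print(expand('["cosmos-inspired shade", "iridescent finish"]', form="product page"))
```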

Nobody Likes Boring AI Content: How to Write Unique Marketing Copy with ChatGPT

It's easy to generate bland drivel with generative AI tools. No more!

This short essay will show you how to consistently generate rich and unique output from large language models. The technique applies to many domains, but none shows off its effectiveness better than writing marketing copy.

The Baseline: Mediocre and Uninspired Copy

Let's pretend that we are selling a new blue lipstick. Asking ChatGPT to "write marketing copy for a blue lipstick" comes up with:

  • A statement, not just a shade.

  • Vibrant and long-lasting.

  • Inspired by deep sea magic.

This is not going to make much of an impact.

The Secret: Ask ChatGPT to List Interesting Facts First

We saw yesterday that due to the transformer architecture, ChatGPT looks back at its own output when generating new text, something called "autoregression". This simply means that we need ChatGPT to generate text that will help it generate better copy!

What is easier? Come up with good slogans out of nowhere, or come up with good slogans after getting a list of striking details?

We should first ask ChatGPT to list vivid facts about our blue lipstick:

  • Celestial Midnight: Deep cosmos-inspired shade.

  • Iridescent Finish: Shimmers like twinkling stars.

  • Oceanic Undertones: Depths of the deep sea.

Generating interesting copy is now a breeze! Here are a few examples.

  • Midnight Meets Ocean: Depth redefined.

  • Nebula Nuance: Stellar shade, sea-depth subtlety.

  • Ocean Echo, Star's Shadow: Where deep meets dazzle.

The Takeaway: Prompt for Intermediate Output

We will encounter many variations of this technique over the next few weeks, but its structure is always the same:

  • Ask for intermediate information before the actual answer

  • Make sure ChatGPT generates high quality intermediate information

  • Enjoy the final result!
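The three steps above boil down to chaining two model calls, where the first call's output becomes part of the second prompt. In this sketch, `ask` is a placeholder for whatever chat-completion call you use; the two-step structure is the point, not any specific API:

```python
def generate_copy(product: str, ask) -> str:
    """Two-step prompting: ask for intermediate facts, then the final copy."""
    # Step 1: intermediate information the model can attend to later.
    facts = ask(f"List five vivid, striking facts about {product}.")
    # Step 2: the actual answer, grounded in the intermediate output.
    return ask(
        f"Using these facts about {product}, write three short slogans:\n{facts}"
    )
```

Swapping in a real client is a one-line change: pass a function that sends the prompt to the model and returns its reply.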

Under the Hood: How to Use ChatGPT's Attention Mechanism for Better Prompts

Under the Hood: the Transformer

At the core of ChatGPT is a machine learning model called the "Transformer."

Since their invention in 2017, transformers have revolutionized text generation and understanding; they are the driving force behind the explosion of AI-based technology we are seeing today.

Transformers use a so-called "attention mechanism" that looks back at important elements in previous text, just like a human refers back to previous words and sentences to understand the context.

They are also "auto-regressive": to generate coherent text, they refer back to both your input and their own output.

You need ChatGPT to generate output that will help it generate better output down the line: it's basically thinking 2 steps ahead!

Here is one insight into how ChatGPT's inner architecture helps write better prompts.

Don't Skip Explanations!

ChatGPT is very verbose and dogmatic, almost sounding like a kindergarten teacher, announcing every step it will take before actually taking it: "First I will A, then B, finally C."

You might be tempted to ask it to skip those parts and get straight to the point, but be careful when doing so! Skip too many explanations and ChatGPT will start producing inaccurate or nonsensical information.

Remember how the attention mechanism examines previous output: the more prescriptive the previous text, the bigger the chance further output will be correct.

Explanations play two roles here:

  • they allow you to verify ChatGPT correctly "understood" the prompt,

  • in turn, they help ChatGPT generate better text

Coming Up: Creating Amazing Outputs

Stay tuned as we learn more about leveraging the attention mechanism to create incredible prompts.

Next, we'll explore how to:

  • craft detailed and colorful marketing copy,

  • write complex programs that work right off the bat

  • create concise yet accurate summaries of long documents

Until then, happy prompting!

1000 hours of ChatGPT: here are the best 3 techniques to become a better prompt engineer!

When I first used ChatGPT 10 months ago, I knew I had discovered a fantastic new tool.

At the time, prompt engineering was very abstract and mostly discussed in scientific papers, yet I wanted to use ChatGPT for concrete everyday tasks! I sat down and got to work: 1000 hours later, I have identified 3 techniques that will make you better at ChatGPT.

1. Use the regenerate button

To best understand how ChatGPT works, hit "Regenerate" frequently.

Notice how it's sometimes spot-on and sometimes completely wrong. By seeing many responses, you will start discerning useful results from mere hallucinations.

If pressing regenerate multiple times always outputs something useful, you know you have a winner!

2. Don't ask ChatGPT to solve a problem: ask it how to solve a problem

ChatGPT is not very good at solving problems: it often gives incorrect answers.

That's because the solution to your exact problem won't be present in the material it has been trained on. However, it probably knows how to solve problems of the same type; asking it to find a step-by-step solution, or to write a program that does, usually works!

You'll then have correct instructions (or a program!) to solve many similar problems.

3. Use whimsical examples

Want to see if ChatGPT truly understands your prompt? Test it in a bizarre domain! Ask for Excel formulas to calculate the 'circumference of giraffe necks' rather than standard business scenarios.

Such whimsical settings help you gauge the prompt's effectiveness without your brain's tendency to find meaning where none exists.
