'Prompt' has become an everyday word on account of the popularity of ChatGPT. However, the idea of prompting a language model (particularly decoder or encoder-decoder models) to get a response back in natural language has been around for a few years. Here are a few ideas from research that can help you perform few-shot learning.
Prompting a model directly
We can solve many classification or generative tasks by casting the task into a natural language format similar to the one the model was originally trained on. I would insist that every NLP practitioner read this wonderful article if they have not already: Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing.
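For instance, sentiment classification can be cast as next-token prediction: feed the model a template that ends just before the label, and compare the scores it assigns to each candidate label word. A minimal sketch with Hugging Face transformers follows; the model choice, template, and label words are illustrative assumptions, not prescriptions from the survey:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")

review = "The movie was a complete waste of time."
prompt = f"Review: {review}\nSentiment (positive or negative):"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

# Compare the score of each label word's first sub-token.
scores = {label: logits[tokenizer.encode(" " + label)[0]].item()
          for label in ("positive", "negative")}
print(max(scores, key=scores.get))  # -> predicted label
```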
Supervised Fine-Tuning (SFT) [in this case, fine-tuning using prompts]
Next time, don't reach for a classification head for most of your tasks. Instead, cast the available labeled data into a format that mimics how the model was originally trained, and fine-tune the model on that. Insightful read: How Many Data Points is a Prompt Worth?
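Concretely, this just means writing each labeled example into the same template you would use at prediction time and training with the ordinary language modeling loss. A hedged sketch, where the template, toy data, and hyperparameters are my own illustrations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative backbone
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Toy labeled data, cast into the same template used at prediction time.
labeled = [("A delightful, heartfelt film.", "positive"),
           ("Dull plot and wooden acting.", "negative")]

model.train()
for text, label in labeled:
    example = f"Review: {text}\nSentiment: {label}"
    batch = tokenizer(example, return_tensors="pt")
    # Standard language-modeling loss over the whole prompt + label.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```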
Prompts to instruct LLMs on certain tasks, and to distill their behavior into smaller models (InstructGPT, Alpaca)
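For reference, the Alpaca release represents each distilled example as an instruction/input/output record; the field names below follow that format, while the records themselves are invented for illustration:

```python
# Field names follow the released Alpaca data; the contents are made up.
instruction_data = [
    {"instruction": "Classify the sentiment of this review.",
     "input": "The battery dies within an hour.",
     "output": "negative"},
    {"instruction": "Give one word that means 'very happy'.",
     "input": "",  # many Alpaca records have no separate input
     "output": "elated"},
]
```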
Prompt tuning: At prediction time, the task of engineering the perfect prompt is challenging, especially given the non-deterministic nature of the output. Instead of hard-coded prompt templates, one can use a 'soft prompt' to perform prediction tasks. A soft prompt is a small set of trainable embedding vectors prepended to the input; the pretrained model's weights stay frozen, and only the prompt parameters are tuned on labeled data. (The Power of Scale for Parameter-Efficient Prompt Tuning, P-Tuning v2)
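Here is a minimal soft-prompt sketch in PyTorch, in the spirit of The Power of Scale paper: the backbone is frozen and only a handful of prepended embedding vectors receive gradients. The model choice, prompt length, and learning rate are illustrative assumptions:

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any causal LM works similarly
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Freeze every pretrained weight; only the soft prompt will be trained.
for p in model.parameters():
    p.requires_grad = False

num_virtual_tokens = 20  # illustrative prompt length
hidden = model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden) * 0.02)

def forward_with_prompt(input_ids, labels):
    # Look up the real token embeddings, then prepend the soft prompt.
    tok_emb = model.get_input_embeddings()(input_ids)              # (B, T, H)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt, tok_emb], dim=1)            # (B, P+T, H)
    # Mask the loss on the virtual-token positions with the ignore index -100.
    pad = torch.full((input_ids.size(0), num_virtual_tokens), -100)
    return model(inputs_embeds=inputs_embeds,
                 labels=torch.cat([pad, labels], dim=1))

# Only the soft prompt's parameters go to the optimizer.
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)
batch = tokenizer("Review: great film!\nSentiment: positive",
                  return_tensors="pt")
loss = forward_with_prompt(batch["input_ids"], batch["input_ids"]).loss
loss.backward()
optimizer.step()
```

Note that the optimizer only ever sees the soft prompt, which is why this is parameter-efficient: a few thousand trainable values instead of the full model.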
This list is obviously not exhaustive. It is just indicative of some of the strategies you can try if you find yourself with limited labeled data and limited compute.