
Artificial Intelligence (AI) Library Guide

What is Prompt Engineering

“Prompt engineering, at its core, involves the strategic crafting of inputs to elicit desired responses or behaviors from AI systems.” (Walter, 2024)

Prompt engineering is a growing discipline focused on crafting and optimizing prompts to effectively utilize large language models (LLMs) in diverse applications and research areas. It helps researchers and developers enhance model performance on tasks such as question answering and reasoning, while also enabling robust integration with tools and external systems.

Much like formulating effective research questions, prompt engineering is a gradual, step-by-step process. It demands critical evaluation, flexibility, and ongoing interaction with the AI’s outputs to continually refine and improve the quality of prompts.


Bibliography

Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1). https://doi.org/10.1186/s41239-024-00448-3

Prompt Engineering Techniques

Prompt engineering techniques enable the effective design and refinement of prompts to enhance performance across a variety of tasks using large language models (LLMs).

While basic example prompts demonstrate foundational concepts, this section explores more advanced prompt engineering techniques aimed at addressing complex tasks and improving the reliability and effectiveness of LLM outputs.

Each technique is listed below with a short description and an example prompt.

Zero-shot Prompting (ZSP): Solves tasks without providing any examples, relying purely on the model's prior knowledge. Example: "Critically evaluate the implications of quantum entanglement on information theory."

Few-shot Prompting (FSP): Uses examples to help the AI model understand the desired output or response. Example: "Here are 2 peer-reviewed abstracts; write a third one summarizing this study on neural correlates of empathy."

Chain-of-Thought (CoT): Breaks tasks into logical, step-by-step reasoning for better clarity. Example: "Describe, step by step, how you would model population dynamics using a system of differential equations."

Self-Consistency (SC): Generates multiple answers and selects the most consistent or accurate one. Example: "Provide five different explanations for a rising inflation rate and argue which aligns best with Keynesian theory."

Expert/Role-based Prompting (EP): Assigns the model a role (e.g., teacher, doctor) to provide contextual tone and style. Example: "As an evolutionary biologist, explain convergent evolution using examples from marine life."

Automatic Prompt Engineer (APE): The AI generates and optimizes prompts based on user intent without manual input. Example: the AI asks, "Are you focusing on political ideology, voter behavior, or media bias in your analysis of electoral data?"

Generated Knowledge (GKn): The AI model generates contextual knowledge before answering to improve relevance and completeness. Example: before answering, "Generate a summary of major AI alignment theories," then, "Compare them in terms of scalability and ethical constraints."

Tree-of-Thought (ToT): Explores multiple reasoning paths or perspectives before producing a comprehensive answer. Example: "Before concluding, explore three possible explanations for the reproducibility crisis in psychology: statistical flaws, publication bias, and methodological inconsistency."

Rereading Prompting (Re2): Instructs the model to re-read the prompt to enhance comprehension and generate more thoughtful responses. Example: "Re-read the following research question on gene editing ethics twice before outlining the main ethical concerns and proposing a framework for evaluation."

Chain-of-Verification (CoVe): Verifies answers by generating and responding to follow-up questions before refining the final output. Example: "Analyze this economic model, then ask 3 verification questions about its assumptions or data sources, and update your critique accordingly."
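As a concrete illustration, the self-consistency technique above can be sketched in a few lines of Python. This is a minimal sketch, not a definitive implementation: the `ask_model` parameter stands in for any real LLM call, and `stub_model` below is a purely hypothetical stand-in that returns canned answers so the sketch runs on its own.

```python
from collections import Counter
import itertools

def self_consistency(prompt, ask_model, n_samples=5):
    """Self-consistency: sample several answers to the same prompt
    and keep the answer that occurs most often (majority vote)."""
    answers = [ask_model(prompt) for _ in range(n_samples)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer

# Hypothetical stand-in for a real LLM API call (an assumption for this sketch):
# it cycles through a fixed sequence of slightly noisy answers.
_canned = itertools.cycle(["42", "42", "41", "42", "42"])
def stub_model(prompt):
    return next(_canned)

print(self_consistency("What is 6 * 7? Answer with a number only.", stub_model))
# prints "42": four of the five sampled answers agree
```

In practice, `ask_model` would call an LLM with a non-zero sampling temperature so that repeated calls can disagree; majority voting then filters out occasional reasoning slips.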

Bibliography

Dhuliawala, S., Komeili, M., Xu, J., Raileanu, R., Li, X., Celikyilmaz, A., & Weston, J. (2023). Chain-of-Verification Reduces Hallucination in Large Language Models. http://arxiv.org/abs/2309.11495

Eliot, L. (2024, July 6). Using The Re-Read Prompting Technique Is Doubly Rewarding For Prompt Engineering. Forbes. https://www.forbes.com/sites/lanceeliot/2024/07/06/using-the-re-read-prompting-technique-is-doubly-rewarding-for-prompt-engineering/

Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1). https://doi.org/10.1186/s41239-024-00448-3

© Copyright Library and Information Services, Cyprus University of Technology