Friday, February 21, 2025

AI LLMs: prompt engineering

Prompt Engineering | Coursera

Prompt Engineering Guide | Prompt Engineering Guide .ai

 Prompt Hacking: Understanding Types and Defenses for LLM Security

Prompt hacking is a term used to describe attacks that exploit vulnerabilities of LLMs, by manipulating their inputs or prompts. Unlike traditional hacking, which typically exploits software vulnerabilities, prompt hacking relies on carefully crafting prompts to deceive the LLM into performing unintended actions.

Prompt Engineering for Generative AI  |  Machine Learning  |  Google for Developers

ChatGPT Prompt Engineering for Developers - DeepLearning.AI

AI Prompt Engineering Is Dead - IEEE Spectrum 
"Long live AI prompt engineering"

Build safe and responsible generative AI applications with guardrails | AWS Machine Learning Blog