ChatGPT Prompts for Developers: Code Review, Debugging, and More
The best ChatGPT prompts for developers — code review, bug fixing, writing tests, generating documentation, and explaining complex code.
Try it yourself
Use our free AI Prompt Debugger — no sign-up, runs in your browser.
AI is genuinely useful for developers — but only when you prompt it well. Asking “why is my code broken?” with no context gets you a generic answer. Giving the model your code, the error message, and what you’ve already tried gets you something you can actually use.
These prompts are organized by task. They’re specific enough to work, and all you need to do is fill in your actual code and context. If a prompt isn’t producing what you need, the AI Prompt Debugger can help you work out why.
Code Review Prompts
Thorough code review:
Review this [language] code for a [brief description of what it does]. Focus on: correctness, edge cases, performance issues, and anything that would get flagged in a real PR review. Be specific — point to line numbers or patterns, not just general advice.
[paste code]
Security-focused review:
You are a security engineer doing a review of this [language] code. Identify any security vulnerabilities — injection risks, improper input validation, insecure defaults, exposed secrets, or anything else that would be a concern in a production environment. Explain each issue and how to fix it.
[paste code]
Review against a specific standard:
Review this code against these team conventions:
- [convention 1]
- [convention 2]
- [convention 3]
Flag any violations and suggest corrections. Don't rewrite the whole file — just call out the specific issues.
[paste code]
PR review simulation:
Pretend you're a senior engineer reviewing this pull request. The PR description is: "[paste PR description]". Write inline review comments the way you'd leave them in GitHub — specific, actionable, and not nitpicky unless it actually matters.
[paste diff or code]
Debugging Prompts
Diagnose a bug with an error:
I'm getting this error in [language/framework]:
[paste error message and stack trace]
Here's the relevant code:
[paste code]
What's causing it? Walk me through your reasoning before giving a fix.
Debug unexpected behavior (no error):
This function is supposed to [describe expected behavior] but instead it [describe actual behavior]. There's no error — the output is just wrong.
[paste function]
What are the most likely causes? Check for off-by-one errors, incorrect assumptions about [data type/state/async behavior], or anything else that would produce this output silently.
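For illustration, here's a minimal sketch (hypothetical function and values) of the kind of silently wrong code this prompt targets — no exception, just the wrong answer:

```python
# Hypothetical example: the function is meant to sum the first n items,
# but the loop bound is off by one and it skips the last item.
def sum_first_n(items, n):
    total = 0
    for i in range(n - 1):   # bug: should be range(n)
        total += items[i]
    return total

result = sum_first_n([1, 2, 3], 3)
print(result)  # prints 3 instead of 6 -- no error, just wrong output
```

Pasting something like this with the expected-vs-actual description above gives the model enough to spot the loop bound immediately.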
Rubber duck a complex bug:
I've been stuck on a bug for [time]. Here's what I know:
- Expected behavior: [describe]
- Actual behavior: [describe]
- What I've tried: [list]
- Code involved: [paste]
Ask me clarifying questions to help narrow it down. Don't give me an answer yet — just help me think through it.
Check an async/race condition:
I think I have a race condition or async ordering issue in this code. Walk through the possible execution orders and tell me where the timing could go wrong:
[paste code]
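If it helps to see what such an issue looks like, here's a minimal hypothetical read-modify-write race in Python's asyncio — the `await` between the read and the write lets another coroutine interleave, so one update is lost:

```python
import asyncio

# Hypothetical shared state: two coroutines both read the old balance
# before either one writes, so one deposit is silently dropped.
balance = 0

async def deposit(amount):
    global balance
    current = balance           # read
    await asyncio.sleep(0)      # yields control -- another coroutine runs here
    balance = current + amount  # write based on a possibly stale read

async def main():
    await asyncio.gather(deposit(10), deposit(10))
    return balance

final = asyncio.run(main())
print(final)  # 10, not 20 -- the second write clobbers the first
```

Asking the model to enumerate the possible execution orders, as the prompt does, is exactly how this kind of interleaving gets found.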
Writing Tests Prompts
Generate unit tests:
Write unit tests for this [language] function using [Jest / pytest / RSpec / etc.]. Cover: the happy path, edge cases, invalid inputs, and any boundary conditions. Use descriptive test names that explain what's being tested, not just what function is called.
[paste function]
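To make the shape concrete, here's a hypothetical function and the style of tests this prompt tends to produce in pytest — descriptive names, with boundaries covered explicitly:

```python
# Hypothetical function under test
def clamp(value, low, high):
    """Clamp value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# Descriptive test names that say what's being tested,
# not just which function is called.
def test_clamp_returns_value_inside_range():
    assert clamp(5, 0, 10) == 5

def test_clamp_pins_value_below_range_to_low():
    assert clamp(-3, 0, 10) == 0

def test_clamp_pins_value_above_range_to_high():
    assert clamp(99, 0, 10) == 10

def test_clamp_treats_boundaries_as_inclusive():
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10
```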
Generate tests for a specific scenario:
This function [describe what it does]. I'm not worried about the happy path — I want tests specifically for error cases and edge cases: empty inputs, null values, boundary values, and unexpected types. Write them in [testing framework].
[paste function]
Test a function that has side effects:
This function makes database calls and sends an HTTP request. Write tests for it in [language/framework] using mocks. Show me how to mock [the database call / the HTTP request / both] properly so the tests are isolated.
[paste function]
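One pattern worth knowing before you ask: if the function takes its collaborators as arguments, mocking is straightforward. A hypothetical sketch using Python's `unittest.mock`:

```python
from unittest.mock import Mock

# Hypothetical function with side effects: it receives the database and
# HTTP client as arguments, which makes them easy to replace in tests.
def sync_user(user_id, db, http):
    record = db.fetch_user(user_id)
    http.post("/sync", json=record)
    return record["name"]

def test_sync_user_posts_fetched_record():
    db = Mock()
    db.fetch_user.return_value = {"id": 1, "name": "Ada"}
    http = Mock()

    assert sync_user(1, db, http) == "Ada"
    db.fetch_user.assert_called_once_with(1)
    http.post.assert_called_once_with("/sync", json={"id": 1, "name": "Ada"})
```

If your real function constructs its dependencies internally, the model's answer will likely involve `unittest.mock.patch` instead — worth mentioning in the prompt.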
Improve existing tests:
These tests pass but they're not testing much. What are they missing? Suggest additional test cases and show me how to write them:
[paste existing tests]
Documentation Prompts
Write a docstring:
Write a [JSDoc / Python docstring / Go comment] for this function. Include: a one-line summary, parameter descriptions with types, return value, and a note about any important edge cases or exceptions:
[paste function]
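For reference, a hypothetical function with the kind of Python docstring this prompt asks for — one-line summary, parameters with types, return value, and the edge cases:

```python
def parse_port(value):
    """Parse a TCP port number from a string.

    Args:
        value (str): The text to parse, e.g. "8080".

    Returns:
        int: The parsed port number.

    Raises:
        ValueError: If the text is not an integer, or the value
            falls outside the valid range 1-65535.
    """
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```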
Write a README section:
Write a "Getting Started" section for a README for this project. The audience is a developer who's just cloned the repo and wants to get it running locally. Cover: prerequisites, installation, environment setup, and running the app. Be concise — no marketing language.
Project context: [brief description of what the project does]
Document an API endpoint:
Write documentation for this API endpoint in the style of a public API reference (like Stripe or GitHub's docs). Include: description, HTTP method and path, request parameters (with types and whether they're required), response format, and an example request/response.
[paste endpoint code or description]
Explaining Code Prompts
Explain unfamiliar code:
Explain what this code does, step by step. I'm familiar with [language] basics but I haven't seen this pattern before. Focus on the why, not just the what — explain any non-obvious design decisions:
[paste code]
Explain a complex algorithm:
Explain this algorithm in plain English first, then walk through it with a concrete example input: [example input]. Finally, explain the time and space complexity:
[paste algorithm]
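As a concrete example of the output you're asking for: pasting a standard binary search should get you "halve the sorted range each step until the target is found or the range is empty," a walk-through on your example input, and the complexity analysis in the comments below:

```python
# Binary search over a sorted list: repeatedly halve the search range.
# Time complexity: O(log n) -- the range shrinks by half each iteration.
# Space complexity: O(1) -- only two index variables are kept.
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present
```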
Explain a library or pattern you’re unfamiliar with:
This code uses [library/pattern/framework feature]. I've never used it before. Explain what it's doing here, why you'd use it instead of a simpler approach, and what the tradeoffs are:
[paste code]
Refactoring Prompts
Refactor for readability:
Refactor this code to be more readable. Don't change the behavior. Focus on: clearer variable names, breaking up long functions, and removing anything that makes it hard to follow. Explain the key changes you made and why:
[paste code]
Convert to a different pattern:
Rewrite this [callback-based / class-based / imperative] code to use [promises/async-await / functional style / hooks / etc.]. Keep the same behavior. Point out any places where the conversion changes how errors are handled:
[paste code]
Reduce duplication:
There's a lot of repetition in this code. Identify the duplicated patterns and suggest how to refactor them. Show me the refactored version with the duplication removed:
[paste code]
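A minimal before-and-after sketch (hypothetical field-validation code) of what a good answer to this prompt looks like:

```python
# Before: the same fetch-strip-validate pattern repeated per field.
def clean_signup(form):
    name = form.get("name", "").strip()
    if not name:
        raise ValueError("name is required")
    email = form.get("email", "").strip()
    if not email:
        raise ValueError("email is required")
    return {"name": name, "email": email}

# After: the duplicated pattern extracted into one helper.
def require_field(form, key):
    value = form.get(key, "").strip()
    if not value:
        raise ValueError(f"{key} is required")
    return value

def clean_signup_refactored(form):
    return {key: require_field(form, key) for key in ("name", "email")}
```

Note the prompt asks for both the diagnosis (which pattern is duplicated) and the rewritten version — getting only one or the other is a sign to iterate.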
Getting the Most Out of These Prompts
Always paste the actual code. Describing code in words loses important detail. Paste it directly.
Include the error message. If there’s a stack trace, include the full thing — the relevant line is often not the last one.
Tell it what you’ve already tried. This stops the AI from suggesting things you’ve already ruled out and often surfaces the actual issue faster.
Ask for reasoning, not just answers. Adding “explain your reasoning” or “walk me through this” catches cases where the AI is confidently wrong, and helps you learn from correct explanations.
Iterate. If the first response isn’t right, say specifically what’s wrong with it: “That solution won’t work because [reason]. Try a different approach that doesn’t involve [X].”
When a prompt keeps missing the mark, the AI Prompt Debugger can help you figure out what’s going wrong and suggest improvements.