In today’s fast-paced development environment, leveraging AI tools for code reviews can significantly enhance productivity and code quality. As developers, we often work in isolation or wait hours (sometimes days) for our colleagues to review our pull requests. Large Language Models (LLMs) like GPT-4, Claude, and others can provide immediate feedback, spot potential issues, and suggest improvements within your favorite IDE.
This blog post explores how to craft effective prompts for LLMs when reviewing your code in VS Code, with specific examples for backend Node.js/Express developers and React frontend developers.
The Art of Effective LLM Prompting for Code Reviews
When asking an LLM to review your code, the quality of the prompt directly impacts the quality of the feedback you’ll receive. Here are some key principles:
- Provide context: Tell the LLM what the code is supposed to do
- Be specific: Ask for particular types of feedback
- Include relevant environment details: Mention your tech stack, versions, etc.
- Set expectations: Clarify what kind of suggestions you’re looking for
Backend Node.js/Express Developer Prompts
1. Comprehensive Express API Endpoint Review
A prompt to review ExpressJS code
Review this Express API endpoint implementation. This endpoint [handles user registration/processes payments/etc.].
Focus on:
1. Security vulnerabilities (particularly input validation and authentication)
2. Error handling and appropriate status codes
3. Database query optimization
4. Middleware usage
5. API design best practices
[paste your code here]
Tech stack: Node.js v18.x, Express 4.x, MongoDB with Mongoose
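To make the prompt concrete, here is a sketch of the kind of endpoint code you might paste in, with the concerns the prompt asks about (input validation, status codes, error delegation) called out in comments. The `req.app.locals.users` store is a hypothetical stand-in for your Mongoose model, not anything from a real codebase:

```javascript
// Sketch of an Express-style registration handler. In a real app you would
// wire it up with app.post('/register', registerUser); the user store here
// is a hypothetical stand-in for a Mongoose model.
async function registerUser(req, res) {
  const { email, password } = req.body ?? {};

  // 1. Validate input before touching the database.
  if (typeof email !== 'string' || !email.includes('@')) {
    return res.status(400).json({ error: 'A valid email is required.' });
  }
  if (typeof password !== 'string' || password.length < 8) {
    return res.status(400).json({ error: 'Password must be at least 8 characters.' });
  }

  try {
    // 2. Reject duplicates with 409, not a generic 500.
    if (await req.app.locals.users.exists(email)) {
      return res.status(409).json({ error: 'Email already registered.' });
    }
    const user = await req.app.locals.users.create({ email, password });
    // 3. 201 for resource creation; never echo the password back.
    return res.status(201).json({ id: user.id, email: user.email });
  } catch (err) {
    // 4. Unexpected errors become a 500 without leaking internals.
    return res.status(500).json({ error: 'Internal server error.' });
  }
}
```

Pasting something this focused (one handler, not the whole router) gives the LLM enough context to comment on each of the five review points.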
2. Performance Optimization for Database Operations
A prompt to review DB management
Analyze this Node.js database interaction code for performance issues.
The function performs [describe operation] on a [MongoDB/PostgreSQL/etc.] database.
It’s currently taking too long to execute with large datasets.
Look for:
1. N+1 query problems
2. Missing indexes or inefficient queries
3. Connection pooling issues
4. Async/await implementation problems
5. Memory usage concerns
[paste your code here — or just add the file to the LLM context if you are in Cursor or VS Code with Copilot]
Database: [MongoDB/PostgreSQL/etc.] version [x.x]
ORM/Driver: [Mongoose/Sequelize/pg/etc.] version [x.x]
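The most common finding from this prompt is the N+1 query problem, so it helps to recognize the before/after shape. This is an illustrative pair with a hypothetical `db` object standing in for your Mongoose/Sequelize calls:

```javascript
// N+1: one query for the posts, then one extra query per post for its author.
async function getPostsNPlusOne(db) {
  const posts = await db.findPosts();
  for (const post of posts) {
    post.author = await db.findUserById(post.authorId); // runs once per post
  }
  return posts;
}

// Batched: collect the author ids and fetch them all in a single query.
async function getPostsBatched(db) {
  const posts = await db.findPosts();
  const ids = [...new Set(posts.map((p) => p.authorId))];
  const authors = await db.findUsersByIds(ids); // one query total
  const byId = new Map(authors.map((u) => [u.id, u]));
  for (const post of posts) post.author = byId.get(post.authorId);
  return posts;
}
```

With N posts, the first version issues 1 + N queries while the second issues exactly 2, which is usually the difference the LLM will point out on large datasets.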
3. API Authentication & Authorization Review
A prompt to review your auth code
Review this authentication and authorization implementation for an Express API.
This code handles [JWT verification/OAuth integration/role-based access control].
Check for:
1. Security vulnerabilities and exploits
2. Token management best practices
3. Password handling (if applicable)
4. Middleware structure and execution flow
5. Edge cases in the authorization logic
[paste your code here — or just add the file to the LLM context if you are in Cursor or VS Code with Copilot]
Auth strategy: [JWT/OAuth/Session-based]
Additional context: [e.g., “We use this for a B2B SaaS application with role hierarchies”]
4. Error Handling Strategy Assessment
A prompt to review your errors/logs
Evaluate this error-handling approach in my Node.js/Express application.
This is our central error-handling middleware and error management strategy.
Please review for:
1. Comprehensiveness of error types covered
2. Appropriate status codes and response formats
3. Security (avoiding leaking sensitive information)
4. Logging implementation
5. Client-friendly error messages
[paste your code here — or just add the file to the LLM context if you are in Cursor or VS Code with Copilot]
Additional context: This is for a [production/internal/public-facing] API
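As a baseline for comparison, here is a minimal central error-handling middleware of the kind this prompt reviews. The 4-argument signature is how Express recognizes an error handler; the `HttpError` class is an illustrative convention, not a built-in:

```javascript
// Illustrative error type for expected, client-facing failures.
class HttpError extends Error {
  constructor(status, message) {
    super(message);
    this.status = status;
  }
}

// Central Express error handler: Express identifies it by its 4-arg signature.
function errorHandler(err, req, res, next) {
  const status = err instanceof HttpError ? err.status : 500;
  // Log the full error server-side...
  console.error(`[${req.method} ${req.url}]`, err.message);
  // ...but never leak stack traces or internals to the client.
  const message = status < 500 ? err.message : 'Internal server error';
  res.status(status).json({ error: message });
}
```

The key property the LLM should verify is the split in the last two lines: expected errors surface their message, unexpected ones get a generic 500 so nothing sensitive leaks.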
5. Microservice Communication Pattern Review
A prompt to review communication between your services
Review this code handling communication between microservices in our Node.js architecture.
It [makes HTTP requests/consumes message queue/implements gRPC calls].
Evaluate:
1. Resilience patterns (retries, circuit breakers, timeouts)
2. Error propagation
3. Efficiency of the implementation
4. Observability (logging, metrics)
5. Edge cases handling
[paste your code here — or just add the file to the LLM context if you are in Cursor or VS Code with Copilot]
Communication method: [REST/gRPC/Message Queue/etc.]
Related services: [brief description of services this communicates with]
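Two of the resilience patterns the prompt asks about, timeouts and retry with exponential backoff, are small enough to sketch. These helper names and defaults are illustrative, not from any particular library:

```javascript
// Reject a promise that takes longer than `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Retry an async function with exponential backoff between attempts.
async function retry(fn, { attempts = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Backoff doubles each time: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

When reviewing inter-service calls, ask the LLM specifically whether retries are safe (i.e., whether the downstream operation is idempotent); retrying a non-idempotent payment call is a classic bug this prompt can surface.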
React.js Developer Prompts
1. Component Architecture and Structure Review
A prompt to review your React App structure and design
Review this React component, which [describe what it does – e.g., “displays a data table with filtering, sorting, and pagination”].
Analyze:
1. Component structure and separation of concerns
2. Prop management and component API design
3. State management approach
4. Performance considerations (unnecessary re-renders)
5. Accessibility compliance
[paste your component code here — or just add the file to the LLM context if you are in Cursor or VS Code with Copilot]
React version: [17/18]
State management: [React Context/Redux/Zustand/etc.]
UI library: [Material-UI/Chakra/Tailwind/etc.]
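A review of "separation of concerns" often ends with the same suggestion: pull the data logic out of the component into a pure function. For the data-table example above, that extracted logic might look like this (names and shape are illustrative):

```javascript
// Filtering, sorting, and pagination logic extracted from a data-table
// component into a pure function that is trivial to unit-test and reuse.
function applyTableQuery(rows, { filterText = '', sortBy, page = 1, pageSize = 10 } = {}) {
  let result = rows;
  if (filterText) {
    const needle = filterText.toLowerCase();
    result = result.filter((row) =>
      Object.values(row).some((v) => String(v).toLowerCase().includes(needle))
    );
  }
  if (sortBy) {
    // Copy before sorting so the caller's array is not mutated.
    result = [...result].sort((a, b) =>
      a[sortBy] < b[sortBy] ? -1 : a[sortBy] > b[sortBy] ? 1 : 0
    );
  }
  const start = (page - 1) * pageSize;
  return result.slice(start, start + pageSize);
}
```

The component then only renders the result, which is exactly the structure an LLM reviewer should steer you toward in point 1 of the prompt.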
2. React Hooks Implementation Review
A prompt to review your hooks handling
Analyze these custom React hooks and their implementation.
These hooks handle [state persistence/API data fetching/form validation/etc.].
Check for:
1. Adherence to React hooks rules
2. Dependencies array management
3. Memory leaks and cleanup
4. Edge case handling
5. Reusability and abstraction level
[paste your hooks code here — or just add the file to the LLM context if you are in Cursor or VS Code with Copilot]
React version: [16.8+/17/18]
Additional context: [e.g., “Used across multiple form components”]
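The bug behind most dependency-array findings is an ordinary JavaScript stale closure, which you can see without React at all. This toy example (the function names are made up for illustration) captures the distinction an LLM reviewer should explain:

```javascript
// Stale closure: the returned function captures `value` once, at creation
// time — like a useEffect callback whose dependency array omits the value.
function makeStaleLogger(value) {
  return () => value;
}

// Fresh read: the returned function re-reads the current value on every
// call — like listing the value in the dependency array (or using a ref).
function makeFreshLogger(getValue) {
  return () => getValue();
}
```

When the LLM flags a missing dependency, this is the failure mode it is protecting you from: the effect keeps seeing the value from the render in which it was created.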
3. React Performance Optimization Review
A prompt to review your web app performance
Review this React component for performance issues.
It’s currently [re-rendering too frequently/taking too long to render].
Focus on:
1. Memoization opportunities (useMemo, useCallback, React.memo)
2. State management efficiency
3. Expensive calculations or operations
4. Render optimization
5. Lazy loading implementation
[paste your component code here — or just add the file to the LLM context if you are in Cursor or VS Code with Copilot]
Performance metrics observed: [e.g., “Rendering takes >500ms” or “Re-renders 5x when data changes”]
Component dependencies: [list major libraries/context used]
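Most memoization advice from an LLM comes down to referential equality: `React.memo` and dependency arrays compare props with `Object.is`, and object or array literals are a new reference on every render. This small helper (illustrative, but it mirrors how shallow prop comparison works) makes that concrete:

```javascript
// Shallow prop comparison in the spirit of React.memo's default check:
// two props are "equal" only if every value is the same reference/primitive.
function shallowEqualProps(prev, next) {
  const keys = Object.keys(prev);
  if (keys.length !== Object.keys(next).length) return false;
  return keys.every((k) => Object.is(prev[k], next[k]));
}
```

So `<Child style={{ color: 'red' }} />` defeats memoization because `{ color: 'red' }` is rebuilt each render; hoisting the object out of the component (or wrapping it in `useMemo`) restores reference stability, which is typically the fix the review will suggest.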
Advanced Prompting Techniques for Better Results
To get even better results from LLMs when reviewing your code, consider these advanced prompting strategies:
Before/After Comparison Review
A prompt to help you with a diff
I’ve refactored this [function/component/module]. Please review both versions and tell me if the refactored code improves upon the original.
BEFORE:
[paste original code]
AFTER:
[paste refactored code]
Areas I specifically tried to improve:
1. [e.g., “Error handling”]
2. [e.g., “Performance”]
3. [e.g., “Readability”]
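Here is a tiny before/after pair of the kind you might drop into that template, refactoring a callback-style lookup to async/await (the `db` methods are hypothetical):

```javascript
// BEFORE: nested callbacks, with the error path repeated at each level.
function getUserNameBefore(db, id, callback) {
  db.findUser(id, (err, user) => {
    if (err) return callback(err);
    if (!user) return callback(new Error('Not found'));
    callback(null, user.name);
  });
}

// AFTER: async/await gives one linear control flow and a single error path
// (errors and the "not found" case both surface as rejections).
async function getUserNameAfter(db, id) {
  const user = await db.findUserAsync(id);
  if (!user) throw new Error('Not found');
  return user.name;
}
```

Giving the LLM both versions plus your intent ("readability, error handling") lets it confirm the behavior is preserved, not just that the new code looks nicer.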
Architecture Decision Review
A prompt to consult about architecture decision
I’m choosing between two approaches for [specific problem/feature].
Approach A:
[paste code or describe approach]
Approach B:
[paste code or describe approach]
Considerations:
1. This will need to scale to [X] users/requests
2. We have a team of [junior/senior] developers
3. Long-term maintainability is [more/less] important than immediate delivery
Which approach would you recommend and why?
Code Generation with Constraints
A prompt to limit the code that will be written by the LLM
Generate a [Node.js module/React component] that [does a specific task].
Requirements:
1. Must use [specific libraries/frameworks]
2. Should follow [specific pattern/architecture]
3. Needs to handle these edge cases: [list cases]
4. Must be unit testable
Additional context: [any other important information]
Best Practices for Working with LLMs for Code Reviews
- Start with a clear problem statement: Being specific about your goal helps the LLM provide more targeted feedback.
- Break down large reviews: Instead of requesting a review of an entire file, focus on specific functions or components.
- Iterate on feedback: Use the LLM’s suggestions as a starting point, implement changes, and then ask for a review of your changes.
- Challenge the LLM’s suggestions: Don’t blindly accept all recommendations; ask for explanations of why specific changes are suggested.
- Provide examples: When appropriate, show the LLM examples of code patterns you consider good to align its recommendations with your team’s standards.
Limitations to Keep in Mind
While LLMs are powerful tools for code review, remember:
- They don’t have access to your entire codebase and may miss context-specific issues. However, in VS Code and Cursor you can add your whole codebase to the LLM’s context, so do it!
- They can’t run your code to verify functionality. In VS Code and Cursor you can run code right from the editor, so use that to check their suggestions.
- They might suggest technically correct but contextually inappropriate solutions
- Their training cutoff date means they may not be familiar with the latest libraries or language features
- Security-sensitive code should always have a human review as well
Which models are leading?
Of course… this table won’t age well. So check Google for the latest and greatest models.
Top 5 LLMs for Coding
Here’s a concise table of the five most capable large language models for coding tasks as of May 2025:
| Rank | Model | Creator | Key Strengths | Best For |
|---|---|---|---|---|
| 1 | Claude 3.7 Sonnet | Anthropic | Advanced reasoning, excellent code understanding, strong debugging | Complex algorithms, systems design, code refactoring |
| 2 | GPT-4o | OpenAI | Fast inference, strong multi-language support, API integration | Full-stack development, rapid prototyping |
| 3 | Gemini Ultra 2 | Google | Excellent documentation generation, strong with Google technologies | Documentation, Google Cloud projects, data science |
| 4 | CodeLlama 2 | Meta | Optimized for code completion, lightweight, open weights | Code completion, IDE integration |
| 5 | Mistral Code | Mistral AI | Low latency, efficient context handling, strong with multiple languages | Real-time pair programming, mobile development |
These models excel at different aspects of the coding workflow, from generating complete applications to debugging complex systems and optimizing existing code. Selection depends on your specific task requirements and constraints.
Conclusion
Effective LLM prompting for code reviews is a skill that can dramatically improve your development workflow. By providing clear context, specific requests, and iterating on feedback, you can leverage these AI tools to catch issues early, learn better practices, and ship higher-quality code faster.
Remember that LLMs work best as collaborative tools that augment rather than replace human code reviews. Used properly, they can help you identify issues earlier in the development process, saving time and improving code quality before your code reaches a human reviewer.
What are your favorite prompts for code reviews with LLMs? Share in the comments below!