AI Prompt Templates and Strategies for Developers to Code Smarter
Using AI tools like Cursor, ChatGPT, or Copilot isn’t about writing “magic prompts”. It’s about clear communication, context, and iteration. Think of it like delegating work to an extremely fast but overly literal junior dev.
In the first part of the article, you are going to explore when you should consider using AI to boost your effectiveness. Then, in the second, more hands-on part, you’ll browse a selection of AI prompt templates you can use right away in development, or use as a base for your own personal prompt library.
Crafting effective prompts for AI tools
This article assumes basic knowledge of the AI-assisted development workflow. If you want to refresh your knowledge, check out this AI-assisted development blog post first.
No matter what you use AI for, you should keep its likely effectiveness in the back of your mind. Does it make sense to use AI for this particular use case? How much time can it (really) save? With these estimations in mind, it is easier to know when the solution is a good fit, or when your prompt needs further tuning. Sometimes it is just easier to try a couple of variants of the prompt and compare the results; sometimes it is better to iterate on the prompt itself.
Structure your prompts
There are many frameworks, methodologies, and concepts on how to build your prompt. In general, a solid prompt often includes a mix of these:
| Element | What It Does | Example |
| --- | --- | --- |
| 👤 Role | Instructs the model to think like an expert | "Act as a backend architect experienced in Node.js security." |
| 🛠️ Directive | Clearly defines the task | "Refactor this function to use async/await." |
| 🔁 Examples | Shows desired inputs/outputs | Include before/after code, especially for formatting or style. |
| 📦 Context | Adds important background | "We use Express.js + Zod. This route handles uploads." |
| 🗂 Format | Tells the model how to respond | "Return a Markdown list with code blocks." |
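Combined, these elements form a complete prompt. Here is a quick sketch built from the sample values in the table above (the `fetchUser` snippet is made up for illustration):

```
Act as a backend architect experienced in Node.js security.

Refactor this function to use async/await.

Context: We use Express.js + Zod. This route handles uploads.

Example of the desired style:
// before
fetchUser(id).then((user) => res.json(user));
// after
const user = await fetchUser(id);
res.json(user);

Return a Markdown list of changes with code blocks.
```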
Chain of Thought Prompting
Chain of Thought (CoT) is an approach used to improve LLM reasoning. Instead of giving a final answer immediately, CoT guides the AI to think step by step. It’s similar to how humans break down complex problems into smaller, manageable parts.
This leads to more accurate, logical, and understandable results, especially in tasks that involve some kind of problem-solving.
In your prompt, include an instruction encouraging the model to think step by step. Use expressions similar to these in your prompt:
- “let’s think step by step”
- “explain your reasoning steps”
- “break it down logically”
- “let’s think and plan first”
This approach allows us to take advantage of what AI is good at: focusing on a single, manageable task and completing it fast.
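For example, appending a single CoT line can change how the model approaches a problem (the endpoint and error here are made up for illustration):

```
<instructions>
Find out why the upload endpoint returns 413 for files under the size limit.
Let's think step by step and explain your reasoning steps before proposing a fix.
</instructions>
```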
AI rules
Adding project rules is a simple but powerful way to make your codebase more AI-friendly. Instead of explaining the same naming conventions, architectural patterns, or common edge cases over and over in chat, you can teach the editor once and let it apply those rules consistently.
Several AI tools support this idea but in different ways:
- Cursor IDE allows you to define structured rules in `.mdc` files that attach descriptions and metadata to specific files or file patterns.
- Continue.dev offers a `continue.config.json` where you can set system prompts, include/exclude files, and shape how the AI behaves.
- Claude (via API or tools like Continue) supports rule-like behavior using system prompts or embedded project instructions.
- Gemini CLI is also evolving with growing support for custom context injection, though it doesn’t yet have native rule files as of today.
Here’s a simple example of a rule defined in Cursor using an `.mdc` file:
```
---
description: Ensure class fields starting with an underscore are marked as private.
globs:
  - "**/*.ts"
rules:
  - type: pattern
    pattern: "^(?!.*private)(\\s*)(_[a-zA-Z0-9]+)\\s*[:=]"
    message: "Class fields starting with '_' should be marked as private."
    replacement: "private $2"
alwaysApply: false
---
```
```ts
// Before
class Counter {
  _count = 0;
}

// After
class Counter {
  private _count = 0;
}
```
The real-life example of a ‘Mediator Query Command Pattern’ rule below might give you an idea of how to structure such a rule and what aspects you can include in your AI rules.
````md
---
description: Adding endpoints to your service following the Mediator Query Command Pattern
globs:
alwaysApply: false
---
# Mediator Query Command Pattern
This guide outlines the implementation pattern for Commands and Queries in YOUR PROJECT.
## Directory Structure
<!-- use tree syntax to show the directory structure -->
```
src/
└── {domain}/
    ├── commands/
    │   └── {commandName}/
    │       ├── {CommandName}Command.ts
    │       ├── {CommandName}CommandHandler.ts
    │       └── {CommandName}ServiceHandler.ts
    └── queries/
        └── {queryName}/
            ├── {QueryName}Query.ts
            ├── {QueryName}QueryHandler.ts
            └── {QueryName}ServiceHandler.ts
```
## File Naming Conventions
1. **Class Names**: PascalCase
- Queries: `GetEntityQuery`, `GetEntityQueryHandler`, `GetEntityServiceHandler`
- Commands: `CreateEntityCommand`, `CreateEntityCommandHandler`, `CreateEntityServiceHandler`
<!-- 2. More naming conventions if needed -->
## Query Implementation
### Query Class:
<!-- YOUR QUERY CLASS IMPLEMENTATION EXAMPLE -->
### Query Handler:
<!-- YOUR QUERY HANDLER IMPLEMENTATION EXAMPLE -->
### Service Handler (for HTTP endpoints):
<!-- YOUR SERVICE HANDLER IMPLEMENTATION EXAMPLE -->
## Command Implementation
### Command Class:
<!-- YOUR COMMAND CLASS IMPLEMENTATION EXAMPLE -->
## Key Components
1. **Query/Command Classes**:
- Extend `Query<TResult>` or `Command`
- Use `@serializableName()` with kebab-case
- Define interfaces for parameters and results
- Use validators for fields
2. **Handlers**:
- Implement `IQueryHandler` or `ICommandHandler`
- Use dependency injection for repositories
- Handle business logic
3. **Service Handlers**:
- Extend `RestHandler` for HTTP endpoints
- Define URL pattern, versions, and HTTP method
- Use mediator to send queries/commands
- Handle request/response serialization
## Best Practices
1. **Naming**:
- Use PascalCase for class names and filenames
- Prefix queries with "Get" and commands with action verbs
2. **Structure**:
- Keep related files in domain-specific directories
- Group by feature rather than type
- Include DTOs and interfaces in the same directory
3. **Implementation**:
- Use proper dependency injection
- Implement comprehensive validation
- Handle errors consistently
- Document public APIs with Swagger annotations
## Examples
<!-- [use mdc link syntax to link to the real files in your project] -->
For real-world examples, see:
- Query implementation: [GetEntityQuery.ts](mdc:services/serviceName/src/entity/queries/getEntity/GetEntityQuery.ts)
- Service Handler: [GetEntityServiceHandler.ts](mdc:services/serviceName/src/entity/queries/getEntity/GetEntityServiceHandler.ts)
````
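To give you a feel for what such a rule produces, here is a minimal sketch of a query class following the conventions above. The `Query<TResult>` base class and the `@serializableName()` decorator are project-specific helpers referenced by the rule, not a public library, so treat this as an illustration only:

```ts
// Illustrative only: Query<TResult> and @serializableName() are assumed
// project-specific helpers from the rule above, not a public library.
interface IGetSubscriptionParams {
  subscriptionId: string;
}

interface IGetSubscriptionResult {
  id: string;
  status: 'active' | 'canceled';
}

@serializableName('get-subscription') // kebab-case, as the rule requires
export class GetSubscriptionQuery extends Query<IGetSubscriptionResult> {
  public constructor(public readonly params: IGetSubscriptionParams) {
    super();
  }
}
```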
AI Prompt templates for common cases
These are real-world scenarios where developers use AI to help them: use cases where Cursor (and similar tools) can be really helpful and save you time.
The following templates and examples contain a mix of Markdown (`md`) and Extensible Markup Language (`xml`) for creating prompts. This form is recommended for use with Anthropic and OpenAI models. If you want to follow a different format, transform the templates as necessary. Also, `@file` or `@git` represent the syntax for, or a way to visualize, adding specific references to the context. For AI rules, the Cursor `mdc` format will be used.
Self-Code Review Helper
This is a typical use case. You would mostly use some AI embedded in your IDE, like GitHub Copilot.
Use case: Ask AI to review a file or diff before requesting a peer review.
Why: Helps catch low-hanging fruit (naming, unused vars, logic bugs).
Prompt example:
```
<context>
I need a thorough code review of @file [FILE_PATH]
[OPTIONAL_ADDITIONAL_CONTEXT]
</context>
<instructions>
Please review this code with a critical eye for:
- Potential bugs or edge cases
- Performance issues
- Adherence to [STANDARDS/PATTERNS]
- Clarity and maintainability
- [Refer to existing rules if they exist @yourRule.mdc]
Don't just compliment the code - find the weak spots.
</instructions>
```
Result:
This is a real result from one of the script files in my codebase. The optional parts of the template have been removed.
**1. Potential Bugs / Edge Cases**

- **No Error Handling for `fs.readJsonSync`:** If a package in `external` has a malformed or missing `package.json`, `fs.readJsonSync` will throw and crash the script. Suggestion: wrap this in a try/catch and log which repo failed.
- **No Error Handling for `execSync`:** If `yarn run postinstall` fails in any external repo, the whole script will throw and stop. Suggestion: wrap in try/catch, log the error, and consider whether to continue or abort.
- **Assumes All Directories in `external` Are Node Packages:** The script assumes every directory in `external` is a package with a `package.json`. If not, it will throw.
- **No Check for `CKEDITOR5_ROOT_PATH`:** If `CKEDITOR5_ROOT_PATH` is undefined or incorrect, `path.resolve` will fail or point to the wrong place.
- **No Check for Husky Installation:** If `husky` is not installed, the import or call may fail.

**2. Performance Issues**

- **Synchronous File and Process Operations:** Uses `fs.readdirSync`, `fs.statSync`, `fs.readJsonSync`, and `execSync`. For a large number of packages, this will block the event loop and slow down the process. Suggestion: for a postinstall script, this is usually acceptable, but consider async versions if you expect many packages.

**3. Clarity and Maintainability**

- **No Logging:** There is no output to indicate progress, which makes debugging harder if something fails.
- **No Comments on Filtering Logic:** The filtering steps are not commented, making it harder to follow the intent for each filter.
- **No Option to Skip Failing Packages:** If one package fails, the whole process stops. Consider an option to continue with others and summarize errors at the end.
- **No Validation of `postinstall` Script:** The script only checks for the existence of a `postinstall` script in `package.json`, but does not validate its type or content.

**4. Other Observations**

- **Hardcoded Use of `yarn`:** The script always runs `yarn run postinstall`, which may not work if the user uses npm or pnpm.
- **No Cross-Platform Path Handling:** Uses `path.join` and `path.resolve`, which is good, but make sure all downstream scripts are also cross-platform.

**Summary of Recommendations**

- Add try/catch blocks around `fs.readJsonSync` and `execSync`.
- Add logging for progress and errors.
- Consider handling or skipping non-package directories gracefully.
- Optionally, allow the script to continue on errors and report all failures at the end.
- Consider supporting other package managers or making it configurable.

Let me know if you want a refactored version with these improvements!
Debugging Support
Use case: When stuck, use AI to explain what might be going wrong or suggest next steps. You can paste snippets of logs to understand them better.
⚠️ Read the content (i.e. the logs) first to get a grasp of what might be going on, and be mindful of security: logs can contain secrets or other sensitive data.
Prompt template:
```
<context>
I'm debugging [ISSUE_DESCRIPTION]
Error/logs:
[PASTE_ERROR_LOGS]
Relevant files:
@file [SUSPECTED_FILE]
[OPTIONAL git reference]
Issue was introduced in [@git RELEVANT_COMMIT]
</context>
<instructions>
- Analyze what might be causing this issue
- Suggest debugging steps to isolate the problem
- Ask clarifying questions if you need more information
- Think step by step
</instructions>
```
Improving Docs & Comments
Use case: Add JSDoc, generate endpoint descriptions, rewrite README sections, generate parts of event-catalog event descriptions, or otherwise make docs generation more effective.
Prompt template:
```
<context>
I need to improve documentation for @file [TARGET_FILE/MODULE/README]
[ADDITIONAL_CONTEXT]
</context>
<instructions>
- [ADD_JSDOC/IMPROVE_README/CREATE_DOCS]
- Follow our documentation style in @file [EXAMPLE_FILE]
- Include [EXAMPLES/API_USAGE/DIAGRAMS]
- Ensure it covers [KEY_ASPECTS]
</instructions>
```
Example:
```
<context>
I need to improve documentation in @file README.md
We recently added a new CLI tool to @file src/cli/index.ts
</context>
<instructions>
- Add a new section to the README explaining the CLI tool
- Follow our documentation style in the existing README
- Include examples of each CLI command with options
- Ensure it covers installation, common use cases, and troubleshooting
- Add a table of available commands similar to the API reference
</instructions>
```
Generating Boilerplate / File Structures
Use case: Kick off base structures for feature branches, especially with repetitive patterns.
Example scenario: You want to create a scaffold for patterns already used in your project.
Using specific rules helps the AI generate consistent code that follows your team’s established patterns. For complex structures that your team uses regularly, create dedicated rules.
Prompt Template:
```
<context>
I need to create a new domain module for handling "subscriptions" following our Mediator-Query-Command pattern.
The structure should be similar to the existing @file services/cks/cks-customer-portal/src/invoices directory.
Requirements:
- Should handle CRUD operations for subscriptions
- Needs queries for: getting subscription details, listing all subscriptions, and checking subscription status
- Needs commands for: creating subscription, updating subscription, canceling subscription
</context>
<instructions>
@mediatorQueryCommand
Please generate the folder structure and files for this new "subscriptions" domain following our MQC pattern:
1. Create the main directory structure with models, commands, and queries folders
2. For each command, create the appropriate Command, CommandHandler, and ServiceHandler files
3. For each query, create the Query, QueryHandler, and ServiceHandler files
4. Generate an initialize.ts file similar to the one in the invoices module
5. Ensure all naming follows our conventions (PascalCase for classes, kebab-case for serializable names)
Use the same organization and file structure as seen in the invoices domain.
</instructions>
```
Generating Tests
Use case: Generate tests based on existing code, examples, and AI rules.
Prompt Template:
```
<context>
You are an experienced Node.js developer and you want to write tests
for the @[SOURCE_FILE] [OPTIONAL using @test-rules.mdc]
</context>
<instructions>
- Please write a test suite for @[SOURCE_FILE]
- Remember to match the style and practices present in @file [TARGET_FILE_PATH]
- Be thorough and cover all possible test cases.
- Keep in mind [ADDITIONAL_RULES]
</instructions>
```
Refactoring Support
Use case: Migrate to another language (JS to TS), rename logic, or rework patterns.
Prompt Templates:
```
<context>
Currently @file [SOURCE_FILE_PATH] is written in pure JavaScript, making it harder to maintain.
</context>
<instructions>
- Please rewrite the @file [SOURCE_FILE_PATH] to TypeScript keeping in mind the project rules
- Remember to match the style and practices present in @file [TARGET_FILE_PATH]
- Remember to use async/await instead of Promise.then chains.
- [ADDITIONAL_RULES]
</instructions>
```
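As a tiny illustration of the async/await instruction above, this is the shape of rewrite you would expect (the `fetchUser`/`fetchOrders` helpers and their types are hypothetical):

```ts
// Before (JavaScript, Promise.then chain) - hypothetical helpers
function loadUserWithOrders(id) {
  return fetchUser(id).then((user) =>
    fetchOrders(user.id).then((orders) => ({ user, orders }))
  );
}

// After (TypeScript, async/await)
async function loadUserWithOrders(
  id: string
): Promise<{ user: User; orders: Order[] }> {
  const user = await fetchUser(id);
  const orders = await fetchOrders(user.id);
  return { user, orders };
}
```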
```
<context>
Currently @file [SOURCE_FILE_PATH] contains several problems that need to be addressed
- [PROBLEMS_REQUIRING_REFACTOR] # deprecated code style, pattern, etc
</context>
<instructions>
- Please refactor the @file [SOURCE_FILE_PATH] according to the specified context and project rules
- Remember to match the style and practices present in @file [EXAMPLE_FILE_PATH]
- [ADDITIONAL_RULES]
</instructions>
```
```
<context>
In our repository we decided that
we should use [PATTERN] instead of [PATTERN_LEGACY] to perform [TASK]
The [PATTERN_LEGACY] was problematic because
- [REASONS]
</context>
<instructions>
- Please refactor the @file to use [PATTERN] keeping in mind provided context and project rules
- Remember to match the code style present in [EXAMPLE_FILE]
</instructions>
```
Project Management & Maintenance
Use case: In large monorepos, it’s common to encounter wide-scoped maintenance tasks that are repetitive in structure but span dozens of isolated projects. These tasks often involve applying the same steps to many modules that each have slightly different configurations or edge cases. They aren’t complex in terms of logic, but they can easily break things if handled sloppily or inconsistently.
This approach also takes advantage of the agent’s ability to run terminal commands.
Prompt Template:
```
<context>
I'm currently working in a large monorepo with many Node.js projects (modules and services).
There are likely many unused dependencies across these projects that need to be cleaned up.
The projects that need attention:
- Located in `common/*` and `services/*` folders
- Have package names starting with `@cksource-cs/` in package.json
- Can be managed using pnpm workspace features
</context>
<instructions>
Help me clean up unused dependencies from all these projects by following these exact steps:
1. Generate a complete list of all projects from the specified folders:
   - Create a temporary file to track projects and mark those already processed
   - Sort the list with modules first for better organization
   - Make absolutely sure you've found all projects matching our criteria
2. For each project, perform these specific steps in sequence:
   - Run `npx depcheck --ignores="@src/*, tsconfig-paths"` to identify unused dependencies
   - For any dependencies flagged as unused, verify they're truly not used in runtime by searching ALL files in both `src/` and `tests/` folders
   - If confirmed unused, remove them from package.json
   - If no dependencies were removed, skip to the next project
3. For each project where changes were made:
   - Bump the patch version in package.json
   - Run `csli install [package_name]` (where [package_name] is from package.json)
   - Run `csli build [package_name]` to verify the build succeeds
   - If build errors occur related to missing packages, add those back to package.json and repeat step 3
   - Run `csli test [package_name]` to ensure tests pass
   - If test errors occur related to missing packages, add those back to package.json, repeat step 3
Process each project thoroughly and iteratively. Don't skip any steps or projects. Be methodical and don't rush. Document any special cases or issues encountered.
</instructions>
```
This is exactly where a structured and analyzed plan helps. You can break down a task like this into a repeatable plan, track which sub-projects you've processed, and log findings or adjustments as you go. This plan utilizes the Chain of Thought approach to guide the AI step by step.
SQL
Use case: Suggest performance improvements, index usage, or query rewrites.
Use AI to understand the suggested improvements before applying them.
Prompt Template:
````
<context>
I’m experiencing performance issues with a SQL query. It’s taking over [X] seconds to execute. I need to optimize this SQL query:
```
[SQL_QUERY]
```
Database: [POSTGRESQL/MYSQL/ETC + version]
Table structure: [TABLE_DEFINITIONS]
The EXPLAIN output from the DB engine: [EXPLAIN_OUTPUT]
</context>
<instructions>
Analyze this query and suggest optimizations:
- Identify performance bottlenecks
- Suggest index improvements
- Rewrite the query where beneficial
- Explain the reasoning behind each suggestion
- Prioritize suggestions by expected impact
</instructions>
````
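As an example of the kind of rewrite such a prompt often surfaces, here is a classic one: replacing a function call on an indexed column with a sargable range predicate (the `orders` table and its index are hypothetical):

```sql
-- Before: DATE(created_at) prevents an index on created_at from being used
SELECT * FROM orders WHERE DATE(created_at) = '2024-01-15';

-- After: a plain range predicate on the raw column can use the index
SELECT * FROM orders
WHERE created_at >= '2024-01-15'
  AND created_at <  '2024-01-16';
```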
These AI prompt templates are just a start
These are the common use cases and their AI Prompt templates; feel free to customize them to your specific needs and workflows.
And remember, the most effective prompts include:
- clear context
- specific instructions
- relevant examples
If you wonder where you can test these out, generate a CKEditor starter project using Builder and run the prompts against it!
Bookmark this blog, copy the templates to your repo, or just keep these practices in mind.
And a hint before the end: you can use the prompt templates with AI to generate the actual prompt.
Happy prompting! 🥑