When introducing GitHub Copilot's code review feature to a project, a key point is how well it can adhere to existing coding conventions. This article explains the results of verifying whether Copilot can recognize project-specific rules and make appropriate suggestions. It also introduces a case where the issue of broken review comment formatting was resolved by adding instructions to the documentation (prompt engineering).
## Purpose of Verification
The main purposes of this verification are the following two points:
- Applying Custom Rules: Confirm whether Copilot can understand custom rules defined in the repository's documentation and point them out appropriately.
- Controlling Output: Confirm whether Copilot's behavior can be improved through natural language instructions when there are deficiencies in its output format.
## Preparing the Verification Environment
For the verification, we prepared a rule definition file and test code that intentionally violates the rules.
### Rule Definition (`docs/code-review-considerations.md`)
We defined the following three rules:
- Prohibition of the `any` type
- Mandatory return type definitions for functions
- Prohibition of non-null assertions (`!`)
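For reference, code that complies with all three rules might look like the following sketch (the function names are illustrative, not part of the verification repository):

```typescript
interface User {
  name: string;
  email?: string;
}

// Rule 1: use `unknown` (or a specific type) instead of `any`
// Rule 2: declare the return type explicitly
function describe(value: unknown): string {
  return typeof value === "string" ? value : JSON.stringify(value);
}

// Rule 3: narrow optional values instead of asserting with `!`
function emailLength(user: User): number {
  return user.email?.length ?? 0;
}
```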
### Test Code (`src/violation-test.ts`)
We created TypeScript code that violates the above rules.
```typescript
// This file is intentionally created to violate code review rules for testing purposes.

interface User {
  name: string;
  email?: string;
}

// Violation 1: Avoid `any` type
// Violation 2: Explicit Return Types (missing return type)
function processUserData(data: any) {
  console.log("Processing data: " + data);
  return { processed: true };
}

// Violation 3: No Non-null Assertions
function getUserName(user: User): string {
  // Using ! operator on a potentially undefined property
  const emailLength = user.email!.length;
  return user.name + " (" + emailLength + ")";
}

const rawData: any = "some raw data";
processUserData(rawData);
```
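For comparison, a version of the same code with all three violations fixed could look like this (a sketch of one possible fix, not part of the verification repository):

```typescript
interface User {
  name: string;
  email?: string;
}

// Fix 1 & 2: `unknown` instead of `any`, plus an explicit return type
function processUserData(data: unknown): { processed: boolean } {
  console.log("Processing data: " + String(data));
  return { processed: true };
}

// Fix 3: optional chaining with a fallback instead of the `!` assertion
function getUserName(user: User): string {
  const emailLength = user.email?.length ?? 0;
  return user.name + " (" + emailLength + ")";
}

const rawData: unknown = "some raw data";
processUserData(rawData);
```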
## Verification Process and Addressing Issues
### Initial Review and Formatting Problems
When we created a Pull Request and assigned Copilot as a reviewer, the rule violations themselves were correctly detected.

However, line breaks in the review comments were output as the literal characters `\n`, making them very hard to read.
Example of Problematic Output:

```
Avoid using the `any` type.\nUse a specific type or `unknown` instead.
```
### Solution via Prompt Engineering
To resolve this issue, we added a meta-instruction ("Note to Copilot") to the beginning of the rule definition file (`docs/code-review-considerations.md`). Specifically, we instructed it to use actual line breaks in the Markdown text instead of printing the literal `\n` characters.
Added Instruction:
> **Note to Copilot:** When generating review comments, please use **actual line breaks** in the Markdown text instead of printing the literal `\n` characters. Ensure the output is properly formatted multi-line Markdown.
### Results After Correction
After adding the instruction and conducting the review again, the line breaks were rendered correctly, and we obtained highly readable Markdown-formatted comments.
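Rendered correctly, the earlier problematic comment now reads as multi-line Markdown:

```
Avoid using the `any` type.
Use a specific type or `unknown` instead.
```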

## Conclusion
The verification confirmed that GitHub Copilot can refer to documents in the repository and perform reviews based on project-specific conventions. Furthermore, even when there are problems with the output format, Copilot's behavior can be effectively controlled by writing specific instructions in natural language within the documentation.
When applying project-specific rules, simply listing the rules is not enough; appropriately placing instructions for Copilot (meta-prompts) is the key to eliciting high-quality reviews.
## Appendix: Files Used for Verification
1. `.github/copilot-instructions.md`

```markdown
When asked to perform a code review or when assigned as a reviewer on a GitHub PR, please read `docs/code-review-considerations.md` before responding.
```
2. `docs/code-review-considerations.md`

```markdown
# Code Review Considerations

> **Note to Copilot:** When generating review comments, please use **actual line breaks** in the Markdown text instead of printing the literal `\n` characters. Ensure the output is properly formatted multi-line Markdown.

## Must

When finding issues related to the following points, please include the badge  in your review comment.

1. **Avoid `any` type**: Do not use `any`. Use `unknown` or specific types to ensure type safety.
2. **Explicit Return Types**: All functions must have explicit return types.
3. **No Non-null Assertions**: Avoid using the non-null assertion operator (`!`). Use optional chaining or type narrowing instead.
```
3. `src/violation-test.ts`

```typescript
// This file is intentionally created to violate code review rules for testing purposes.

interface User {
  name: string;
  email?: string;
}

// Violation 1: Avoid `any` type
// Violation 2: Explicit Return Types (missing return type)
function processUserData(data: any) {
  console.log("Processing data: " + data);
  return { processed: true };
}

// Violation 3: No Non-null Assertions
function getUserName(user: User): string {
  // Using ! operator on a potentially undefined property
  const emailLength = user.email!.length;
  return user.name + " (" + emailLength + ")";
}

const rawData: any = "some raw data";
processUserData(rawData);
```