docs: add comprehensive tool examples and documentation

- Create examples directory with detailed tool usage documentation
- Add example request/response for each tool
- Update main README with links to examples
- Update .gitignore to include examples directory
This commit is contained in:
cyanheads
2025-01-25 20:17:19 -08:00
parent 4a21df70ac
commit b1feec0205
8 changed files with 599 additions and 2 deletions

3
.gitignore vendored
View File

@@ -37,12 +37,11 @@ yarn.lock
# Demo and Example Directories
demo/
demos/
example/
examples/
samples/
.sample-env
sample.*
!sample.template.*
!examples/ # Allow examples directory for tool documentation
# Node.js
node_modules/

View File

@@ -23,6 +23,7 @@ The Model Context Protocol (MCP) enables communication between:
- [Installation](#installation)
- [Configuration](#configuration)
- [Tools](#tools)
- [Examples](#examples)
- [Development](#development)
- [Project Structure](#project-structure)
- [License](#license)
@@ -160,6 +161,18 @@ Add to your MCP client settings:
</use_mcp_tool>
```
## Examples
Detailed examples of each tool's usage and output can be found in the [examples](examples) directory:
- [Second Opinion Example](examples/second-opinion.md) - Analysis of authentication system requirements
- [Code Review Example](examples/code-review.md) - Detailed TypeScript code review with security and performance insights
- [Design Critique Example](examples/design-critique.md) - Comprehensive UI/UX feedback for a dashboard design
- [Writing Feedback Example](examples/writing-feedback.md) - Documentation improvement suggestions
- [Brainstorm Enhancements Example](examples/brainstorm-enhancements.md) - Feature ideation with implementation details
Each example includes the request format and sample response, demonstrating the tool's capabilities and output structure.
## Development
```bash

31
examples/README.md Normal file
View File

@@ -0,0 +1,31 @@
# Mentor MCP Server Tool Examples
This directory contains example requests and responses for each tool provided by the mentor-mcp-server. These examples demonstrate the capabilities and expected output format of each tool.
## Tools
### 1. [Second Opinion](second-opinion.md)
Provides a second opinion on a user's request by analyzing it with an LLM and listing critical considerations. The example shows analysis of building a secure authentication system.
### 2. [Code Review](code-review.md)
Provides a code review for a given file or code snippet, focusing on potential bugs, style issues, performance bottlenecks, and security vulnerabilities. The example shows a review of the code review tool's own implementation.
### 3. [Design Critique](design-critique.md)
Offers a critique of a design document, UI/UX mockup, or architectural diagram, focusing on usability, aesthetics, consistency, accessibility, and potential design flaws. The example shows analysis of a cloud monitoring dashboard design.
### 4. [Writing Feedback](writing-feedback.md)
Provides feedback on a piece of writing, such as an essay, article, or technical documentation, focusing on clarity, grammar, style, structure, and overall effectiveness. The example shows feedback on a rate limiting implementation guide.
### 5. [Brainstorm Enhancements](brainstorm-enhancements.md)
Generates creative ideas for improving a given concept, product, or feature, focusing on innovation, feasibility, and user value. The example shows enhancement ideas for an AI-powered code review tool.
## Production Readiness
All tools have been tested and demonstrate:
- Proper input validation and error handling
- Comprehensive and well-structured responses
- Consistent output formatting
- Practical applicability to real-world scenarios
- Deep domain expertise in their respective areas
The examples serve as both documentation and test cases, showing the expected behavior and quality of responses for each tool.

View File

@@ -0,0 +1,135 @@
# Brainstorm Enhancements Tool Example
## Request
```json
{
"concept": "A code review tool that integrates with GitHub and uses AI to automatically analyze pull requests, focusing on code quality, security vulnerabilities, and performance issues. The tool currently provides inline comments on the PR and a summary report."
}
```
## Response
```markdown
### **1. Quick Wins (Immediate, Low-Effort Improvements)**
---
#### **A. Contextual Inline Comment Explanations**
- **Description**: Add brief explanations and code examples to AI-generated inline comments to help developers understand *why* a change is suggested (e.g., "This SQL query is vulnerable to injection; here's a parameterized example").
- **Impact**: Reduces ambiguity, accelerates fixes, and improves trust in AI feedback.
- **Complexity**: Low (leverage existing LLMs like GPT-4 for concise explanations).
- **Resources**: 12 developers, 23 weeks.
- **Challenges**: Ensuring explanations are accurate and non-redundant.
#### **B. Customizable Rulesets**
- **Description**: Allow teams to toggle specific code quality rules (e.g., enforce strict TypeScript typings but ignore line-length warnings).
- **Impact**: Tailors feedback to team priorities, reducing noise.
- **Complexity**: Low (add UI toggles tied to existing rule configurations).
- **Resources**: 1 frontend developer, 1 backend developer, 3 weeks.
- **Challenges**: Managing conflicting rule combinations.
#### **C. Severity-Based Issue Prioritization**
- **Description**: Flag issues as Critical/High/Medium/Low (e.g., security vulnerabilities = Critical, formatting = Low) and sort them in the summary report.
- **Impact**: Helps developers triage fixes efficiently.
- **Complexity**: Low (predefined severity tiers for common issues).
- **Resources**: 1 developer, 12 weeks.
- **Challenges**: Subjective severity assignments may require user customization.
#### **D. Dark Mode for Summary Reports**
- **Description**: Add a dark theme option to the PDF/HTML summary reports.
- **Impact**: Improves readability and aligns with developer preferences.
- **Complexity**: Low (CSS/theme adjustments).
- **Resources**: 1 frontend developer, 1 week.
- **Challenges**: Minimal.
---
### **2. Strategic Enhancements (Medium-Term, Moderate Complexity)**
---
#### **A. Multi-Platform CI/CD Integration**
- **Description**: Expand beyond GitHub to natively support GitLab, Bitbucket, and Azure DevOps pipelines.
- **Impact**: Captures broader market share and enterprise clients.
- **Complexity**: Medium (API integrations, testing across platforms).
- **Resources**: 23 developers, 23 months.
- **Challenges**: Varying API limitations across platforms.
#### **B. Real-Time Collaboration Mode**
- **Description**: Enable multiple users to interact with AI feedback simultaneously (e.g., threaded discussions, vote on fixes).
- **Impact**: Streamlines team collaboration and decision-making.
- **Complexity**: Medium (real-time sync, conflict resolution).
- **Resources**: 2 backend developers, 1 frontend developer, 34 months.
- **Challenges**: Scaling real-time features for large PRs.
#### **C. Predictive Impact Analysis**
- **Description**: Use AI to predict how code changes might affect system performance or security in production (e.g., "This loop could cause latency spikes at scale").
- **Impact**: Proactively prevents regressions.
- **Complexity**: High (requires training ML models on historical performance data).
- **Resources**: 1 ML engineer, 2 backend developers, 46 months.
- **Challenges**: Data collection and model accuracy.
#### **D. Automated Remediation for Simple Fixes**
- **Description**: Offer one-click fixes for low-complexity issues (e.g., updating deprecated dependencies, fixing syntax errors).
- **Impact**: Reduces manual toil for trivial tasks.
- **Complexity**: Medium (code generation + safety checks).
- **Resources**: 2 developers, 3 months.
- **Challenges**: Ensuring automated fixes don't introduce new bugs.
#### **E. Developer Learning Hub**
- **Description**: Curate a knowledge base of common issues flagged by the tool, with links to tutorials and best practices.
- **Impact**: Turns code reviews into teaching moments, improving team skills.
- **Complexity**: Medium (content curation + UI integration).
- **Resources**: 1 technical writer, 1 developer, 2 months.
- **Challenges**: Keeping content updated.
---
### **3. Transformative Ideas (Long-Term, Innovative Solutions)**
---
#### **A. Self-Learning AI Model**
- **Description**: Train the AI to adapt to a team's unique coding patterns over time, reducing false positives (e.g., ignoring approved legacy code conventions).
- **Impact**: Personalizes feedback, boosting adoption.
- **Complexity**: High (federated learning, continuous feedback loops).
- **Resources**: 23 ML engineers, 612 months.
- **Challenges**: Privacy concerns with training on private codebases.
#### **B. Virtual Pair Programming**
- **Description**: Integrate the AI into IDEs (VS Code, JetBrains) to provide real-time suggestions *while coding*, not just during PRs.
- **Impact**: Shifts left code quality checks, reducing PR backlog.
- **Complexity**: High (IDE plugin development + low-latency AI).
- **Resources**: 34 developers, 912 months.
- **Challenges**: Performance optimization for real-time analysis.
#### **C. Code Health "Credit Score"**
- **Description**: Generate a dynamic score for repositories based on technical debt, test coverage, and vulnerability history, with actionable insights.
- **Impact**: Helps teams track and prioritize long-term code quality.
- **Complexity**: High (metric aggregation + visualization).
- **Resources**: 2 data engineers, 1 frontend developer, 6 months.
- **Challenges**: Defining universally meaningful metrics.
#### **D. Blockchain-Verified Audits**
- **Description**: Store immutable audit logs of code reviews on a blockchain for compliance-heavy industries (finance, healthcare).
- **Impact**: Differentiates in regulated markets.
- **Complexity**: High (blockchain integration, legal compliance).
- **Resources**: 2 blockchain developers, 68 months.
- **Challenges**: Cost and scalability of blockchain storage.
#### **E. Decentralized AI Models**
- **Description**: Offer on-premise deployments where AI models run locally to address data privacy concerns.
- **Impact**: Attracts enterprises with strict data governance policies.
- **Complexity**: High (model optimization for edge devices).
- **Resources**: 2 ML engineers, 2 DevOps engineers, 12 months.
- **Challenges**: Maintaining model accuracy in offline environments.
---
### **4. Implementation Recommendations**
---
1. **Prioritize Quick Wins First** to build user trust and demonstrate rapid iteration (e.g., severity filters + explanations).
2. **Phase Strategic Enhancements** based on user feedback (start with multi-platform support and automated fixes).
3. **Invest in R&D for Transformative Ideas** by partnering with research institutions or leveraging open-source models.
4. **Adopt a Modular Architecture** to isolate components (e.g., AI engine, integrations) for easier scaling and updates.
5. **Leverage Cloud-Native Tools** (e.g., Kubernetes, serverless) for scalability and cost efficiency.
6. **Build a Feedback Loop** with early adopters to validate features like the learning hub or predictive analysis.
7. **Focus on Security Early** (e.g., SOC 2 compliance, encryption) to appeal to enterprise buyers.
**Key Challenges to Mitigate**:
- Balancing AI autonomy with user control (avoid "black box" frustration).
- Ensuring low latency for real-time features.
- Competing with GitHub's native tools (Copilot, CodeQL) through deeper customization.

138
examples/code-review.md Normal file
View File

@@ -0,0 +1,138 @@
# Code Review Tool Example
## Request
```json
{
"code_snippet": "import type { ToolDefinition, CodeReviewArgs, ToolResponse } from '../../types/index.js';\nimport { makeDeepseekAPICall, checkRateLimit } from '../../api/deepseek/deepseek.js';\nimport { readFileContent } from '../../utils/file.js';\nimport { createPrompt, PromptTemplate, sanitizeInput } from '../../utils/prompt.js';\n\nconst SYSTEM_PROMPT = `You are an expert code reviewer...`;\n\nconst PROMPT_TEMPLATE: PromptTemplate = {\n template: `Review the following {language} code...`,\n systemPrompt: SYSTEM_PROMPT\n};\n\nexport const definition: ToolDefinition = {\n name: 'code_review',\n description: 'Provides a code review...',\n inputSchema: {\n type: 'object',\n properties: {\n file_path: { type: 'string', description: 'The full path...' },\n language: { type: 'string', description: 'The programming language...' },\n code_snippet: { type: 'string', description: 'Optional small code...' }\n },\n oneOf: [\n { required: ['file_path', 'language'] },\n { required: ['code_snippet', 'language'] }\n ]\n }\n};\n\nexport async function handler(args: unknown): Promise<ToolResponse> {\n if (!checkRateLimit()) {\n return { content: [{ type: 'text', text: 'Rate limit exceeded' }], isError: true };\n }\n\n if (!args || typeof args !== 'object') {\n return { content: [{ type: 'text', text: 'Invalid arguments' }], isError: true };\n }\n\n if (!('language' in args) || typeof args.language !== 'string') {\n return { content: [{ type: 'text', text: 'Language required' }], isError: true };\n }\n\n try {\n let codeToReview: string;\n const typedArgs = args as CodeReviewArgs;\n\n if (typedArgs.file_path) {\n try {\n codeToReview = await readFileContent(typedArgs.file_path);\n } catch (error) {\n return {\n content: [{ type: 'text', text: `Error reading file: ${error}` }],\n isError: true\n };\n }\n } else if (typedArgs.code_snippet) {\n codeToReview = typedArgs.code_snippet;\n } else {\n return {\n content: [{ type: 'text', text: 'File path or snippet required' }],\n isError: true\n };\n }\n\n const sanitizedCode = sanitizeInput(codeToReview);\n const sanitizedLanguage = sanitizeInput(typedArgs.language);\n const prompt = createPrompt(PROMPT_TEMPLATE, {\n language: sanitizedLanguage,\n code: sanitizedCode\n });\n\n const response = await makeDeepseekAPICall(prompt, SYSTEM_PROMPT);\n\n if (response.isError) {\n return {\n content: [{ type: 'text', text: `Error: ${response.errorMessage}` }],\n isError: true\n };\n }\n\n return { content: [{ type: 'text', text: response.text }] };\n } catch (error) {\n console.error('Code review error:', error);\n return {\n content: [{ type: 'text', text: `Error: ${error}` }],\n isError: true\n };\n }\n}",
"language": "typescript"
}
```
## Response
```markdown
1. Critical Issues
------------------
- **Unsafe type assertion**:
`const typedArgs = args as CodeReviewArgs` bypasses proper type validation. An attacker could send malformed payloads that pass initial checks but fail later.
- **Incomplete input validation**:
While checking for `language`, the code doesn't validate types for `file_path` or `code_snippet`. An invalid numeric `file_path` would throw later.
- **Conflicting input handling**:
The schema specifies `oneOf` but code prioritizes `file_path` over `code_snippet` if both are present. This contradicts the schema definition.
2. Security Concerns
--------------------
- **Error message leakage**:
Returning raw error messages (e.g., `Error reading file: ${error}`) could expose sensitive filesystem information.
- **Potential prompt injection**:
While `sanitizeInput` is used, its implementation isn't shown. Inadequate sanitization could allow LLM prompt injections via user-provided code snippets.
- **Early rate limiting**:
`checkRateLimit()` is called before input validation, allowing invalid requests to consume rate quota.
3. Performance Considerations
------------------------------
- **Duplicate sanitization**:
Both `codeToReview` and `language` are sanitized, but this might be redundant if the API layer already handles sanitization.
- **Unbounded file reading**:
`readFileContent` could cause memory issues with large files. Consider adding size limits.
4. Code Style & Best Practices
-------------------------------
- **Type safety violations**:
- Using type assertion (`as`) instead of proper type guards
- No Zod-like validation for input schema
- `unknown` type in handler parameters not properly narrowed
- **Error handling inconsistencies**:
- Mix of error return objects and exceptions
- `console.error` in production code (should use structured logging)
- **Template issues**:
- Hardcoded SYSTEM_PROMPT makes testing/mocking difficult
- Prompt construction could be vulnerable to injection if sanitizeInput isn't robust
5. Suggested Improvements
--------------------------
**Input Validation**:
```typescript
// Use type guard instead of assertion
function isCodeReviewArgs(args: unknown): args is CodeReviewArgs {
return /* detailed property checks */;
}
// Validate file_path format if applicable
if (typedArgs.file_path && !isValidPath(typedArgs.file_path)) {
return invalidPathError;
}
```
**Security Enhancements**:
- Implement generic error messages:
```typescript
return {
content: [{ type: 'text', text: 'File read error' }],
isError: true
};
```
- Add file size limits:
```typescript
const MAX_FILE_SIZE = 1024 * 1024; // 1MB
if (fileStats.size > MAX_FILE_SIZE) throw new Error('File too large');
```
**Performance Improvements**:
- Move rate limit check after input validation:
```typescript
export async function handler(args: unknown): Promise<ToolResponse> {
// Validate FIRST
if (!validateArgs(args)) return invalidArgsResponse;
// Then check rate limits
if (!checkRateLimit()) return rateLimitResponse;
```
**TypeScript Best Practices**:
1. Use Zod for schema validation:
```typescript
import { z } from 'zod';
const CodeReviewSchema = z.union([
z.object({
file_path: z.string().min(1),
language: z.string().min(1)
}),
z.object({
code_snippet: z.string().min(1),
language: z.string().min(1)
})
]);
```
2. Implement proper error typing:
```typescript
class CodeReviewError extends Error {
public readonly isUserSafe: boolean;
constructor(message: string, isUserSafe: boolean) {
super(message);
this.isUserSafe = isUserSafe;
}
}
```
**Architecture Suggestions**:
- Separate validation, business logic, and presentation layers
- Use dependency injection for file system access and API calls
- Implement circuit breakers for the Deepseek API integration
- Add telemetry for code review request metrics
**Additional Considerations**:
- Add timeout handling for file operations and API calls
- Implement content scanning for malicious code patterns in snippets
- Add cache control headers for API responses
- Consider streaming partial responses for large reviews
- Add content security policies for any HTML output

124
examples/design-critique.md Normal file
View File

@@ -0,0 +1,124 @@
# Design Critique Tool Example
## Request
```json
{
"design_document": "A dashboard interface for a cloud monitoring system with the following components:\n\n- Top navigation bar with company logo, user profile, and notifications\n- Left sidebar with collapsible menu items for different monitoring categories (CPU, Memory, Network, Storage)\n- Main content area showing:\n - Overview cards with key metrics\n - Real-time graphs showing system performance\n - Alert history table\n - Resource usage breakdown\n- Footer with system status and last update timestamp\n\nThe design uses a dark theme with accent colors for critical alerts and status indicators. Graphs update every 30 seconds and alerts appear as toast notifications.",
"design_type": "web UI"
}
```
## Response
```markdown
### Comprehensive Design Critique: Cloud Monitoring Dashboard
#### 1. **Overall Assessment**
The design presents a functional foundation for a monitoring dashboard with logical component organization. Its dark theme aligns with industry standards for operational tools, and the real-time elements address core user needs. However, the layout risks visual clutter in high-alert scenarios, and several accessibility/technical optimizations could enhance usability and scalability.
---
#### 2. **Strengths**
- **Intuitive Layout**: Industry-standard top nav + left sidebar structure reduces cognitive load.
- **Effective Alert Hierarchy**: Accent colors (e.g., red for critical alerts) create clear visual urgency.
- **Contextual Updates**: 30-second graph refresh rate balances data recency with server load.
- **Multi-Channel Notifications**: Toast + alert history provides dual awareness mechanisms.
- **System Transparency**: Footer timestamp builds trust in data freshness.
---
#### 3. **Areas for Improvement**
**Usability**
- No apparent customization options for dashboard widgets
- Overloaded main content area risks "dashboard fatigue"
- Missing emergency controls (e.g., manual alert suppression)
**Visual Design**
- Insufficient spacing between overview cards
- Undifferentiated graph line colors (accessibility risk)
- Status indicators lack text labels (icon-only)
**Technical Feasibility**
- 30-second polling may strain backend at scale
- No error states for failed graph updates
- Toast stacking behavior undefined
**Accessibility**
- Contrast ratios below WCAG 2.1 AA for secondary text
- No keyboard navigation for sidebar
- Missing ARIA labels for dynamic content
---
#### 4. **Specific Recommendations**
**A. Layout & Interaction (High Impact)**
1. Add **dashboard customization**:
- Drag-and-drop widget rearrangement
- Collapsible/expandable sections
- User-defined thresholds for alert colors
2. Implement **progressive disclosure**:
- Expandable graph tooltips with raw metrics
- "Show More" pagination for alert history
3. Introduce **emergency bar** at viewport top for SEV-1 alerts
**B. Visual Hierarchy (Medium Impact)**
1. Apply **8px grid system** to cards/graphs for consistent spacing
2. Use **semantic color scales**:
- 3 distinct hues for CPU/Memory/Network
- Pattern overlays for grayscale printing
3. Add **status badges** to overview cards (e.g., "12% ▲ from avg")
**C. Technical Optimization**
1. Replace polling with **WebSocket** for real-time updates
2. Implement **client-side throttling**:
- Pause updates when tab/window not active
- Adaptive refresh rates (15-60s) based on connectivity
3. Create **toast queue system** with:
- Auto-dismiss timers (5s info / 15s critical)
- Manual "pin" capability
- Batch grouping for similar alerts
**D. Accessibility Compliance**
1. Achieve **4.5:1 contrast ratio** for all text (tools: Stark/Colorable)
2. Add **keyboard shortcuts**:
- Ctrl+1-4 for sidebar categories
- Spacebar to pause auto-refresh
3. Provide **alternative data views**:
- Tabular graph data export
- Sonification toggle for alert patterns
---
#### 5. **Implementation Considerations**
**Frontend Architecture**
- Use **React/Vue component library** with:
- State management (Redux/Vuex) for metric data
- Recharts/Visx for customizable graphs
- ResizeObserver for responsive widgets
**Backend Integration**
- **GraphQL API** with:
- Field-level metrics subscription
- Query cost analysis to prevent overfetching
- Cached historical data (Redis/Memcached)
**Performance**
- **Web Workers** for:
- Client-side data aggregation
- Off-main-thread graph calculations
- **Compression** for time-series data (Protocol Buffers > JSON)
**Maintenance**
1. **Design Token System**:
- CSS Custom Properties for themes
- Versioned design docs (Storybook/Figma)
2. **Monitoring**:
- Perf metrics for dashboard load (LCP, INP)
- User interaction heatmaps (Hotjar/LogRocket)
3. **Alert Taxonomy**:
- Machine-readable schema (JSON-LD)
- Automated alert lifecycle management
---
### Final Notes
This design shows strong potential but requires refinement in scalability and inclusive design. Prioritize implementing the WebSocket transition and accessibility fixes in Phase 1, followed by customization features. Conduct usability testing with SRE personas to validate alert prioritization workflows. Consider adopting the OpenTelemetry standard for future-proof metric collection.

View File

@@ -0,0 +1,32 @@
# Second Opinion Tool Example
## Request
```json
{
"user_request": "Build a secure authentication system with OAuth2 and JWT tokens"
}
```
## Response
```markdown
**Critical Considerations for Building a Secure OAuth2/JWT Authentication System:**
- **Flow Selection Complexity:** Choosing the correct OAuth2 flow (e.g., Authorization Code + PKCE for SPAs/mobile, Client Credentials for machine-to-machine) is critical to avoid security vulnerabilities like token leakage or misuse.
- **JWT Security Hardening:** Risks of unsigned/misconfigured tokens (e.g., using `none` algorithm), insecure secret storage for signing keys, and exposure of sensitive data in claims require strict validation, asymmetric signing (RS256), and payload minimization.
- **Token Lifetime & Refresh Strategy:** Short-lived JWTs with secure refresh token rotation (server-side tracking, binding to client/IP) to mitigate replay attacks and long-term exposure risks.
- **Revocation Challenges:** Stateless JWTs lack built-in invalidation—implement hybrid checks (e.g., short expiration, distributed deny lists, or token introspection endpoints) for compromised tokens.
- **Key Management:** Secure rotation of cryptographic keys (HS256/RS256) without downtime, avoiding hardcoded secrets, and using HSMs/KMS solutions for enterprise-grade systems.
- **Client-Side Storage Risks:** Mitigate XSS/CSRF vulnerabilities via HTTP-only cookies for web apps, secure mobile keystores, and avoiding local storage for tokens.
- **Compliance Overheads:** GDPR, CCPA, or industry-specific regulations (HIPAA, PCI-DSS) may mandate audit logging, consent management, or encryption-at-rest for user data.
- **Federated Identity Risks:** If integrating third-party IdPs (Google, Facebook), validate issuer claims, enforce email verification, and plan for IdP outage scenarios.
- **Scale & Performance:** Stateless JWTs enable horizontal scaling but risk oversized tokens; monitor header size impacts on network performance and API latency.
- **Observability Gaps:** Centralized logging for token issuance/errors, anomaly detection (e.g., spike in failed auth attempts), and monitoring token misuse patterns.
- **Dependency Risks:** Audit third-party OAuth libraries (e.g., `passport-oauth2`, `jsonwebtoken`) for vulnerabilities and maintain update cycles to address CVEs.
- **Edge Case Blast Radius:** Plan for token leakage (auto-revocation triggers), brute-force protection on auth endpoints, and OAuth2 error handling to avoid information leaks.
- **Build vs. Buy Tradeoffs:** Evaluate cost/effort of self-hosted solutions (Keycloak) vs. managed services (Auth0, AWS Cognito), especially for teams lacking security expertise.
**Advanced Value Adds:**
- OAuth2 dynamic client registration
- OpenID Connect integration for user profile data
- Step-up authentication with MFA hooks
- Mutual TLS for client authentication in high-risk scenarios

View File

@@ -0,0 +1,125 @@
# Writing Feedback Tool Example
## Request
```json
{
"text": "# Rate Limiting Implementation Guide\n\nThis document describes the implementation of rate limiting in our API service. Rate limiting prevents abuse and ensures fair resource allocation.\n\nRate limits are enforced using a token bucket algorithm. Each client gets a bucket that fills at a fixed rate. When the bucket is empty, requests are rejected.\n\nTo implement rate limiting:\n1. Configure bucket size and refill rate\n2. Store bucket state in Redis\n3. Check token availability before processing requests\n4. Update bucket state after processing\n\nExample:\n```js\nconst rateLimit = {\n bucketSize: 100,\n refillRate: 10 // tokens per second\n}\n```\n\nWhen rate limit is exceeded, return 429 Too Many Requests with retry-after header.",
"writing_type": "documentation"
}
```
## Response
```markdown
Here's my comprehensive feedback on your Rate Limiting Implementation Guide:
**1. Overall Assessment**
The document provides a functional foundation but lacks depth required for production-grade implementation guidance. While core concepts are present, critical technical details and API documentation standards are underdeveloped.
**2. Strengths**
✓ Clear purpose statement in opening paragraph
✓ Logical progression from concept to implementation steps
✓ Appropriate use of code fencing for example
✓ Correct identification of HTTP 429 response
✓ Concise explanation of token bucket algorithm basics
**3. Areas for Improvement**
• Insufficient technical details for distributed systems implementation
• Missing error handling and edge case considerations
• No versioning or maintenance guidance
• Lack of complete API response examples
• Omission of monitoring and troubleshooting recommendations
**4. Specific Recommendations**
**Clarity & Coherence**
→ Add visual diagram of token bucket mechanism
→ Clarify time window interpretation ("10 per second" vs sliding window)
→ Define "client" precisely (IP? API key? User session?)
→ Explain Redis storage schema and expiration policies
**Grammar & Style**
→ Use active voice: "Each client receives" instead of "Each client gets"
→ Maintain parallel structure in implementation steps:
- Current: "Configure... Store... Check... Update"
- Improved: "Configure... Implement... Validate... Maintain"
→ Fix markdown escaping in code block (extra backslashes)
**Structure & Organization**
Add these sections:
1. Prerequisites
2. Error Handling Specifications
3. Monitoring Metrics (e.g., X-RateLimit headers)
4. Version History
5. Common Configuration Patterns
6. Troubleshooting Checklist
**Content & Accuracy**
Critical technical additions needed:
• Distributed system coordination (Redis transactions/Lua scripting)
• Burst handling and warmup strategies
• Concurrent request race conditions
• Redis persistence and backup considerations
• Client identification methodology
• Grace period implementation for near-limit cases
**5. Key Action Items**
**Priority Updates**
1. Add complete API response example:
```js
// 429 Response Example
{
"error": {
"code": "rate_limit_exceeded",
"message": "Maximum 100 requests per minute",
"retry_after": 5.2 // seconds
}
}
Headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1689876543
Retry-After: 5.2
```
2. Expand code example with Redis operations:
```js
// Token consumption logic
async function checkRateLimit(clientId) {
const now = Date.now();
const key = `rate_limit:${clientId}`;
const result = await redis.multi()
.hgetall(key)
.hmset(key, {
tokens: Math.min(
(current.tokens || bucketSize) +
Math.floor((now - (current.lastRefill || now)) / 1000 * refillRate),
bucketSize
),
lastRefill: now
})
.expire(key, 3600)
.exec();
if(result.tokens < 1) throw new RateLimitError();
}
```
3. Add troubleshooting table:
| Symptom | Potential Cause | Verification Method |
|-----------------------|--------------------------|---------------------------|
| Consistent 429 errors | Misconfigured bucket size | Check Redis config stores |
| Redis connection fails| Network ACL restrictions | Test telnet to Redis port |
**Recommended Additions**
• Versioning: "Rate limit configurations are versioned with API releases. Always specify limit parameters in your API contract using semantic versioning."
• Monitoring: "Track these metrics:
- Rejection rate by client segment
- 90th percentile token consumption
- Redis latency percentiles"
• Maintenance: "Review rate limits quarterly using historical traffic analysis. Implement canary deployments for limit changes."
**Summary**
The guide effectively introduces core concepts but requires deeper technical specification and operational documentation. Focus on adding implementation details for distributed systems, comprehensive error handling examples, and maintenance procedures to make this production-ready. Maintain the current concise style while expanding technical completeness.