~/rushijoshi

AI as a monkey and a monk

ai · software engineering · workflow

Monkey Mode vs Monk Mode: Mastering AI-Assisted Development

Lately, a recurring theme in conversations with my friends has been the evolving role of AI in our daily lives and work. A conversation yesterday made me realize that most engineers I talk to either treat AI like a magic wand ("just make it work!") or avoid it entirely ("real programmers don't need help!"). I think both approaches miss the point.

Here's the thing: after months of experimenting with Cursor, Claude Code, and more models than I care to admit, I've discovered that the secret isn't in the tool — it's in knowing when to be the teacher and when to be the student. So here are my 2 cents on how I use AI tools.

I've settled on two distinct modes of AI collaboration that have transformed my workflow:

  • Monkey Mode: AI as the naive junior developer I once was
  • Monk Mode: AI as my wise senior mentor

The key insight? These modes require completely different mindsets, contexts, and expectations. Mix them up, and you'll end up with either over-engineered hello-world apps or rubber-stamp code reviews that miss critical flaws.

1. Monkey Mode: Teaching the Eager Apprentice

In Monkey Mode, I treat AI like a brilliant but inexperienced junior developer. We've all been there — the new engineer who can implement anything you describe but needs everything spelled out.

Remember your first code review where a senior engineer wrote "needs better error handling" and you stared at the screen wondering what that even meant? That's exactly how AI feels when you give it vague instructions.

Instead of:

Build a user authentication system

I write:

Implement a JWT-based authentication system with the following requirements:
- Express.js middleware for token validation
- 15-minute access tokens, 7-day refresh tokens
- Rate limiting: 5 failed attempts = 15-minute lockout
- Password requirements: 8+ chars, 1 uppercase, 1 number, 1 special
- Hash passwords with bcrypt, salt rounds = 12
- Return 401 for invalid tokens, 429 for rate limits
- Log all auth events to PostHog with user ID and IP
- Handle edge cases: expired tokens, malformed requests, missing headers

The difference? The first request gets you a basic login form. The second gets you a (nearly) production-ready authentication system that won't make your security team cry.
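
To make the rate-limiting line of that spec concrete, here is a minimal sketch of the lockout rule in TypeScript. The thresholds mirror the prompt (5 failures, 15-minute lockout); the in-memory map and all function names are illustrative, and a real deployment would back this with Redis so state survives restarts.

```typescript
// Illustrative lockout tracker: 5 failed attempts triggers a 15-minute lockout.
// In-memory only; a production version would keep this in Redis.
const MAX_ATTEMPTS = 5;
const LOCKOUT_MS = 15 * 60 * 1000;

interface AttemptRecord {
  failures: number;
  lockedUntil: number; // epoch ms; 0 means not locked
}

const attempts = new Map<string, AttemptRecord>();

function isLockedOut(userId: string, now: number = Date.now()): boolean {
  const rec = attempts.get(userId);
  return rec !== undefined && rec.lockedUntil > now;
}

function recordFailure(userId: string, now: number = Date.now()): void {
  const rec = attempts.get(userId) ?? { failures: 0, lockedUntil: 0 };
  rec.failures += 1;
  if (rec.failures >= MAX_ATTEMPTS) {
    rec.lockedUntil = now + LOCKOUT_MS;
    rec.failures = 0; // reset the counter once the lockout starts
  }
  attempts.set(userId, rec);
}

function recordSuccess(userId: string): void {
  attempts.delete(userId); // a successful login clears the slate
}
```

Precisely because the prompt pinned down the numbers, there is nothing for the AI to guess at here — which is the whole point of Monkey Mode.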

Context Files: Your Secret Weapon

Here's where most people go wrong: they start fresh conversations for every task, forcing AI to rediscover your coding standards, architecture decisions, and business logic every single time.

I maintain context files that auto-inject into my AI sessions. Something as simple as:

context/coding-standards.md

## Error Handling
- Always use Result<T, Error> pattern for operations that can fail
- Log errors with correlation IDs
- Never expose internal error details to clients
 
## Testing
- Unit tests for business logic (Jest)
- Integration tests for API endpoints (Supertest)
- Minimum 80% coverage, aim for 90%
 
## Performance
- Database queries must include EXPLAIN ANALYZE results
- API responses under 200ms for 95th percentile
- Use Redis for caching with 5-minute TTL default

context/architecture.md

## Current Stack
- Node.js 18+ with TypeScript
- PostgreSQL with Prisma ORM
- Redis for caching and sessions
- Docker for containerization
 
## Design Patterns
- Repository pattern for data access
- Application layer for business logic
- Service layer for HTTP handling
- Event-driven architecture for async operations using azure queues

This lets the AI know that it should use Redis, follow your error patterns, include proper tests, and maintain your architectural boundaries.
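
For instance, the `Result<T, Error>` line in that standards file tells the model to return failures as values instead of throwing. A minimal sketch of the pattern, with illustrative type and function names (not from any particular library):

```typescript
// A discriminated union: every fallible operation returns Ok or Err explicitly,
// so callers must handle the failure case to reach the value.
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

function ok<T>(value: T): Result<T, never> {
  return { ok: true, value };
}

function err<E>(error: E): Result<never, E> {
  return { ok: false, error };
}

// Example: parsing a port number without throwing.
function parsePort(input: string): Result<number, string> {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return err(`invalid port: ${input}`);
  }
  return ok(n);
}
```

With this in the context file, generated code comes back in the same shape instead of a mix of exceptions, nulls, and sentinel values.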

Session Hygiene

This is crucial: keep your Monkey Mode sessions focused and isolated. One session for authentication, another for payment processing, a third for notification systems.

Why? Because AI context windows are like human attention spans — they degrade with too much information. A session that starts with "implement user login" and evolves into "also add email templates and fix that weird CSS bug" will produce increasingly confused and inconsistent code.

2. Monk Mode: Learning from the Master

Monk Mode flips the script entirely. Here, AI becomes my senior colleague — the one who's seen every antipattern, survived every architectural disaster, and has strong opinions about why your "clever" solution will bite you in six months.

The Adversarial Code Review

Instead of asking AI to write code, I ask it to break my code:

Review this authentication middleware. Act as a security-focused senior 
engineer. What vulnerabilities do you see? What edge cases am I missing? 
How would you attack this system? Be ruthlessly critical.

The results are eye-opening. AI will spot timing attacks in your password comparison, race conditions in your token refresh logic, and memory leaks in your session handling that you might have completely missed.
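
The timing-attack point is concrete: comparing secrets with `===` short-circuits at the first mismatched byte, leaking information through response time. For tokens and API keys, Node's standard library ships a constant-time comparison (bcrypt's own compare already handles password hashes). A sketch of the safe pattern:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// timingSafeEqual requires equal-length buffers, so hash both sides first;
// the comparison then takes the same time no matter where the inputs differ.
function safeCompare(a: string, b: string): boolean {
  const ha = createHash("sha256").update(a).digest();
  const hb = createHash("sha256").update(b).digest();
  return timingSafeEqual(ha, hb);
}
```

This is exactly the kind of fix an adversarial review surfaces: the naive version works perfectly in every test you'd think to write.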

Architecture Validation

Before committing to major architectural decisions, I run them through the AI gauntlet:

I'm designing a microservices architecture for a fintech app. 
Here's my approach: [detailed design document]

Play devil's advocate. What are the failure modes? Where will this design 
break at scale? What am I not considering? Challenge every assumption.

AI excels at this because it's not emotionally invested in your clever solution. It will cheerfully point out that your event-driven architecture has no dead letter queues, your service mesh will become a debugging nightmare, and your "eventual consistency" strategy is actually "eventual chaos."

Design Pattern Deep Dives

When I'm stuck on a complex problem, I use AI as a design pattern consultant:

I need to handle complex business rules that change frequently. 
Users can have different pricing tiers, geographic restrictions, 
and time-based promotions. The rules interact in complex ways. 
What design patterns would you recommend? Walk me through the 
tradeoffs of each approach.

AI will suggest strategy patterns, rule engines, and policy objects, with implementation examples and a discussion of when each pattern shines or fails. Here AI acts as a teacher, and I can learn and dig deep into the topics I need, when I need them.
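
As an illustration of the strategy direction for that pricing problem, here is a hedged sketch: each rule is an object behind a uniform interface, so rules can be added, removed, or reordered without touching the calculator. All tier names, countries, and numbers are invented for the example.

```typescript
interface PricingContext {
  basePrice: number;
  tier: "free" | "pro" | "enterprise";
  country: string;
}

// Each strategy inspects the context and returns an adjusted price.
interface PricingRule {
  apply(price: number, ctx: PricingContext): number;
}

const tierDiscount: PricingRule = {
  apply: (price, ctx) => (ctx.tier === "enterprise" ? price * 0.8 : price),
};

const regionalTax: PricingRule = {
  apply: (price, ctx) => (ctx.country === "DE" ? price * 1.19 : price),
};

// The calculator folds the rules in order, so how rules interact is explicit:
// it is simply the order of the array.
function finalPrice(ctx: PricingContext, rules: PricingRule[]): number {
  return rules.reduce((price, rule) => rule.apply(price, ctx), ctx.basePrice);
}
```

The tradeoff the AI will walk you through: this keeps each rule trivially testable, but once rules need to veto or depend on each other, a rule engine starts earning its complexity.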

Context Separation is Critical

As in Monkey Mode, Monk Mode sessions should stay focused, and Monk Mode requires its own clean context. Don't pollute your architectural discussions with implementation details, and don't mix code reviews with feature requests.

I also maintain separate context files for Monk Mode, simply describing my NFRs: checklists of security, performance, and maintainability requirements. This ensures the necessary context is present with every prompt.

The Human Expertise Safety Net

Here's something crucial that might take you a few embarrassing production bugs to learn (don't ask how I know that): AI is only as good as your ability to spot when it's wrong.

Last month, AI confidently suggested using JSON.parse() without error handling in a critical piece of code. It even provided elegant justification about "trusting upstream data validation." All of my experience screamed "NOPE" — I've seen too many systems crash because someone assumed data would always be perfectly formatted.
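
The fix there is a few lines: wrap the parse and surface failure as a value instead of an exception. A minimal sketch (the return shape is illustrative):

```typescript
// Never let malformed upstream data crash the process: catch and report.
function safeJsonParse(
  raw: string
): { ok: true; data: unknown } | { ok: false; error: string } {
  try {
    return { ok: true, data: JSON.parse(raw) };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e.message : String(e) };
  }
}
```

Five lines of paranoia versus an unhandled exception in a hot path: the AI's "elegant" version loses that trade every time.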

This is where domain expertise becomes your superpower. AI might know every design pattern ever invented, but it doesn't know that:

  • Your payment processor occasionally sends malformed webhooks
  • Your database connection pool starts acting weird under high load
  • Your CDN has a quirky caching behavior that breaks your asset versioning

Know your tools, know your systems, and learn to spot mistakes.

The Hallucination Problem

AI will confidently tell you about JavaScript features that don't exist, recommend libraries that were deprecated years ago, or suggest architectural patterns that sound brilliant but violate fundamental principles of your tech stack.

I caught AI recommending Array.fromAsync() in production code — a method that was still working its way through the TC39 proposal process at the time. Another time, it suggested a specific Redis configuration that would have caused data loss in my setup.

The solution? Trust but verify. Every AI suggestion gets the same scrutiny I'd give to a junior developer's pull request:

  • Does this make sense?
  • Have I seen this pattern work before?
  • What could go wrong?

Staying Current in an AI World

Here's an uncomfortable truth: AI models are not all-knowing. They are trained on data with cutoff dates. The model I'm using might not know about the latest security vulnerabilities, framework updates, or best practices that emerged last month.

This means staying current is more important than ever, not less. I still:

  • Follow security advisories for my tech stack
  • Read release notes for major framework updates
  • Participate in engineering communities and discussions
  • Experiment with new tools and patterns myself

When AI suggests using an older version of a library or misses a recent security best practice, my up-to-date knowledge catches it. When AI doesn't know about the latest React features or the newest Angular performance improvements, I can guide it in the right direction.

The Two Kinds of Context Files

Both modes benefit enormously from well-maintained context files, but they serve different purposes:

  • Monkey Mode contexts provide implementation guidelines — how to write code that fits your system
  • Monk Mode contexts provide evaluation criteria — how to judge whether code is good enough

These are just markdown files that auto-inject two to three pages of context into every prompt, context that would otherwise take you ten minutes to type out by hand, all while ensuring consistency across all AI interactions.

My Results So Far

Here's what this dual-mode approach has done for my productivity:

Monkey Mode lets me implement features 3–4x faster than manually coding. Not because AI writes perfect code (it doesn't), but because it handles the tedious boilerplate while I focus on business logic and edge cases.

Monk Mode catches bugs and design flaws that would have taken weeks to surface in production. It's like having a senior engineer pair with you on every architectural decision.

The combination is powerful: I use Monkey Mode to rapidly prototype and implement, then immediately switch to Monk Mode to tear it apart and improve it. My domain expertise acts as the quality gate, ensuring that AI's suggestions actually make sense in the real world.

Monkey Mode amplifies your implementation speed. Monk Mode amplifies your architectural wisdom. Your domain expertise ensures everything actually works in production.

The Path Forward

While we're still in the early days of AI-assisted development, the patterns are becoming clear. These new tools will bring huge benefits if we learn to be effective AI collaborators — knowing when to lead, when to follow, and when to challenge.

Now if you'll excuse me, I need to go explain to my AI why "just make it faster" isn't a valid performance requirement, and why that "clever" caching strategy it suggested would actually create a memory leak. Some things never change (smh).