What I write about

Monday, 6 April 2026

Design Failures Caused by a Fundamental Misunderstanding of What MCP Actually Is (And Why It Breaks in Production)

Most MCP implementation failures are design failures. And they come from a simple but critical misunderstanding of what MCP actually is. I see this repeatedly across teams—developers, architects, even experienced engineering orgs.

MCP gets treated like:

  • a better API layer
  • a structured version of function calling
  • or just another integration pattern

On the surface, this seems to work.

  • Demos look clean
  • Test cases pass
  • Early results feel promising

But none of that reflects real usage.

Then the system hits production.

And the cracks start showing:

  • the wrong tools get selected
  • behavior becomes inconsistent across similar queries
  • latency increases without clear reason
  • debugging becomes guesswork

At this point, the blame usually shifts to the model:

“The LLM is unreliable.”

It isn’t.

What’s actually happening is more fundamental.

MCP is being introduced into systems that are still designed as if execution is deterministic.

But MCP changes that completely.

It introduces a decision-making layer into your architecture.

Which means:

  • execution is no longer fully controlled by code
  • flows are not fixed—they are interpreted at runtime
  • correctness depends on how well you define tools, schemas, and boundaries

This is the shift most teams underestimate.

Because of that, they carry forward design habits from traditional systems:

  • overlapping responsibilities
  • loosely defined interfaces
  • implicit assumptions about flow

In a deterministic system, these habits might still hold.

In an MCP system, it creates ambiguity.

And ambiguity at design time becomes:

  • incorrect tool selection
  • inconsistent execution paths
  • silent failures that look “correct”

These issues rarely show up in controlled testing.

They surface under:

  • real user variability
  • scale
  • and unpredictable inputs

Which is why MCP systems often appear to “break” only in production.

Not because MCP is flawed—

but because the system around it was never designed for how MCP actually works.

That’s also why the same questions keep coming up across teams.

Not as isolated problems, but as symptoms of the same root cause.

So instead of addressing them one by one in conversations, I’ve put together the most common misconceptions, questions, and failure patterns I see—and how to approach them correctly in real systems.


MCP Deep-Dive FAQ

1) When would I need MCP instead of just using APIs directly?

APIs are designed to execute predefined operations. When you use APIs directly, you are responsible for deciding when to call them, how to sequence them, and how to combine their results.

MCP introduces a decision-making layer on top of this. Instead of hardcoding the flow, the LLM interprets the user’s intent and decides which tool (and therefore which capability) should be used.

If your workflow is fixed, predictable, and does not depend on interpretation, APIs are sufficient. However, if your workflow needs to adapt dynamically to user input, MCP provides flexibility and reduces hardcoded logic.

2) Is MCP just function calling, or does it solve a broader problem?

No. Function calling is only a mechanism that allows a model to invoke a tool.

MCP is a broader architectural pattern. It defines how tools are:

  • described (using schemas)
  • exposed (via MCP servers)
  • selected and orchestrated (via MCP clients and the LLM)

In other words, function calling is one implementation detail, while MCP is about structuring how reasoning connects to execution across a system.
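To make the "described via schemas" part concrete, here is a minimal sketch of what a tool definition looks like conceptually. The tool name, description text, and field names are illustrative, not taken from any specific MCP SDK:

```python
# An illustrative tool definition: a name, a description the LLM matches
# against, and a JSON-Schema-style input contract. Simplified for clarity;
# real MCP servers register tools through an SDK rather than a plain dict.
get_invoice_tool = {
    "name": "get_invoice",
    "description": (
        "Use this when the user asks for a specific invoice by ID. "
        "Do not use for general billing questions."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Invoice identifier (illustrative)",
            },
        },
        "required": ["invoice_id"],
    },
}
```

The description doubles as a decision rule for the model, which is exactly the point made later about tool selection.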

3) How should I think about the number of MCP clients and servers in a real system?

In most real-world systems, you should begin with a single MCP client and a single MCP server.

The MCP client is responsible for interacting with the LLM and orchestrating tool usage. The MCP server exposes a set of tools that the LLM can access.

You should only introduce multiple MCP servers when there is a clear separation of concerns, such as:

  • different business domains (e.g., pricing vs analytics)
  • separate teams owning different toolsets
  • security or scaling requirements

Multiple MCP clients are typically only needed in advanced setups such as multi-agent systems or separate applications with independent reasoning flows.

4) How do APIs, microservices, and MCP fit together in a real architecture?

These components operate at different layers and should not be confused as alternatives.

MCP is responsible for deciding what action should be taken next based on user intent.

Tools act as the interface layer. They receive structured input from the LLM and translate it into executable operations.

APIs are used for simple, well-defined operations such as fetching data or triggering a specific action.

Microservices handle more complex responsibilities, including business logic, data processing, and scalable backend operations.

In short, MCP determines the action, tools translate that decision, and APIs or microservices perform the actual work.

5) How does the system decide which tool or capability to use?

The LLM does not have true understanding of tools; it performs pattern matching using the signals you provide.

It relies on three primary inputs:

  • the tool description (what the tool claims to do and when to use it)
  • the input schema (how to call it and what arguments look like)
  • the user’s query (intent expressed in natural language)

At runtime, the model compares the user’s query with available tool descriptions and selects the tool whose description best matches the intent.

Common failure modes:

  • vague or generic descriptions → wrong tool selected
  • overlapping descriptions → inconsistent selection
  • missing usage cues ("when to use") → tool is ignored

Best practices:

  • write descriptions as decision rules (e.g., “Use this when you need X with Y constraints”)
  • include examples of when to use vs not use
  • keep names and descriptions unambiguous and domain-specific

6) Why do we need schemas for tools, and what problem do they solve?

Schemas define a strict contract between the LLM and tools. They specify exactly what inputs are valid and what outputs will be returned.

Without schemas:

  • the LLM may generate malformed or incomplete inputs
  • tools may return free-form outputs that cannot be consumed by subsequent steps
  • multi-step workflows break silently due to shape mismatches

With schemas:

  • inputs are validated before execution (type, required fields)
  • outputs are predictable and machine-readable
  • tools can be chained reliably across steps

Practical guidance:

  • define explicit required fields and types
  • keep outputs minimal and structured (avoid prose)
  • maintain consistency across tools to enable composition
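The "inputs are validated before execution" step above can be sketched as a small checker. This is a minimal illustration of the idea; a production system would use a full JSON Schema validator rather than this hand-rolled version:

```python
def validate_input(schema: dict, args: dict) -> list:
    """Check LLM-generated arguments against a tool's input schema.

    Returns a list of human-readable problems; an empty list means valid.
    A sketch only: real systems should use a proper JSON Schema validator.
    """
    type_map = {
        "string": str, "number": (int, float), "integer": int,
        "boolean": bool, "array": list, "object": dict,
    }
    problems = []
    # Required fields must be present.
    for field in schema.get("required", []):
        if field not in args:
            problems.append("missing required field: " + field)
    # Present fields must be declared and have the declared type.
    for field, value in args.items():
        spec = schema.get("properties", {}).get(field)
        if spec is None:
            problems.append("unexpected field: " + field)
        elif not isinstance(value, type_map[spec["type"]]):
            problems.append("wrong type for " + field)
    return problems
```

Rejecting malformed arguments at this boundary turns a silent downstream failure into an explicit, debuggable one.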

7) What happens if multiple tools can solve the same problem?

When two tools can solve similar problems, the LLM faces ambiguity during selection.

Symptoms:

  • different tools chosen for the same query across runs
  • unstable behavior when prompts or context change slightly

Root cause:

  • overlapping responsibility or similar descriptions

Fixes:

  • enforce single-responsibility per tool
  • differentiate descriptions with clear boundaries (inputs, constraints, outcomes)
  • remove or merge redundant tools

8) Should the system be allowed to use tools freely without restrictions?

No. Fully autonomous tool access often leads to inefficient or incorrect behavior.

Risks:

  • unnecessary tool calls (cost and latency)
  • wrong tool selection due to ambiguity

Control strategies:

  • limit the set of tools exposed for a given context or task
  • add routing hints (e.g., “for billing queries, only these tools are allowed”)
  • enforce max tool calls per request

Goal: Provide guided autonomy so the model can choose, but within well-defined boundaries.
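The "max tool calls per request" guardrail can be enforced with a small per-request budget object. This is an illustrative sketch; the class and function names are invented for the example:

```python
class ToolBudget:
    """Per-request cap on tool calls (illustrative guardrail)."""

    def __init__(self, max_calls: int):
        self.max_calls = max_calls
        self.calls = 0

    def allow(self) -> bool:
        """Return True and count the call if budget remains, else False."""
        if self.calls >= self.max_calls:
            return False
        self.calls += 1
        return True


def run_request(planned_calls, budget: ToolBudget):
    """Execute planned tool calls until the budget is exhausted."""
    executed = []
    for tool_name in planned_calls:
        if not budget.allow():
            break  # stop and answer with what we have, instead of looping
        executed.append(tool_name)
    return executed
```

The same budget object also doubles as a stopping condition for iterative workflows, discussed later.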

9) Why does the system sometimes avoid using tools even when they exist?

By default, the LLM prefers to answer directly if it believes it can.

Common reasons for not calling tools:

  • the tool does not clearly outperform direct reasoning
  • the description does not signal when it should be used
  • the query appears solvable without external data

How to fix:

  • ensure the tool provides capabilities the LLM cannot reliably do (real-time data, deterministic checks, actions)
  • make the description explicit about triggers ("Use this when…")
  • optionally bias toward tool use via instructions or constraints

10) How do I know if a tool is actually needed or just adding complexity?

A tool is unnecessary if it does not add unique capability beyond the LLM.

If the LLM alone can produce equivalent results, the tool only adds overhead (latency, cost, complexity).

Valid tool use cases:

  • accessing external or real-time data
  • enforcing deterministic logic (validation, scoring)
  • performing actions (sending emails, writing to DB)

Heuristic: if LLM(prompt) alone produces roughly the same result as Tool + LLM, the tool is redundant.

11) Should tools be allowed to call other tools internally?

Technically possible, but discouraged.

Problems introduced:

  • hidden execution paths (hard to trace end-to-end)
  • compounded failures across nested calls
  • difficult observability and debugging

Recommended pattern:

  • keep tools atomic (single responsibility)
  • let the MCP client orchestrate sequences explicitly

This keeps control centralized and behavior observable.
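The recommended pattern can be illustrated with two atomic tools chained by the orchestrator rather than by each other. The tool bodies below are stand-ins; in a real system each would call an API or microservice:

```python
def fetch_order(order_id: str) -> dict:
    """Atomic tool: retrieve one order (stand-in for a real API call)."""
    return {"order_id": order_id, "status": "shipped"}


def format_summary(order: dict) -> str:
    """Atomic tool: turn a structured order into a user-facing line."""
    return "Order {} is {}".format(order["order_id"], order["status"])


def handle_request(order_id: str) -> str:
    """The client orchestrates the sequence explicitly, so every step
    is visible, loggable, and testable in isolation."""
    order = fetch_order(order_id)      # step 1, observable
    return format_summary(order)       # step 2, observable
```

If fetch_order called format_summary internally, the second step would be invisible to tracing and impossible to reuse on its own.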

12) Is it a good idea for tools to use LLMs internally?

Yes, but only when the problem is inherently non-deterministic.

Good uses:

  • parsing or summarizing unstructured text
  • extracting signals from logs or documents

Avoid using an LLM for:

  • calculations, validation, rule checks
  • simple data retrieval or formatting

Guideline: Use an LLM inside tools only when deterministic rules cannot reliably solve the task.

13) Where should system state be managed in an MCP architecture?

State should live outside tools, in dedicated systems such as databases or memory layers.

Tools should be stateless:

  • accept input
  • read/write external state as needed
  • return output

Why this matters:

  • stateless tools are easier to scale and test
  • avoids hidden coupling and side effects
  • enables reuse outside MCP contexts
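A stateless tool in this sense takes its state store as an explicit input. In the sketch below a plain dict stands in for a database or memory layer; the function name is illustrative:

```python
def increment_counter(store: dict, key: str) -> dict:
    """A stateless tool: accept input, read/write external state, return
    structured output. The dict stands in for a database; the tool itself
    keeps nothing between calls."""
    store[key] = store.get(key, 0) + 1
    return {"key": key, "value": store[key]}
```

Because the tool holds no hidden state, two instances behind a load balancer behave identically, and tests can inject a fresh store every time.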

14) Why do MCP-based systems sometimes feel slower?

MCP introduces multi-step execution for a single user request.

A request may involve:

  • an LLM decision step
  • one or more tool calls
  • additional reasoning steps between calls

Each step adds network and processing overhead, and chaining multiplies the delay.

Mitigations:

  • reduce the number of tool calls per request
  • avoid deep chains unless necessary
  • cache frequent results
  • design tools to return complete, minimal outputs

15) What is the most challenging part of designing an MCP system?

Tool design is the most challenging and most impactful aspect.

You must get right:

  • clear, narrow responsibility (no overlap)
  • precise input schema (what’s required, types)
  • consistent, minimal output schema (machine-first)

Consequences of poor design:

  • incorrect tool selection
  • brittle multi-step flows
  • hard-to-debug behavior

Reality: Most MCP issues stem from tool design, not the LLM itself.

16) Why do MCP systems often become more complex than expected?

Complexity in MCP systems usually comes from over-engineering too early rather than real necessity.

Common causes include:

  • introducing too many tools before understanding real usage patterns
  • splitting MCP servers or domains prematurely
  • trying to model every possible capability upfront

This leads to:

  • higher cognitive load for the LLM when selecting tools
  • more ambiguity and overlap between tools
  • harder debugging and slower iteration

A better approach is incremental:

  • start with a minimal set of high-quality tools
  • observe how they are used in real scenarios
  • split or expand only when clear bottlenecks or boundaries emerge

The goal is not architectural purity, but operational clarity.

17) What should we track or monitor to understand how the system is behaving?

Effective debugging in MCP requires visibility into both decisions and execution.

At a minimum, you should log:

  • which tool was selected
  • the exact input passed to the tool
  • the output returned by the tool

For production systems, also include:

  • number of tool calls per request
  • failures and retries
  • timestamps for each step

This allows you to answer critical questions such as:

  • Why was this tool chosen?
  • Where did the system deviate from expectation?
  • Which part of the flow is slow or failing?

Without proper logging, MCP systems behave like black boxes, making debugging guesswork.
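The minimum logging described above can be as simple as one structured record per tool call. A minimal sketch; production systems would ship these records to a real logging or tracing backend instead of an in-memory list:

```python
import time


def log_tool_call(log: list, tool: str, tool_input: dict, tool_output) -> None:
    """Append one structured record per tool call: which tool, the exact
    input, the output, and a timestamp for latency analysis."""
    log.append({
        "tool": tool,
        "input": tool_input,
        "output": tool_output,
        "timestamp": time.time(),
    })
```

With records shaped like this, "why was this tool chosen?" and "which step is slow?" become queries over the log rather than guesswork.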

18) What is the most common hidden failure in MCP systems that is hard to detect?

The most dangerous failure mode is silent misrouting.

In this case, the system appears to work correctly, but it:

  • selects the wrong tool
  • produces suboptimal results
  • still returns a plausible answer

Because there is no obvious error, these issues often go unnoticed and accumulate over time.

This can degrade system quality significantly without triggering alerts.

To detect this, you need:

  • evaluation metrics focused on tool selection
  • monitoring of tool usage patterns
  • periodic audits of decision quality

19) How should we evaluate whether an MCP system is actually working well?

Evaluation must go beyond checking whether the final answer is correct.

You should assess:

  • whether the correct tool was selected
  • how many tools were used to reach the answer
  • latency per request
  • failure and retry rates
  • cost per query

For example, two systems may produce the same answer, but one may require multiple unnecessary tool calls, making it inefficient and expensive.

A strong evaluation framework focuses on both correctness and efficiency.

20) Does MCP replace microservices or work alongside them?

No. MCP and microservices serve fundamentally different roles.

Microservices are responsible for executing business logic, handling data, and scaling backend operations.

MCP sits above this layer and focuses on deciding:

  • which capability to use
  • when to use it
  • how to sequence multiple capabilities

In practice, MCP orchestrates calls to tools, which in turn interact with APIs or microservices.

This separation ensures that business logic remains reusable and independent of the AI layer.

21) Why won’t most SaaS platforms directly expose themselves as MCP servers?

SaaS providers have strong incentives to maintain control over how their systems are accessed and used.

Key reasons include:

  • security and data protection
  • pricing and rate limiting control
  • ownership of user experience

Exposing full MCP interfaces would reduce this control.

As a result, most SaaS platforms will continue to provide APIs, and MCP systems will act as a layer that integrates and orchestrates those APIs.

22) What is the most common misunderstanding about what MCP actually does?

A common misconception is that MCP makes AI systems more intelligent.

In reality, MCP does not improve the reasoning capability of the LLM.

Instead, it improves how that reasoning is applied by:

  • structuring interactions with external systems
  • enforcing consistency through schemas
  • enabling controlled execution of actions

This leads to systems that are more reliable and predictable, even if the underlying intelligence remains the same.

23) What characteristics define a well-designed MCP system?

A strong MCP system is defined by clarity, not complexity.

Key characteristics include:

  • clearly defined tools with a single responsibility
  • strict and consistent schemas for all tools
  • minimal overlap between tool capabilities
  • predictable and structured outputs

Additionally, strong systems exhibit:

  • controlled tool usage (not excessive or random)
  • good observability for debugging
  • efficient execution with minimal unnecessary steps

The focus should always be on making the system easy for both the LLM and engineers to understand and reason about.

24) What does deploying an MCP system look like in a real-world setup?

Deploying an MCP system is less about introducing new infrastructure and more about placing each component correctly in your existing stack.

In a typical deployment:

  • the MCP client lives inside your application backend (or agent layer) and interacts with the LLM
  • the MCP server runs as a service that exposes tools over a standard interface
  • tools internally call APIs or microservices that contain the actual business logic

From an infrastructure perspective:

  • MCP servers can be deployed like any backend service (container, serverless, etc.)
  • tools should be stateless so they scale easily
  • backend services remain unchanged and reusable

Key deployment considerations:

  • keep MCP server lightweight (no heavy logic inside)
  • secure tool access (authentication, rate limiting)
  • monitor tool usage and latency
  • ensure network reliability between client, server, and services

Common mistake: Trying to redesign your entire backend for MCP. In reality, MCP should sit on top of your existing APIs and microservices, not replace them.

Production-Level Realities

25) Why do MCP systems behave differently in production compared to testing?

In testing environments, inputs are usually clean, limited, and predictable. You are effectively validating the system against a narrow slice of reality.

In production, the system encounters:

  • a wide variety of user phrasing
  • incomplete or ambiguous inputs
  • edge cases you did not anticipate

Because tool selection is probabilistic, small differences in wording or context can lead the LLM to choose a different tool or skip tools entirely. This creates behavior that appears inconsistent even though the system is “working as designed.”

To mitigate this, you should:

  • test with real-world queries, not curated examples
  • tighten tool descriptions and schemas
  • add constraints or routing hints where necessary

26) Why does the system sometimes overuse a particular tool?

A tool becomes overused when it is either too broadly defined or consistently produces acceptable outputs. The model then treats it as a “safe default” and prefers it over more specific tools.

This typically happens when:

  • the tool description is generic
  • multiple tools overlap in responsibility
  • one tool rarely fails compared to others

Over time, this leads to degraded system quality because the LLM stops exploring better alternatives.

To fix this, you should:

  • narrow the scope of the overused tool
  • make competing tools more clearly differentiated
  • explicitly describe when each tool should be used

27) Why is it difficult to predict the cost of running an MCP system?

In MCP systems, cost is not tied to a single operation. Instead, it depends on how many steps are taken to resolve a query.

A single user request may involve:

  • one or more LLM calls
  • multiple tool invocations
  • retries if something fails or is unclear

Because the number of steps varies per query, cost becomes difficult to predict.

To control this, you should:

  • limit the maximum number of tool calls per request
  • monitor cost at a per-query level
  • reduce unnecessary chaining of tools

28) Why do tools work individually but fail when used together?

Tools are often designed and tested in isolation, where they behave correctly. Problems arise when they are composed into multi-step workflows.

Failures occur because:

  • the output format of one tool does not match the expected input of another
  • assumptions made by one tool are not valid for the next
  • minor inconsistencies compound across steps

This is not a failure of individual tools, but of integration.

To prevent this:

  • enforce consistent schemas across all tools
  • validate outputs before passing them to the next step
  • design tools with composability in mind
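The "validate outputs before passing them on" step can be a one-line boundary check between steps. Illustrative only; the function name is invented for this sketch:

```python
def check_output(expected_fields: set, output: dict) -> dict:
    """Validate one tool's output before it becomes the next tool's input.

    Fails loudly at the step boundary instead of letting a shape mismatch
    propagate silently through a multi-step workflow.
    """
    missing = expected_fields - set(output.keys())
    if missing:
        raise ValueError("tool output missing fields: " + str(sorted(missing)))
    return output
```

Placing this check between every pair of chained tools converts "works alone, fails together" into an error that names the exact broken contract.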

29) Why do small changes in prompts affect tool usage so much?

Prompts directly influence how the LLM interprets user intent and decides which tool to call.

Even small changes in wording can:

  • shift the perceived meaning of a query
  • alter which tool is selected
  • change how arguments are constructed

Because of this, prompts should be treated as part of your system logic, not as informal text.

Best practices include:

  • versioning prompts
  • testing prompt changes before deployment
  • monitoring their impact on tool usage

30) Why can the same query produce different results each time?

LLM-based systems are inherently non-deterministic. The same input can produce slightly different internal reasoning paths.

In MCP systems, this variability is amplified because:

  • different tool selection paths may be taken
  • intermediate outputs may differ
  • timing and context may influence decisions

To reduce variability:

  • lower randomness in model settings where possible
  • constrain tool selection
  • ensure tools produce consistent outputs

31) Why do some tools rarely or never get used?

A tool may be ignored if:

  • its purpose is unclear from its description
  • it overlaps with a more general or dominant tool
  • it does not appear necessary to solve common queries

The LLM will naturally prefer tools that are easier to match or more frequently successful.

To address this:

  • simplify and clarify tool descriptions
  • remove redundant tools
  • ensure each tool has a clear and distinct role

32) Why does adding more tools sometimes reduce system performance?

Adding more tools increases the decision complexity for the LLM.

With more options available:

  • ambiguity increases
  • overlap becomes more likely
  • selection accuracy decreases

Instead of improving capability, excessive tools often degrade performance.

The better approach is to:

  • keep the tool set minimal
  • ensure each tool has a well-defined purpose
  • expand only when there is a clear gap

33) Why does system performance degrade over time?

MCP systems are sensitive to change. Over time, multiple factors can introduce drift, including:

  • updates to prompts
  • modifications to tools
  • changes in input patterns

These changes accumulate and can gradually reduce system performance without obvious failure.

To manage this, you should:

  • implement continuous evaluation
  • track key metrics over time
  • run regression tests after changes

34) Why can retrying a request sometimes make results worse?

Retries do not guarantee the same execution path.

When a request is retried:

  • the LLM may choose a different tool
  • intermediate steps may change
  • outputs may become inconsistent

This can lead to worse results instead of improvements.

To handle retries effectively:

  • control retry behavior explicitly
  • limit the number of retries
  • use fallback strategies instead of blind retries
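An explicit, bounded retry policy with a fallback might look like the sketch below. The function names are illustrative:

```python
def call_with_fallback(primary, fallback, max_retries: int = 1):
    """Try the primary callable at most 1 + max_retries times, then fall
    back. A bounded, explicit policy instead of blind re-runs; a real
    system would also log each attempt and catch narrower exceptions."""
    for _ in range(1 + max_retries):
        try:
            return primary()
        except Exception:
            continue
    return fallback()
```

Because the retry count and the fallback are stated in code, the execution path on failure is predictable rather than re-decided by the model each time.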

35) Why is observability essential in MCP systems?

MCP systems involve multiple layers of decision-making and execution, many of which are not visible by default.

Without observability, you cannot:

  • understand why a tool was selected
  • trace where a failure occurred
  • identify inefficiencies

A production system should log:

  • tool selection decisions
  • inputs and outputs
  • execution paths

This visibility is essential for debugging and continuous improvement.

36) Why do MCP systems need guardrails?

LLMs can behave unpredictably when given full freedom.

Without guardrails, the system may:

  • call tools unnecessarily
  • misuse tools
  • generate invalid inputs

Guardrails help enforce boundaries by:

  • restricting which tools can be used
  • validating inputs before execution
  • limiting usage patterns

They ensure the system remains safe, efficient, and aligned with intended behavior.

37) Why are continuous evaluation pipelines important?

In MCP systems, correctness is not binary. A response may be technically correct but achieved inefficiently or through the wrong path.

Evaluation pipelines allow you to:

  • track tool selection accuracy
  • measure consistency across queries
  • detect degradation over time

Without continuous evaluation, issues accumulate silently and become harder to fix.

38) Why is caching especially important in MCP systems?

Many queries in real systems repeat similar patterns.

Without caching, the system repeatedly:

  • calls the LLM
  • invokes tools
  • recomputes results

This increases both latency and cost.

Caching allows you to reuse previous results, improving performance and reducing resource usage.
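A minimal cache keyed on a normalized form of the query can capture much of this benefit. A sketch under simple assumptions; real systems would also handle expiry and invalidation:

```python
def cached_lookup(cache: dict, query: str, compute):
    """Reuse earlier results for repeated queries.

    Normalizing case and whitespace lets trivially different phrasings
    share one cache entry; the `compute` callable stands in for the
    expensive LLM-plus-tools path.
    """
    key = " ".join(query.lower().split())
    if key not in cache:
        cache[key] = compute(query)
    return cache[key]
```

Even this crude normalization means "Weather in Paris" and "weather in paris" trigger one computation instead of two.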

39) Why do we need stopping conditions in MCP workflows?

Because MCP systems can involve iterative decision-making, there is a risk of excessive or even infinite tool usage.

Without stopping conditions, the system may:

  • continue calling tools unnecessarily
  • enter inefficient loops

Stopping conditions enforce limits such as:

  • maximum number of tool calls
  • termination rules based on outcomes

40) Why is evolving schemas over time challenging?

Schemas define the contract between the LLM and tools. Changing them affects both how tools are called and how outputs are interpreted.

If schemas are modified without care:

  • existing integrations may break
  • the LLM may generate incorrect inputs

To manage schema evolution:

  • version schemas
  • maintain backward compatibility
  • test changes thoroughly

41) Why is it difficult to decide the right level of tool granularity?

Choosing the right level of granularity for tools is challenging.

If tools are too broad:

  • their purpose becomes unclear
  • selection accuracy decreases

If tools are too narrow:

  • the number of tools increases
  • orchestration becomes complex

The goal is to design tools with a single clear responsibility that is neither too vague nor too fragmented.

42) Why are fallback strategies critical in production systems?

In production systems, failures are inevitable. Tools may fail, APIs may be unavailable, and routing may be incorrect.

Fallback strategies ensure that the system can still provide a response when something goes wrong.

Examples include:

  • using an alternative tool
  • returning partial results
  • defaulting to LLM-only responses

43) Why is testing MCP systems more complex than traditional systems?

Testing MCP systems is more complex than testing traditional systems because you are not only testing code, but also behavior.

You must evaluate:

  • how the LLM makes decisions
  • how tools are orchestrated
  • how components interact under different conditions

This requires system-level testing rather than isolated unit tests.

Final Insight

MCP is not about increasing intelligence. It is about structuring how intelligence interacts with systems in a reliable, observable, and controllable way.

Thursday, 12 March 2026

The Reality of Building AI Systems Today

In today’s AI ecosystem, many capabilities that once required deep machine learning expertise have become widely accessible. Powerful APIs, pre-trained models, and developer platforms allow engineers to build sophisticated prototypes in very little time. As a result, it has become increasingly important to distinguish between what looks impressive and what actually requires substantial engineering skill.

Understanding this distinction is essential for anyone working with modern AI systems.

What Looks Impressive but Is Now Relatively Easy

Many demonstrations that appear technically advanced are largely integrations of existing tools rather than deeply engineered systems.

Calling AI APIs

Today, developers can access powerful language, vision, and multimodal models with only a few lines of code. A typical workflow might involve:

  • Capturing input (text, image, audio, or video)
  • Sending it to an AI API
  • Receiving a structured or descriptive response
  • Displaying or processing the result

The heavy lifting—perception, reasoning, and pattern recognition—is handled by the model provider. The surrounding application often acts primarily as a thin integration layer around these services.

Prompt Engineering

Crafting prompts that produce structured, detailed, or highly contextual outputs can appear sophisticated. In practice, prompt design is often an iterative process of experimentation and refinement.

For example, instructing a model to:

  • Describe actions occurring in a scene
  • Extract entities from a document
  • Summarize key ideas from a conversation

can produce highly convincing results. However, most of the intelligence resides within the model itself rather than in the surrounding system logic.

Rapid Prototyping

Combining multiple capabilities—such as multimodal input, model reasoning, and conversational interfaces—can quickly produce demonstrations that appear complex.

A prototype might integrate:

  • Live data input
  • A large AI model
  • A conversational interface
  • A simple decision rule

Such systems can look remarkably advanced, but the complexity often lies within the underlying models rather than in the application architecture.

The Real Challenges in Modern AI Systems

The most difficult problems today are usually not about building models. Instead, they involve designing reliable systems around those models.

These challenges remain difficult even for experienced engineering teams.

Architecture Design

Modern AI systems typically involve multiple interconnected components. A robust architecture often includes layers such as:

  • Data ingestion
  • Event processing
  • State management
  • Reasoning or decision logic
  • Storage systems
  • Monitoring and alerting
  • Analytics and reporting

Designing how these components interact reliably under real-world conditions is one of the most important skills in AI engineering.

Poor architectural choices can lead to systems that are fragile, expensive to operate, or difficult to scale.

Event Pipelines

Real-world environments produce continuous streams of data. Transforming these streams into meaningful signals is a core engineering challenge.

A common pipeline might involve:

data stream
→ filtering or sampling
→ signal detection
→ classification or analysis
→ event generation

Designing event pipelines requires careful consideration of latency, accuracy, noise, and system stability.

Small design errors can lead to systems that generate excessive false signals or miss critical information entirely.
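The pipeline sketched above can be expressed as a short loop. The thresholds, field names, and event shape here are illustrative stand-ins, not a real detection system:

```python
def run_pipeline(stream, threshold=0.8):
    """Filter a raw stream, detect a signal, and emit events.

    Each stage mirrors one arrow in the pipeline: filtering/sampling,
    signal detection, and event generation. Classification is folded
    into the detection step for brevity.
    """
    events = []
    for reading in stream:
        if reading.get("quality", 0) < 0.5:   # filtering / sampling
            continue
        if reading["value"] >= threshold:     # signal detection
            events.append({"type": "spike", "value": reading["value"]})
    return events
```

Even in this toy form, the failure modes mentioned above are visible: set the quality cutoff too low and noise floods through; set the threshold too high and real signals are missed.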

Data Flow and System Efficiency

AI systems often process large volumes of data. Efficient system design requires deciding what data should be processed, when, and where.

Instead of processing everything, systems typically include filtering stages such as:

detect relevant activity
↓
capture relevant data
↓
analyze only selected inputs

Optimizing these flows is essential for controlling cost, latency, and system performance.

Temporal Reasoning

Most models analyze individual inputs in isolation. Real-world understanding, however, often requires reasoning across sequences of events.

For example, a system might need to interpret a sequence such as:

event A occurs
event B follows
event C occurs later

The meaning of the sequence may depend on the relationship between these events over time.

Designing systems that can maintain context and interpret temporal patterns is significantly more challenging than analyzing isolated inputs.

Reliability and Error Handling

AI models are probabilistic systems and can produce incorrect outputs. Production systems must therefore account for uncertainty.

Robust systems often include mechanisms such as:

  • Confidence thresholds
  • Multiple observations before triggering actions
  • Validation layers
  • Fallback logic

Balancing sensitivity with reliability is a non-trivial engineering problem.
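Two of these mechanisms, a confidence threshold plus multiple observations before acting, can be combined in a small debouncing helper. The 0.8 threshold and the requirement of three consecutive hits are illustrative choices:

```python
from collections import deque

# Debounce sketch: require several consecutive high-confidence detections
# before triggering an action. Threshold and count are illustrative.

class DebouncedTrigger:
    def __init__(self, threshold=0.8, required=3):
        self.threshold = threshold
        self.recent = deque(maxlen=required)

    def observe(self, confidence):
        """Record one model output; return True only when the last
        `required` observations all cleared the confidence threshold."""
        self.recent.append(confidence >= self.threshold)
        return len(self.recent) == self.recent.maxlen and all(self.recent)

trigger = DebouncedTrigger()
outputs = [0.9, 0.95, 0.4, 0.85, 0.9, 0.92]
fired = [trigger.observe(c) for c in outputs]
print(fired)  # fires only after three consecutive confident observations
```

A single noisy low-confidence output (the 0.4) resets the streak, trading a little sensitivity for far fewer false triggers.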

Tracking State Over Time

Many systems must track entities, objects, or conditions across time. This requires maintaining consistent state information even when data is incomplete or noisy.

Challenges include:

  • Maintaining identity across observations
  • Handling temporary loss of signal
  • Managing partial information
  • Reconciling conflicting signals

Reliable state tracking is critical for many real-world applications.
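A toy tracker illustrating two of these challenges: it keeps the last known state and tolerates a few missed observations before declaring the track lost. The miss limit and observation format are assumptions:

```python
# Toy state tracker: maintain an entity's last known position and survive
# temporary signal loss. The miss limit is an illustrative choice.

class Track:
    def __init__(self, entity_id, max_misses=3):
        self.entity_id = entity_id
        self.last_position = None
        self.misses = 0
        self.max_misses = max_misses

    def update(self, position):
        """Feed one frame's observation; `None` means no detection."""
        if position is None:
            self.misses += 1
        else:
            self.last_position = position
            self.misses = 0
        return self.misses <= self.max_misses  # False -> track is lost

track = Track("person-1")
observations = [(1, 1), (2, 1), None, None, (3, 2), None, None, None, None]
alive = [track.update(obs) for obs in observations]
print(alive)  # the track survives short gaps, then is dropped
```

The short two-frame gap is absorbed; only the final four-frame loss exceeds the limit and kills the track, while the last known position is retained.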

Pattern and Behavior Analysis

More advanced systems move beyond detecting individual events to identifying patterns across time.

This may involve:

  • Identifying recurring sequences
  • Detecting unusual activity
  • Analyzing long-term trends
  • Generating insights from historical data

This level of reasoning requires both data infrastructure and analytical logic beyond simple model inference.

Why Hardware Is Often Not the Limiting Factor

Many assume that powerful GPUs are the primary requirement for building advanced AI systems. In practice, the hardest problems often occur outside the model itself.

Architecture design, data pipelines, and system orchestration are primarily software engineering challenges.

Because large models are typically available through cloud services, the limiting factor is rarely raw compute power. The real challenge lies in designing systems that use those capabilities effectively.

Where Experience Really Shows

Experienced engineers tend to focus on aspects that are rarely visible in demonstrations:

  • System reliability
  • Fault tolerance
  • Data consistency
  • Cost efficiency
  • Monitoring and observability
  • Operational maintenance

Prototypes often perform well under controlled conditions, but production systems must handle unpredictable inputs, failures, and edge cases.

Building systems that remain stable under real-world conditions is what distinguishes experimentation from professional engineering.

The Key Insight

Modern AI development has shifted significantly.

The difficult part is no longer building powerful models.

Instead, the challenge lies in designing systems that can interpret, coordinate, and act on the outputs those models produce.

In other words: the intelligence of modern AI systems increasingly comes from the architecture surrounding the model, not the model itself.

Friday, 6 March 2026

The Mental Model for Agentic AI Frameworks


Why People Get Confused — and How to Think About Them Clearly

The explosion of “agentic AI frameworks” has created a lot of confusion. Names like LangChain, LangGraph, AutoGen, CrewAI, LlamaIndex, and Semantic Kernel are often presented as if they compete with each other. Beginners naturally ask: Which one should I choose?

That question is actually the wrong starting point.

The truth is that most of these tools operate at different layers of an AI system, which means they are often used together rather than instead of each other. Once you see this layering clearly, the confusion disappears.

The Core Mental Model

Every modern AI system that goes beyond a simple chatbot usually contains three conceptual layers.

1. Intelligence Layer — the model itself

This is the raw LLM:

  • OpenAI
  • Anthropic
  • Groq
  • Azure OpenAI

These provide intelligence but nothing else. They generate text. They do not manage workflows, memory, or tools.

2. Capability Layer — giving the model tools and knowledge

This layer equips the model with the ability to interact with the world.

Typical capabilities include:

  • Tool calling (APIs, databases, search)
  • Retrieval from documents (RAG)
  • Memory and context management

Frameworks operating here include:

  • LangChain – connects LLMs to tools and pipelines
  • LlamaIndex – specializes in knowledge indexing and retrieval
  • Semantic Kernel – organizes reusable AI “skills” and planners

A helpful analogy is to think of this layer as giving the AI hands and a library.

3. Orchestration Layer — coordinating complex behavior

Once systems grow beyond one step, coordination becomes the real challenge. This layer manages:

  • task ordering
  • multi-agent collaboration
  • retries and error handling
  • workflow branching

Frameworks here include:

  • LangGraph – graph-based workflow orchestration
  • CrewAI – role-based AI teams
  • AutoGen – agents communicating through conversation

This layer acts like management inside an AI organization.

A Simple Way to Remember the Ecosystem

  • LangChain – Connector between AI and tools
  • LlamaIndex – Librarian managing knowledge
  • Semantic Kernel – Planner organizing tasks
  • CrewAI – Company with defined employee roles
  • AutoGen – Group chat where agents collaborate
  • LangGraph – Workflow engine controlling processes

Why Multiple Frameworks Often Appear in the Same System

Many beginners assume you must choose only one framework. In reality, serious systems often combine several.

For example, a production AI workflow might look like this:

  • LlamaIndex retrieves relevant documents
  • LangChain calls tools and APIs
  • LangGraph orchestrates the overall workflow

Each framework solves a different problem.

Trying to force one framework to do everything usually leads to unnecessary complexity.

Where Most People Get Confused

1. Confusing capability frameworks with orchestration frameworks

LangChain and LlamaIndex primarily provide capabilities. LangGraph, CrewAI, and AutoGen primarily provide coordination.

They solve different problems.

2. Thinking agent frameworks are interchangeable

They are not.

  • Some focus on structured workflows
  • Others focus on collaborative agents
  • Others focus on knowledge retrieval

3. Over-engineering too early

Many beginners jump immediately into complex multi-agent architectures.

In practice, most successful systems start with a simple pipeline and only introduce orchestration when necessary.

A Practical Decision Guide

  • Simple RAG chatbot → LangChain or LlamaIndex
  • Knowledge-heavy assistant → LlamaIndex
  • Structured workflows → LangGraph
  • Role-based AI teams → CrewAI
  • Agents collaborating via conversation → AutoGen
  • Microsoft enterprise copilots → Semantic Kernel

Control vs Flexibility

Another useful mental model is the spectrum of structure.

From least structured to most controlled:

AutoGen → CrewAI → LangChain → Semantic Kernel → LangGraph

More control usually means:

  • easier debugging
  • predictable behavior
  • production readiness

Less control usually means:

  • more experimentation
  • emergent behavior
  • faster prototyping

The Most Practical Advice

  • Start simple. Build a working pipeline before designing multi-agent systems.
  • Choose frameworks based on architecture layers.
  • Do not over-index on agents.
  • Treat orchestration as an engineering problem, not a prompt problem.

A Final Rule of Thumb

When evaluating an AI system architecture, ask three questions:

  1. What model provides intelligence?
  2. What framework gives the model tools and knowledge?
  3. What component orchestrates the workflow?

Once you can answer these clearly, the agentic AI ecosystem stops looking chaotic and starts looking like a structured stack.

And that clarity is the real advantage.

Wednesday, 4 March 2026

The Technological Ascent: From Data to Wisdom


For most of human history, we have misunderstood progress.

We framed it as machines becoming smarter, when in reality progress has always been about humans being freed from lower layers of thinking.

What looks like an AI revolution is actually the final stretch of a very long ascent—one that began over ten thousand years ago.

This is the story of how technology systematically lifted humans from data to wisdom, layer by layer, exactly as it was always meant to.


The Core Thesis

Technology does not replace humans from the top.
It replaces humans from the bottom.

Every major technological shift removes human effort from a lower cognitive layer and pushes us upward. What remains—after automation has done its work—is not intelligence, but judgment.

That is where humans belong.


The Six Layers of the Ascent

1. Data (≈10,000 BCE – 1900s)

Humans as recorders

At the base lies raw data: facts without meaning.

  • Crop yields
  • Inventory counts
  • Births, deaths, taxes
  • Weather observations

For millennia, humans acted as living storage systems. We wrote, copied, preserved, and remembered because there was no alternative.

Data had:

  • No context
  • No interpretation
  • No abstraction

This was not a failure of intelligence. It was a failure of tooling.


2. Computation (1900s – 1970s)

Machines learn to calculate, not understand

The early 20th century introduced a critical but often misunderstood layer: computation.

  • Mechanical calculators
  • Mainframes
  • Punch cards
  • Batch processing
  • Fixed programs

Machines could now:

  • Perform arithmetic flawlessly
  • Repeat instructions endlessly
  • Process records faster than humans

But they could not:

  • Understand meaning
  • Adapt questions
  • Interpret results

This era automated math, not semantics.

Humans were still responsible for understanding what the outputs meant.


3. Information (1980s – 2000s)

Machines organize meaning

With personal computers, relational databases, and the internet, a fundamental shift occurred.

Data became structured.

  • Schemas
  • Queries
  • Dashboards
  • Reports
  • KPIs

Machines now organized data into information.

You could ask new questions without rewriting programs. Meaning became explicit.

This is where most organizations still live today—surrounded by dashboards, mistaking visibility for insight.


4. Knowledge (2000s – 2020s)

Machines discover patterns

Machine learning and analytics moved us into the knowledge layer.

Machines learned to:

  • Detect patterns
  • Identify correlations
  • Predict outcomes
  • Optimize decisions

Knowledge stopped being handcrafted. It became computed.

At this point, humans ceased to be the best pattern recognizers in the room. That role belongs to machines now—and permanently.

The human bottleneck shifted from knowing facts to deciding what to do with them.


5. Action (2022 – Present)

Machines execute decisions

This is the agentic era.

AI systems now:

  • Take actions
  • Use tools
  • Operate in closed loops
  • Learn from outcomes
  • Execute within constraints

This is not intelligence inflation—it is execution automation.

Humans are exiting the loop not because they are obsolete, but because execution is no longer the right layer for them.


6. Wisdom (Emerging / Future)

The irreducible human layer

Wisdom is not faster thinking.
It is not better prediction.
It is not more data.

Wisdom is:

  • Choosing what matters
  • Defining goals
  • Balancing trade-offs
  • Setting ethical boundaries
  • Taking responsibility for consequences
  • Knowing when not to act

No dataset tells you:

  • What is acceptable risk
  • What kind of future you want
  • When efficiency becomes harm

This layer has never been automatable—not because it is complex, but because it is normative.

Technology ends here.


The Pattern Is Unmistakable

For each layer, who used to do it → who does it now:

  • Data collection: Humans → Sensors & logs
  • Computation: Humans → Machines
  • Information processing: Humans → Software
  • Knowledge discovery: Humans → ML systems
  • Action execution: Humans → AI agents
  • Wisdom: Humans → Still humans

Why This Feels Uncomfortable

Many people resist this framing because their identity lives between layers.

  • Knowledge workers fear losing relevance
  • Managers confuse control with wisdom
  • Organizations reward activity over judgment

But wisdom is not comfortable.

It demands accountability.

There are fewer tasks, but the consequences are larger.


The Final Insight

Progress is not machines becoming human.
Progress is humans being freed to become wise.

We didn’t lose purpose.

We outsourced the noise.

And for the first time in history, that leaves us face to face with the layer that was always ours.

Saturday, 19 July 2025

A deep technical breakdown of how ChatGPT works

How ChatGPT Works – A Deep Technical Dive

🌟 INTRODUCTION: The Magic Behind the Curtain

Have you ever asked ChatGPT something — like “Summarize this news article” or “Explain AI like I’m 10” — and wondered how this is even possible? Let’s walk through how ChatGPT actually works.


🧠 PART 1: ChatGPT Is a Probability Machine

ChatGPT doesn’t understand language like humans. It generates text by predicting what comes next — one token at a time.

Example:

You type: “The Eiffel Tower is in”

  • Paris → 85%
  • France → 10%
  • Europe → 4%
  • a movie → 1%

With greedy decoding, the highest-probability token wins, so the model outputs “Paris.” In practice, ChatGPT samples from this distribution (shaped by settings such as temperature), so lower-probability tokens are occasionally chosen. Generation then continues token by token. This is called auto-regressive generation.
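The greedy pick in this example amounts to an argmax over a probability table. The numbers below are the illustrative ones from the example, not real model outputs:

```python
# Greedy next-token selection over toy probabilities (illustrative values,
# not real model logits).

next_token_probs = {"Paris": 0.85, "France": 0.10, "Europe": 0.04, "a movie": 0.01}

def greedy_pick(probs):
    """Greedy decoding: take the single most probable token."""
    return max(probs, key=probs.get)

print(greedy_pick(next_token_probs))  # -> Paris
```

Real deployments usually sample from the distribution instead of always taking the argmax, which is why the same prompt can produce different completions.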


🔡 PART 2: What’s a Token?

Tokens are chunks of text — not full words or characters.

  • “ChatGPT is amazing” → ["Chat", "GPT", " is", " amazing"]

GPT processes and generates text one token at a time within a fixed context window.

  • GPT-3.5 → ~4,096 tokens
  • GPT-4 → ~8k–32k tokens
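A toy greedy longest-match tokenizer over a tiny hand-made vocabulary reproduces the split above. Real GPT tokenizers use byte-pair encoding with a learned vocabulary of tens of thousands of entries; this sketch only shows the "longest known chunk wins" idea:

```python
# Toy longest-match tokenizer. The vocabulary is hand-made to mirror the
# example split; real BPE vocabularies are learned from data.

VOCAB = {"Chat", "GPT", " is", " amazing"}

def tokenize(text, vocab=VOCAB):
    tokens, i = [], 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("ChatGPT is amazing"))  # -> ['Chat', 'GPT', ' is', ' amazing']
```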

🧰 PART 3: What Powers It Underneath

ChatGPT is built on a Transformer — a deep neural network architecture introduced in 2017.

1. Embeddings

Tokens are converted into high-dimensional vectors that capture meaning. Similar words end up close together in vector space.
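"Close together in vector space" is usually measured with cosine similarity. The 3-dimensional vectors below are hand-made stand-ins; real embeddings have hundreds or thousands of learned dimensions:

```python
import math

# Cosine-similarity sketch with made-up 3-d "embeddings" (illustrative
# values; real model embeddings are learned, not hand-set).

embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "cat" is far more similar to "dog" than to "car".
print(cosine(embeddings["cat"], embeddings["dog"]) > cosine(embeddings["cat"], embeddings["car"]))  # True
```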

2. Self-Attention

Self-attention lets the model decide which previous tokens matter most for the current prediction.

“The cat that chased the mouse was fast” → “was” refers to “cat”
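At its core this is scaled dot-product attention: score each key against the query, softmax the scores into weights, and blend the value vectors. A tiny pure-Python sketch with hand-made 2-d vectors (real models do this over learned projections, in parallel across many heads):

```python
import math

# Scaled dot-product attention for one query over a few keys/values.
# Vectors are tiny and hand-made purely for illustration.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Output is the weight-blended mix of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]       # the first key aligns with the query
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, keys, values)
print(out)  # weighted toward the first value vector
```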

3. Feed-Forward Layers

These layers refine meaning after attention using non-linear transformations.

4. Residuals + Layer Normalization

These stabilize training and allow very deep networks to work reliably.


⚙️ PART 4: How It Was Trained
  1. Pre-training — learns language by predicting the next token
  2. Supervised Fine-Tuning — trained on human-written examples
  3. RLHF — optimized using human feedback and PPO

⚠️ PART 5: Where It Goes Wrong
  • Hallucinations
  • Stale knowledge
  • Context window limits
  • Bias inherited from data

🎓 CONCLUSION: It’s Just Math — But Really Good Math

ChatGPT is a probability engine trained on massive data and refined by human feedback. It doesn’t think — but it predicts extremely well.

Sunday, 1 June 2025

Value Proposition vs Positioning Statement

🧭 Value Proposition vs. Positioning Statement: What’s the Difference (and How to Write Both)

If you’ve ever struggled to explain what your company does or why anyone should care, you’re not alone. Two of the most important tools for defining your brand are:

  • The Value Proposition
  • The Positioning Statement

They’re often confused, but each serves a different purpose — both externally for customers and internally for teams.

🎯 What’s the Difference?

Value Proposition:

  • Purpose: Convince customers to choose you
  • Audience: External (customers, clients)
  • Focus: Benefits, problems solved, uniqueness
  • Length: Short (1–2 sentences)
  • Usage: Websites, ads, product pages
  • Core message: “Why choose us?”

Positioning Statement:

  • Purpose: Align internal teams on brand strategy
  • Audience: Internal (employees, partners)
  • Focus: Market, audience, problem, differentiator
  • Length: Longer but focused
  • Usage: Brand decks, internal strategy
  • Core message: “How we’re positioned and who we serve”

✅ Positioning Statement Template

Use this to define your place in the market — especially useful for brand workshops and internal alignment.

[Company Name] helps [Target Customer] [Verb] [Positive Outcome] through [Unique Solution] so they can [Transformation] instead of [Villain / Roadblock / Negative Outcome].

🧪 Example: Airtable

Airtable helps fast-moving teams organize work efficiently through a flexible, no-code database so they can launch projects faster instead of juggling spreadsheets and tools.

✅ Value Proposition Template

Use this when you need a customer-facing hook — simple, clear, and direct.

We help [Target Customer] solve [Problem] by [Key Benefit / Solution], so they can [Achieve Desired Outcome].

🧪 Example: Grammarly

We help professionals and students improve their writing by offering real-time grammar and clarity suggestions, so they can communicate confidently.

📄 Copy-Paste Templates

Positioning Statement

[Company Name] helps [Target Customer] [Verb] [Positive Outcome] through [Unique Solution] so they can [Transformation] instead of [Villain / Roadblock / Negative Outcome].

Value Proposition

We help [Target Customer] solve [Problem] by [Key Benefit / Solution], so they can [Achieve Desired Outcome].

🧠 TL;DR

  • Value Proposition → Why customers choose you
  • Positioning Statement → How your team frames you
  • Both are essential — one sells, one guides

✍️ Want to Fill These Out Easily?

Want a ready-made Google Doc, Notion page, or Miro board version of these templates?

Leave a comment or drop a message — we’ll share it with you.

Thursday, 15 May 2025

Intelligent Proctoring System Using OpenCV, Mediapipe, Dlib & Speech Recognition

ProctorAI: Intelligent Proctoring System Using OpenCV, Mediapipe, Dlib & Speech Recognition

ProctorAI is a real-time AI-based proctoring solution that uses computer vision and audio analysis to detect suspicious activities during exams or assessments.

👉 View GitHub Repository

๐Ÿ” Key Features

  • Face detection and tracking using Mediapipe and Dlib
  • Eye and pupil movement monitoring for head and gaze tracking
  • Audio detection for identifying background conversation
  • Multi-screen detection via active window tracking
  • Real-time alert overlays on camera feed
  • Interactive quit button on the camera feed

⚙️ How It Works

  1. Webcam feed is captured using OpenCV
  2. Face and eye landmarks detected using Mediapipe
  3. Dlib tracks pupil movement from eye regions
  4. System checks head movement, gaze, and face presence
  5. Running applications scanned using PyGetWindow
  6. Background audio analyzed using SpeechRecognition
  7. Alerts displayed in real time on suspicious activity
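As a flavour of step 4, here is a hypothetical helper that flags a suspicious head jump between frames. This is not code from the ProctorAI repository; the 40-pixel threshold and the function name are invented for illustration:

```python
# Hypothetical head-movement check (illustrative only, not from the repo):
# flag a frame if the detected face center jumped too far since the last frame.

def head_movement_alert(prev_center, curr_center, max_shift=40):
    """Return True if the face center moved more than `max_shift` pixels."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return (dx * dx + dy * dy) ** 0.5 > max_shift

print(head_movement_alert((320, 240), (330, 245)))  # small shift -> False
print(head_movement_alert((320, 240), (420, 240)))  # large jump  -> True
```

In the real system a check like this would run per frame against the Mediapipe landmarks, with the threshold tuned to the camera resolution.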

🧠 Tech Stack

  • OpenCV – Video capture and rendering
  • Mediapipe – Face and landmark detection
  • Dlib – Pupil detection and geometry
  • SpeechRecognition – Audio analysis
  • PyGetWindow – Application window tracking
  • Threading – Parallel detection modules

🚨 Alerts Triggered By

  • Missing face (student leaves or covers webcam)
  • Sudden or excessive head movement
  • Unusual pupil movement
  • Multiple open windows
  • Background voice detection

📦 Installation

git clone https://github.com/anirbanduttaRM/ProctorAI
cd ProctorAI
pip install -r requirements.txt

Download shape_predictor_68_face_landmarks.dat from dlib.net and place it in the root directory.

▶️ Running the App

python main.py

🖼️ Screenshots

🎥 Demo Video

📌 Future Improvements

  • Face recognition for identity verification
  • Web-based remote monitoring
  • Data logging and analytics
  • Improved NLP for audio context

๐Ÿค Contributing

Pull requests are welcome. For major changes, open an issue first.

📄 License

Licensed under the MIT License — see the LICENSE file.


Made with ❤️ by Anirban Dutta
