What I write about

Friday, 6 March 2026

The Mental Model for Agentic AI Frameworks

Why People Get Confused — and How to Think About Them Clearly

The explosion of “agentic AI frameworks” has created a lot of confusion. Names like LangChain, LangGraph, AutoGen, CrewAI, LlamaIndex, and Semantic Kernel are often presented as if they compete with each other. Beginners naturally ask: Which one should I choose?

That question is actually the wrong starting point.

The truth is that most of these tools operate at different layers of an AI system, which means they are often used together rather than instead of each other. Once you see this layering clearly, the confusion disappears.

The Core Mental Model

Every modern AI system that goes beyond a simple chatbot usually contains three conceptual layers.

1. Intelligence Layer — the model itself

This is the raw LLM:

  • OpenAI
  • Anthropic
  • Groq
  • Azure OpenAI

These provide intelligence but nothing else. They generate text. They do not manage workflows, memory, or tools.

2. Capability Layer — giving the model tools and knowledge

This layer equips the model with the ability to interact with the world.

Typical capabilities include:

  • Tool calling (APIs, databases, search)
  • Retrieval from documents (RAG)
  • Memory and context management

Frameworks operating here include:

  • LangChain – connects LLMs to tools and pipelines
  • LlamaIndex – specializes in knowledge indexing and retrieval
  • Semantic Kernel – organizes reusable AI “skills” and planners

A helpful analogy is to think of this layer as giving the AI hands and a library.
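To make the analogy concrete, here is a minimal sketch of a capability layer as a tool registry the model can select from. Every name here (Tool, ToolRegistry, the "search" tool) is illustrative, not the real API of LangChain, LlamaIndex, or Semantic Kernel:

```python
# Hypothetical sketch of a capability layer: tools the model can invoke.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str           # what the model sees when choosing a tool
    run: Callable[[str], str]  # the function that actually does the work

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def call(self, name: str, argument: str) -> str:
        # The model picks a tool by name; the registry executes it.
        return self._tools[name].run(argument)

registry = ToolRegistry()
registry.register(Tool("search", "Look up a fact", lambda q: f"results for {q!r}"))

print(registry.call("search", "LLM frameworks"))
```

Real frameworks add schema validation, async execution, and model-facing tool descriptions on top of this basic pattern, but the core idea is the same: the model decides *what* to call, the capability layer decides *how* it runs.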

3. Orchestration Layer — coordinating complex behavior

Once systems grow beyond one step, coordination becomes the real challenge. This layer manages:

  • task ordering
  • multi-agent collaboration
  • retries and error handling
  • workflow branching

Frameworks here include:

  • LangGraph – graph-based workflow orchestration
  • CrewAI – role-based AI teams
  • AutoGen – agents communicating through conversation

This layer acts like management inside an AI organization.
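The responsibilities listed above (ordering, branching, retries) can be sketched in a few lines of framework-agnostic Python. This is a hedged illustration of what an orchestration layer does, not how LangGraph, CrewAI, or AutoGen implement it; all function names are hypothetical:

```python
# Illustrative orchestration layer: explicit ordering, retries, branching.
def run_step(step, attempts: int = 3):
    """Run one step, retrying on failure: the 'retries and error
    handling' responsibility of the orchestration layer."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except RuntimeError:
            if attempt == attempts:
                raise  # exhausted retries; surface the error

def orchestrate(question: str) -> str:
    # Step 1 (task ordering): retrieval happens before answering.
    retrieved = run_step(lambda: f"docs about {question}")
    # Workflow branching: the next step depends on the intermediate result.
    if "docs" in retrieved:
        return run_step(lambda: f"answer grounded in [{retrieved}]")
    return run_step(lambda: "fallback answer")

print(orchestrate("agent frameworks"))
```

Orchestration frameworks formalize exactly these concerns as graphs (LangGraph), roles (CrewAI), or conversations (AutoGen), instead of hand-written control flow.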

A Simple Way to Remember the Ecosystem

  • LangChain – connector between AI and tools
  • LlamaIndex – librarian managing knowledge
  • Semantic Kernel – planner organizing tasks
  • CrewAI – company with defined employee roles
  • AutoGen – group chat where agents collaborate
  • LangGraph – workflow engine controlling processes

Why Multiple Frameworks Often Appear in the Same System

Many beginners assume you must choose only one framework. In reality, serious systems often combine several.

For example, a production AI workflow might look like this:

  • LlamaIndex retrieves relevant documents
  • LangChain calls tools and APIs
  • LangGraph orchestrates the overall workflow

Each framework solves a different problem.

Trying to force one framework to do everything usually leads to unnecessary complexity.
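The division of labor described above can be sketched as three swappable functions. These are hypothetical stand-ins: in a real system `retrieve` would be backed by LlamaIndex, `call_tool` by LangChain, and `workflow` by a LangGraph graph:

```python
# Hypothetical stand-ins for the three layers of a combined stack.
def retrieve(query: str) -> list[str]:
    # Capability layer (knowledge): e.g. LlamaIndex retrieval.
    return [f"doc matching {query!r}"]

def call_tool(name: str, payload: str) -> str:
    # Capability layer (tools): e.g. a LangChain tool call.
    return f"{name}({payload})"

def workflow(query: str) -> str:
    # Orchestration layer: e.g. LangGraph controlling the sequence.
    docs = retrieve(query)
    enriched = call_tool("summarize", "; ".join(docs))
    return f"final answer from {enriched}"

print(workflow("pricing data"))
```

Because each layer sits behind its own function boundary, any one framework can be replaced without rewriting the others, which is exactly why combining them is not unusual.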

Where Most People Get Confused

1. Confusing capability frameworks with orchestration frameworks

LangChain and LlamaIndex primarily provide capabilities. LangGraph, CrewAI, and AutoGen primarily provide coordination.

They solve different problems.

2. Thinking agent frameworks are interchangeable

They are not.

  • Some focus on structured workflows
  • Others focus on collaborative agents
  • Others focus on knowledge retrieval

3. Over-engineering too early

Many beginners jump immediately into complex multi-agent architectures.

In practice, most successful systems start with a simple pipeline and only introduce orchestration when necessary.

A Practical Decision Guide

  • Simple RAG chatbot → LangChain or LlamaIndex
  • Knowledge-heavy assistant → LlamaIndex
  • Structured workflows → LangGraph
  • Role-based AI teams → CrewAI
  • Agents collaborating via conversation → AutoGen
  • Microsoft enterprise copilots → Semantic Kernel

Control vs Flexibility

Another useful mental model is the spectrum of structure.

From least structured to most structured:

AutoGen → CrewAI → LangChain → Semantic Kernel → LangGraph

More control usually means:

  • easier debugging
  • predictable behavior
  • production readiness

Less control usually means:

  • more experimentation
  • emergent behavior
  • faster prototyping

The Most Practical Advice

  • Start simple. Build a working pipeline before designing multi-agent systems.
  • Choose frameworks based on architecture layers.
  • Do not over-index on agents.
  • Treat orchestration as an engineering problem, not a prompt problem.

A Final Rule of Thumb

When evaluating an AI system architecture, ask three questions:

  1. What model provides intelligence?
  2. What framework gives the model tools and knowledge?
  3. What component orchestrates the workflow?

Once you can answer these clearly, the agentic AI ecosystem stops looking chaotic and starts looking like a structured stack.

And that clarity is the real advantage.

Wednesday, 4 March 2026

The Technological Ascent: From Data to Wisdom


For most of human history, we have misunderstood progress.

We framed it as machines becoming smarter, when in reality progress has always been about humans being freed from lower layers of thinking.

What looks like an AI revolution is actually the final stretch of a very long ascent—one that began over ten thousand years ago.

This is the story of how technology systematically lifted humans from data to wisdom, layer by layer, exactly as it was always meant to.


The Core Thesis

Technology does not replace humans from the top.
It replaces humans from the bottom.

Every major technological shift removes human effort from a lower cognitive layer and pushes us upward. What remains—after automation has done its work—is not intelligence, but judgment.

That is where humans belong.


The Six Layers of the Ascent

1. Data (≈10,000 BCE – 1900s)

Humans as recorders

At the base lies raw data: facts without meaning.

  • Crop yields
  • Inventory counts
  • Births, deaths, taxes
  • Weather observations

For millennia, humans acted as living storage systems. We wrote, copied, preserved, and remembered because there was no alternative.

Data had:

  • No context
  • No interpretation
  • No abstraction

This was not a failure of intelligence. It was a failure of tooling.


2. Computation (1900s – 1970s)

Machines learn to calculate, not understand

The early 20th century introduced a critical but often misunderstood layer: computation.

  • Mechanical calculators
  • Mainframes
  • Punch cards
  • Batch processing
  • Fixed programs

Machines could now:

  • Perform arithmetic flawlessly
  • Repeat instructions endlessly
  • Process records faster than humans

But they could not:

  • Understand meaning
  • Adapt questions
  • Interpret results

This era automated math, not semantics.

Humans were still responsible for understanding what the outputs meant.


3. Information (1980s – 2000s)

Machines organize meaning

With personal computers, relational databases, and the internet, a fundamental shift occurred.

Data became structured.

  • Schemas
  • Queries
  • Dashboards
  • Reports
  • KPIs

Machines now organized data into information.

You could ask new questions without rewriting programs. Meaning became explicit.

This is where most organizations still live today—surrounded by dashboards, mistaking visibility for insight.


4. Knowledge (2000s – 2020s)

Machines discover patterns

Machine learning and analytics moved us into the knowledge layer.

Machines learned to:

  • Detect patterns
  • Identify correlations
  • Predict outcomes
  • Optimize decisions

Knowledge stopped being handcrafted. It became computed.

At this point, humans ceased to be the best pattern recognizers in the room. That role belongs to machines now—and permanently.

The human bottleneck shifted from knowing facts to deciding what to do with them.


5. Action (2022 – Present)

Machines execute decisions

This is the agentic era.

AI systems now:

  • Take actions
  • Use tools
  • Operate in closed loops
  • Learn from outcomes
  • Execute within constraints

This is not intelligence inflation—it is execution automation.

Humans are exiting the loop not because they are obsolete, but because execution is no longer the right layer for them.


6. Wisdom (Emerging / Future)

The irreducible human layer

Wisdom is not faster thinking.
It is not better prediction.
It is not more data.

Wisdom is:

  • Choosing what matters
  • Defining goals
  • Balancing trade-offs
  • Setting ethical boundaries
  • Taking responsibility for consequences
  • Knowing when not to act

No dataset tells you:

  • What is acceptable risk
  • What kind of future you want
  • When efficiency becomes harm

This layer has never been automatable—not because it is complex, but because it is normative.

Technology ends here.


The Pattern Is Unmistakable

  • Data collection: humans → sensors & logs
  • Computation: humans → machines
  • Information processing: humans → software
  • Knowledge discovery: humans → ML systems
  • Action execution: humans → AI agents
  • Wisdom: humans → still humans

Why This Feels Uncomfortable

Many people resist this framing because their identity lives between layers.

  • Knowledge workers fear losing relevance
  • Managers confuse control with wisdom
  • Organizations reward activity over judgment

But wisdom is not comfortable.

It demands accountability.

There are fewer tasks, but the consequences are larger.


The Final Insight

Progress is not machines becoming human.
Progress is humans being freed to become wise.

We didn’t lose purpose.

We outsourced the noise.

And for the first time in history, that leaves us face to face with the layer that was always ours.
