
Monitoring

Why observability is built into the control plane and how it enables production MCP operations

Observability as Infrastructure

deco CMS treats monitoring as infrastructure, not an afterthought. Every tool invocation that flows through the control plane creates a monitoring log—there’s no way to disable this, and that’s by design.

Whether it’s a simple GitHub query or a complex database operation, every MCP tool invocation creates a structured log entry with full request/response details, timing, attribution, and error information. This happens automatically—no configuration required.

Traditional MCP deployments have fragmented observability: each client logs independently (or doesn’t log at all), each MCP server has its own logging strategy, and there’s no unified view of what happened across the system. deco CMS solves this by logging at the control plane level, creating a single source of truth for all MCP traffic.

Why Monitoring Is First-Class

The Distributed System Problem

MCP deployments are inherently distributed systems:

  • Multiple clients: Cursor, Claude Desktop, custom agents
  • Multiple MCP servers: GitHub, Slack, databases, custom tools
  • Multiple operations: A single user workflow might invoke 10+ tools across 5 MCP servers

When something breaks, you need to reconstruct what happened across this distributed system. Without centralized monitoring, you’re correlating logs from different sources with different formats, timestamps, and levels of detail—if those logs even exist.

Control Plane Logging Solves This

By logging at the control plane (deco CMS), monitoring becomes:

Centralized: One place to see all tool invocations across all clients and MCP servers.

Consistent: Every log entry follows the same schema with the same level of detail.

Complete: No invocation can bypass monitoring—if it went through deco CMS, it’s logged.

Correlated: Request context connects related operations, enabling distributed tracing across multiple tool invocations.
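As a sketch of what correlation makes possible, suppose each log entry carries the ID of the request context that triggered it (the field and type names here are illustrative assumptions, not deco CMS’s actual schema). Grouping by that ID reconstructs a distributed trace from flat logs:

```typescript
// Illustrative sketch: group log entries by the request context that
// produced them, reconstructing which tools a single workflow invoked.
// "requestId" is an assumed field name for this example.
interface LogEntry {
  requestId: string;
  toolName: string;
}

function groupByRequest(logs: LogEntry[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const entry of logs) {
    const tools = groups.get(entry.requestId) ?? [];
    tools.push(entry.toolName);
    groups.set(entry.requestId, tools);
  }
  return groups;
}

const trace = groupByRequest([
  { requestId: "req-1", toolName: "search_code" },
  { requestId: "req-1", toolName: "create_pr" },
  { requestId: "req-2", toolName: "send_message" },
]);
console.log(trace.get("req-1")); // ["search_code", "create_pr"]
```

Because every entry shares one schema, this grouping works across all clients and MCP servers at once—something per-client logging cannot offer.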

What Gets Recorded

Every tool invocation creates a monitoring log with:

Identity & Attribution

  • Caller: Which user or API key made the request
  • Organization: Which tenant the request belongs to
  • Connection: Which upstream MCP server was invoked
  • Agent/Project: If accessed through a Virtual MCP, which one

Operation Details

  • Tool name: The specific tool that was invoked
  • Input arguments: What parameters were sent (normalized and sanitized)
  • Output results: What the tool returned
  • Timestamp: When the invocation occurred

Performance & Status

  • Duration: How long the operation took (milliseconds)
  • Status: Success or failure
  • Error details: If failed, what went wrong
  • HTTP status: The underlying HTTP response code (if applicable)
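To make the schema above concrete, here is one way a log entry could be modeled—field names and types are assumptions for this sketch, not deco CMS’s actual wire format:

```typescript
// Illustrative shape of a monitoring log entry. All field names are
// assumptions mirroring the categories described above.
interface MonitoringLog {
  // Identity & attribution
  caller: string;          // user or API key that made the request
  organization: string;    // tenant the request belongs to
  connection: string;      // upstream MCP server invoked
  virtualMcp?: string;     // Virtual MCP, if the call went through one
  // Operation details
  toolName: string;
  input: Record<string, unknown>; // normalized and sanitized arguments
  output?: unknown;
  timestamp: string;       // ISO 8601
  // Performance & status
  durationMs: number;
  status: "success" | "failure";
  error?: string;
  httpStatus?: number;
}

const example: MonitoringLog = {
  caller: "api-key:ci-bot",
  organization: "acme",
  connection: "github",
  toolName: "search_issues",
  input: { query: "is:open label:bug" },
  output: { count: 12 },
  timestamp: "2024-06-01T12:00:00Z",
  durationMs: 184,
  status: "success",
  httpStatus: 200,
};
console.log(example.status); // "success"
```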

This comprehensive logging enables multiple use cases beyond debugging.

Use Cases Enabled by Monitoring

Debugging Production Failures

When a tool invocation fails, monitoring logs provide:

  • Exact inputs: What arguments were passed (to verify correctness)
  • Error messages: What the MCP server returned
  • Timing context: When the failure occurred and how long it took
  • Attribution: Which user/agent made the request

This information is available immediately in the deco CMS UI, without SSH-ing into servers or grepping log files.
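The same structured data also lends itself to programmatic triage. As a sketch, assuming entries shaped like the fields listed above (names are illustrative), isolating the failed invocations of one tool is a single filter:

```typescript
// Sketch: filter monitoring logs down to the failed invocations of one
// tool. The LogEntry shape is an assumption for this example.
interface LogEntry {
  toolName: string;
  status: "success" | "failure";
  error?: string;
  caller: string;
  durationMs: number;
}

function failedCalls(logs: LogEntry[], toolName: string): LogEntry[] {
  return logs.filter((l) => l.toolName === toolName && l.status === "failure");
}

const logs: LogEntry[] = [
  { toolName: "create_issue", status: "success", caller: "alice", durationMs: 120 },
  { toolName: "create_issue", status: "failure", error: "422: missing title", caller: "bob", durationMs: 95 },
  { toolName: "list_repos", status: "success", caller: "alice", durationMs: 60 },
];

console.log(failedCalls(logs, "create_issue"));
// one entry: bob's call, error "422: missing title"
```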

Security & Compliance

Monitoring creates an immutable audit trail of all MCP activity:

  • Who accessed which tools, when
  • What data was read or modified
  • Which credentials were used
  • Failed authorization attempts

For regulated industries (finance, healthcare), this audit trail is essential for compliance. For any production system, it’s essential for security incident response.

Cost & Performance Analysis

Monitoring data reveals:

  • Expensive operations: Which tools are slowest or most resource-intensive
  • Usage patterns: Which tools are actually used (vs. which are just configured)
  • Bottlenecks: Where latency spikes occur
  • Optimization opportunities: Which Virtual MCPs could benefit from filtering or caching

This visibility enables data-driven decisions about MCP infrastructure optimization.

Behavior Analysis

Understanding how teams actually use MCP capabilities:

  • Tool adoption: Which connections are most valuable
  • Workflow patterns: Common sequences of tool invocations
  • Agent effectiveness: How agents’ tool selection evolves over time
  • User friction: Where errors or retries happen most frequently
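Workflow patterns, for instance, fall out of the logs directly. A minimal sketch: count consecutive tool pairs within a session’s invocation sequence to surface common “A then B” patterns (the session extraction is assumed to have happened already).

```typescript
// Sketch: count consecutive tool pairs in an invocation sequence to
// surface common workflow patterns (e.g. "search then create").
function toolPairs(sequence: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (let i = 0; i + 1 < sequence.length; i++) {
    const pair = `${sequence[i]} -> ${sequence[i + 1]}`;
    counts.set(pair, (counts.get(pair) ?? 0) + 1);
  }
  return counts;
}

const counts = toolPairs([
  "search_issues",
  "create_issue",
  "search_issues",
  "create_issue",
]);
console.log(counts.get("search_issues -> create_issue")); // 2
```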

This informs product decisions about which capabilities to prioritize or improve.

Monitoring as a Feedback Loop

deco CMS monitoring isn’t just passive logging—it creates a feedback loop that improves the platform:

Error patterns → Better authorization policies or connection health checks

Performance data → Virtual MCP filtering strategies or caching policies

Usage analytics → Recommendations for which tools to add or deprecate

Security events → Automated responses or enhanced access controls

This feedback loop is only possible with centralized, structured monitoring.

Privacy & Data Handling

Monitoring logs include tool inputs and outputs by default. For sensitive operations (e.g., querying customer data), consider:

  • Using API keys with appropriate scope restrictions
  • Implementing connection-level logging policies
  • Leveraging organization-level data retention controls
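One such policy, sketched here as an illustration (the field list and function are hypothetical, not a deco CMS API): redact argument fields whose names suggest sensitive data before they reach a log sink.

```typescript
// Hypothetical sanitization step: redact fields whose names suggest
// sensitive data before logging. The field list is an assumption.
const SENSITIVE = ["password", "token", "secret", "apikey"];

function redact(input: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(input)) {
    out[key] = SENSITIVE.some((s) => key.toLowerCase().includes(s))
      ? "[REDACTED]"
      : value;
  }
  return out;
}

const sanitized = redact({ query: "SELECT 1", apiKey: "sk-123" });
console.log(sanitized); // { query: "SELECT 1", apiKey: "[REDACTED]" }
```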

deco CMS provides the observability infrastructure; your organization defines the privacy policies.

Monitoring Philosophy

Traditional systems treat monitoring as “nice to have”—something you add after building the core functionality. deco CMS inverts this:

Monitoring is not optional. It’s how the control plane operates. Without monitoring logs, you cannot debug, audit, optimize, or secure MCP traffic at scale.

By making observability first-class infrastructure, deco CMS enables teams to run MCP in production with the same confidence they have in other critical systems.
