
What I've Learned Maintaining the MCP Python SDK

Track: Machine Learning, NLP and CV
Type: Talk
Level: Intermediate
Duration: 30 minutes

Abstract

After months of maintaining the MCP Python SDK and reviewing many community contributions, I've seen the same architectural missteps repeated. Developers struggle with questions that seem simple but have nuanced answers: When should one tool become three? When does a server need to split into two? How do you test an MCP server without spinning up a full client? When should you use resources or prompts instead of tools?

In this talk, I'll distill those lessons into practical guidance: how to design tool boundaries that scale with your server's complexity, how to structure your codebase for long-term maintainability, and how to build a testing strategy for your MCP server that actually works. I'll share real examples from the wild, both the antipatterns to avoid and the implementations worth adopting.

In 2026, the MCP Python SDK v2 will bring improved typing, a refined API, and better testing primitives. The architectural decisions you make today will determine whether that migration takes a day or a month.

Whether you're maintaining an internal tool or publishing to the community, you'll leave with a clear framework for evaluating your own server's design and concrete next steps to improve it.