Microsoft Agent Framework 1.0 (April 3, 2026) collapses Semantic Kernel and AutoGen into one SDK across .NET and Python, with YAML-declarative agents and native MCP + A2A. Google ADK doubles down on explicit Sequential / Parallel / Loop orchestration with Vertex AI deployment. smolagents (HuggingFace, ~26K stars) takes a third path entirely: agents write Python code instead of emitting JSON tool calls, which cuts step count ~30% on multi-tool tasks but expands the blast radius. Pick MAF for Microsoft-stack shops with Azure and a mixed .NET/Python codebase. Pick ADK for GCP-native teams that need auditable graph orchestration. Pick smolagents for open-source-first teams running their own models where code-as-action beats JSON tool dispatch.
Last month a manufacturing client asked us to extend an existing Semantic Kernel pipeline with a planner that coordinates three specialist agents. We started writing the planner. Two days in, Microsoft shipped Agent Framework 1.0 — and the planner abstraction we were building was deprecated before we merged it. We rewrote against MAF in a day and the YAML agent definitions ended up in the same Git repo as the .NET service code. The migration cost us less than the original implementation would have.
That April 3, 2026 release is a big deal. Microsoft Agent Framework 1.0 collapses two parallel agent stories — Semantic Kernel for enterprises and AutoGen for research — into one supported product, with first-class MCP and A2A support and YAML-declarative agents across .NET and Python. It also resets the comparison every team is now asking about: how does it stack up against Google ADK and against the open-source darling, HuggingFace's smolagents?
This post is the head-to-head, with the trade-offs we've actually hit on client projects. For the broader landscape including LangGraph, CrewAI, and OpenAI Agents SDK, see our comparison of major agent frameworks in 2026. For the Google vs AWS cloud-native split specifically, see our Google ADK vs AWS Strands deep-dive.
The April 3 Consolidation: What Changed
Microsoft has been running two agent stories in parallel for over a year. Semantic Kernel was the enterprise SDK — kernels, plugins, planners, the .NET-first abstraction. AutoGen was the Microsoft Research project — conversable agents, group chats, the Python-first multi-agent framework. Customers had to pick one and live with the bet, and product teams inside Microsoft were duplicating effort across both.
April 3, 2026 ended that. Microsoft Agent Framework 1.0 is the merger. Three things changed materially for everyone building on the Microsoft stack:

- One SDK across .NET and Python, replacing the Semantic Kernel / AutoGen split.
- YAML-declarative agent definitions that load identically in C# and Python.
- Native MCP and A2A support, built in rather than bolted on as community add-ons.
The migration story is uneven. Semantic Kernel users carry their plugins forward — the kernel and plugin abstractions survive — but planners and agent registration change shape. AutoGen users get hit harder: conversable agents and group chat are gone, replaced by declarative workflows. AG2 (the community fork) is in maintenance mode, so the practical answer for most teams is "migrate to MAF." Budget one to two engineering weeks per non-trivial agent.
```yaml
name: refund_resolver
model: gpt-5
instructions: |
  You resolve customer refund requests. Verify policy, check eligibility,
  and either issue the refund or hand off to a human reviewer.
tools:
  - mcp: stripe-server
  - mcp: orders-server
  - function: notify_human_reviewer
guardrails:
  max_refund_usd: 500
  require_handoff_above: 250
handoffs:
  - agent: human_reviewer
    when: amount_exceeds_threshold
```

*A minimal MAF agent in YAML.*
That YAML loads identically in C# and Python. The same definition deploys to Azure AI Foundry, to Azure Container Apps, or to a self-hosted runtime. This is the operational shift that matters most — agent topology stops being buried in code.
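To make that concrete, here's an illustrative sketch that parses such a definition with PyYAML (an assumption on our part; the actual MAF loader API is not shown). The point is only that the topology is plain, diffable data:

```python
# Illustration only: the real MAF loader API is not shown here. Parsing the
# definition with PyYAML (assumed installed) shows the agent topology is
# data, not code.
import yaml

definition = """
name: refund_resolver
model: gpt-5
guardrails:
  max_refund_usd: 500
  require_handoff_above: 250
handoffs:
  - agent: human_reviewer
    when: amount_exceeds_threshold
"""

spec = yaml.safe_load(definition)
print(spec["name"])                                 # refund_resolver
print(spec["guardrails"]["require_handoff_above"])  # 250
```

Because the definition is data, guardrail changes like raising `require_handoff_above` show up as one-line diffs in code review rather than logic changes buried in a service.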
Google ADK in 2026: Explicit Graphs, Doubled Down
Google's Agent Development Kit (ADK) hasn't pivoted in response to MAF. If anything, it has doubled down on its core philosophy: explicit orchestration. You define exactly how agents coordinate using SequentialAgent, ParallelAgent, and LoopAgent — three primitives that cover roughly 90% of real orchestration patterns. There's no model-driven dynamic routing. Every execution path is visible in the graph definition.
```python
from google.adk.agents import SequentialAgent, ParallelAgent, LoopAgent

intake_pipeline = SequentialAgent(
    name="ticket_intake",
    sub_agents=[classify_agent, enrich_agent, route_agent],
)

context_gather = ParallelAgent(
    name="context_gather",
    sub_agents=[crm_agent, billing_agent, history_agent],
)
```

What's changed since our March ADK vs Strands comparison is mostly enterprise polish — tighter Vertex AI deployment, a better debugging UI, more aggressive A2A interop with Bedrock AgentCore. The core decision criterion is unchanged: ADK is the right call when you need to point at the graph and tell an auditor exactly how a decision got made. That's why we picked it for the logistics client whose operations lead asked "which agent made which decision and why?" — explicit graphs make that question answerable in seconds.
The cost is verbosity. The same workflow that takes ten lines of code in smolagents takes thirty in ADK. That's a feature, not a bug, when reliability matters more than line count — and per Princeton's data, reliability is the dimension where most agents fail in production.
smolagents: The Third Philosophy
smolagents is the one most enterprise teams overlook and most open-source-first teams quietly love. HuggingFace's framework — ~26K GitHub stars, ~3K lines of core code — takes a fundamentally different stance from MAF and ADK on what an agent's "action" should look like.
In MAF, ADK, LangGraph, and almost every other framework, the agent emits a JSON tool call. The runtime parses it, dispatches to the named tool, returns the result, and loops. In smolagents, the agent writes Python code. The runtime executes the code in a sandbox and returns whatever the code printed or returned.
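That JSON loop can be sketched in a few lines. The names below are illustrative, not any framework's real API:

```python
# A sketch of the JSON tool-call loop that MAF, ADK, and most frameworks
# implement under the hood. TOOLS, dispatch, and get_order are illustrative.
import json

TOOLS = {
    "get_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def dispatch(tool_call_json: str) -> dict:
    """Parse the model's emitted JSON, look up the named tool, and call it."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

# The model emits a string like this; the runtime parses it, runs the tool,
# and feeds the result back into the next model turn.
result = dispatch('{"name": "get_order", "arguments": {"order_id": "A-17"}}')
print(result["status"])  # shipped
```

smolagents replaces this parse-and-dispatch cycle with direct execution of model-written code: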
```python
from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=HfApiModel("Qwen/Qwen3.5-72B-Instruct"),
)

agent.run(
    "Find the three highest-rated coffee grinders under $200 "
    "and rank them by espresso suitability."
)
```

That Python expression-as-action shifts everything. HuggingFace's research showed code-acting agents complete multi-tool tasks in roughly 30% fewer steps than JSON-tool agents. The reason is composition: one Python expression can chain three operations — `[search(q) for q in queries if q not in cache]` — that JSON tool calling has to express as three separate dispatches with three round trips to the model.
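To see the composition win concretely, here's a toy sketch. `search` and `cache` are hypothetical stand-ins for real tools, not smolagents API:

```python
# Hypothetical `search` and `cache` stand in for real tools. One Python
# expression filters and fans out in a single agent step; JSON dispatch
# would need one model round trip per search call.
cache = {"coffee grinder reviews"}

def search(q: str) -> str:
    return f"results for {q}"

queries = ["coffee grinder reviews", "burr vs blade", "espresso grind size"]

# One code action: skip cached queries, search the rest.
results = [search(q) for q in queries if q not in cache]
print(results)  # two results; the cached query is skipped
```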
The trade-off is execution safety. The agent can in principle import os; os.system(...). You must sandbox. E2B, Docker, gVisor, and the new SmolVM (~200ms cold-start microVMs) are the standard options — see our SmolVM vs Firecracker vs Docker comparison for how to pick. Without a sandbox, smolagents is a footgun. With a sandbox, it's a remarkably efficient way to compose tools.
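As an illustration of the isolation idea only, and emphatically not a production security boundary, you can at least run generated code in a separate interpreter process:

```python
# Minimal sketch of isolating agent-generated code. A child process with a
# timeout is NOT a sandbox; use E2B, Docker, gVisor, or a microVM in
# production. This only illustrates the execution seam.
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run agent-generated code in a separate interpreter and return stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env vars and user site-packages
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.stdout
    finally:
        os.unlink(path)

print(run_untrusted("print(sum(range(10)))"))  # 45
```

In a real deployment, `run_untrusted` is the seam where an E2B, Docker, or gVisor-backed executor plugs in; the agent loop shouldn't care which one.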
Head-to-Head: The Comparison Table
The dimensions that actually drive the decision, after a dozen client projects across all three:
| Dimension | Microsoft Agent Framework 1.0 | Google ADK | smolagents |
|---|---|---|---|
| Released | April 3, 2026 (1.0 GA) | 2025, mature | 2024, ~26K stars |
| Languages | .NET + Python | Python (Java in preview) | Python |
| Action format | JSON tool calls | JSON tool calls | Python code |
| Orchestration | Declarative + workflow primitives | Sequential / Parallel / Loop graph | Single-agent loop (multi via glue) |
| Definition style | YAML-declarative | Code-first | Code-first, minimal |
| MCP support | Native (client + server) | Native (client + server) | Via tool wrappers |
| A2A support | Native | Native | Not built in |
| Deployment target | Azure AI Foundry, Container Apps | Vertex AI, Cloud Run | Anywhere Python runs |
| Observability | Azure Monitor, OTel GenAI | Built-in web debugger, Cloud Trace | OTel, lightweight tracer |
| Sandbox required | No (JSON tools) | No (JSON tools) | Yes (code execution) |
| Best fit org | Microsoft-stack enterprise | GCP-native, audit-heavy | Open-source-first, self-hosted |
| Lines of glue code | ~50 (with YAML) | ~80 for a 3-agent graph | ~15 |
| Vendor lock-in | Medium (Azure-aligned) | Medium (GCP-aligned) | None |
Decision Framework: Which One for Which Org
Two standing recommendations first, then the matrix that's saved us hours of "should we evaluate X" debates with clients.
A specific recommendation we keep making: if you're building a brand-new agent system today and you don't have a strong cloud preference, MAF 1.0 is the lowest-regret default for enterprises. The .NET + Python split, the YAML declarations, the native protocols, and the fact that Microsoft now has one supported agent product instead of two competing ones — that's a much better surface area to build on than it was six months ago.
For open-source-first teams running their own models on their own GPUs, smolagents is underrated. The code-as-action paradigm composes tools in ways JSON tool dispatch can't, and the codebase is small enough to read in an afternoon. Pair it with SmolVM or E2B for sandboxing and you have a stack that's easier to reason about end-to-end than any of the cloud-native frameworks.
| If your situation is... | Pick |
|---|---|
| Microsoft shop, Azure-deployed, mixed .NET + Python | Microsoft Agent Framework 1.0 |
| GCP-deployed, need auditable graph orchestration, Gemini-aligned | Google ADK |
| Open-source-first, self-hosted models, code-acting fits the task | smolagents |
| AWS-deployed, want serverless | AWS Strands (see our comparison) |
| Already deep in LangGraph with shipped workflows | Stay on LangGraph for now |
| Building OpenAI-only prototypes fast | OpenAI Agents SDK |
| Need multi-agent orchestration above all | MAF or LangGraph |
Migration Paths: What Breaks
If you're on Semantic Kernel or AutoGen today and looking at MAF 1.0, the surface that breaks:
From Semantic Kernel:

- The kernel and plugin abstractions survive; existing plugins carry forward largely intact.
- Planners are deprecated; planning moves to MAF's declarative workflows.
- Agent registration changes shape, so expect to rewrite startup and wiring code.

From AutoGen / AG2:

- Conversable agents and group chat are gone, replaced by declarative workflows.
- AG2, the community fork, is in maintenance mode; the practical path is migrating to MAF.
- Plan for asyncio across the stack.

For multi-agent orchestration patterns that survive any framework migration, see our guide to orchestration that actually ships — supervisor, pipeline, and broadcast patterns are framework-independent and travel well across MAF, ADK, and smolagents.
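As a flavor of why the supervisor pattern travels well, here's a framework-free sketch. Everything is plain Python; the keyword router stands in for the LLM routing call a real system would make:

```python
# Framework-independent supervisor pattern: a router picks one specialist.
# In production an LLM call makes the routing decision; a keyword heuristic
# stands in for it here.
def billing_agent(task: str) -> str:
    return f"billing handled: {task}"

def shipping_agent(task: str) -> str:
    return f"shipping handled: {task}"

SPECIALISTS = {"billing": billing_agent, "shipping": shipping_agent}

def supervisor(task: str) -> str:
    route = "billing" if "refund" in task.lower() else "shipping"
    return SPECIALISTS[route](task)

print(supervisor("Refund request for order A-17"))  # routed to billing
```

Because the pattern is just "router plus callables," porting it means swapping the specialist functions for MAF agents, ADK sub-agents, or smolagents runs; the shape survives.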
What We're Picking on Client Projects in Q2 2026
Two of our recent picks appear above: MAF for the manufacturing client's planner rewrite, and ADK for the logistics client who needed auditable decisions. The reasoning generalizes:
The framework decision used to feel high-stakes because the picks were so different. With MAF and ADK now sharing a protocol surface (MCP + A2A), and with smolagents owning a niche the cloud-native frameworks don't compete for, the choices have actually gotten cleaner. Pick by stack, pick by orchestration philosophy, and don't over-engineer it. The framework matters far less than the reliability discipline you build on top of it.
For the broader picture of where agent tooling is heading, our best tools to build AI agents overview and the AI agents pillar page collect the patterns we keep returning to. The framework picks change every quarter; the architecture lessons don't.
Frequently Asked Questions
**What is Microsoft Agent Framework 1.0?**

Microsoft Agent Framework 1.0 launched April 3, 2026 and consolidates Semantic Kernel (the enterprise SDK) and AutoGen (the multi-agent research framework) into a single supported product. It ships for both .NET and Python, supports YAML-declarative agent definitions, and treats MCP and A2A as native protocols rather than community add-ons. Semantic Kernel users get a migration path that preserves plugins; AutoGen users get a stable production runtime in exchange for some pattern changes around group chat and conversable agents.


