Previewing Interrupt 2026: Agents at Enterprise Scale
LangChain's annual conference focuses on the challenges of scaling AI agents from production validation to enterprise-wide deployment, revealing how major companies build platforms, evaluate performance, and structure teams.
LangChain Blog · Thu, 09 Apr 2026 17:00:06 GMT
Human judgment in the agent improvement loop
LangChain argues that building reliable AI agents requires systematically integrating domain experts' tacit knowledge and judgment throughout the development lifecycle, rather than relying solely on the model's own capabilities.
LangChain Blog · Thu, 09 Apr 2026 15:00:12 GMT
How My Agents Self-Heal in Production
A LangChain engineer describes an end-to-end pipeline in which deployed AI agents automatically detect regressions, diagnose the underlying issues, and open fix PRs, combining statistical tests with intelligent triage to keep false positives low.
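The post pairs statistical detection with triage to avoid noisy alerts. A minimal sketch of the kind of statistical gate such a pipeline might use, here a two-proportion z-test on success rates (the function name, thresholds, and numbers are illustrative, not from the post):

```python
import math

def detect_regression(baseline_ok, baseline_n, current_ok, current_n,
                      z_threshold=2.0):
    """Flag a regression only when the drop in success rate is
    statistically significant, which filters out ordinary
    run-to-run noise (a major source of false positives)."""
    p_base = baseline_ok / baseline_n
    p_curr = current_ok / current_n
    # Pooled proportion under the null hypothesis "no change".
    pooled = (baseline_ok + current_ok) / (baseline_n + current_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / baseline_n + 1 / current_n))
    if se == 0:
        return False
    z = (p_base - p_curr) / se  # positive z => current run is worse
    return z > z_threshold

# A 94% -> 70% drop over 200 runs is flagged; 94% -> 92% is treated as noise.
print(detect_regression(188, 200, 140, 200))  # True
print(detect_regression(188, 200, 184, 200))  # False
```

Only traces from runs that pass this gate would then go on to the diagnosis and fix-PR stages.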
LangChain Blog · Fri, 03 Apr 2026 17:01:03 GMT
Agent Evaluation Readiness Checklist
LangChain proposes a six-point readiness checklist to work through before building agent evaluations, emphasizing manual analysis of 20-50 real failure traces before automating any tests.
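The manual-analysis step implies pulling a fixed, reviewable sample of failed runs. A small sketch of how that sampling might look, assuming traces are dicts with a boolean `success` field (the helper name and trace shape are assumptions, not from the checklist):

```python
import random

def sample_failure_traces(traces, k=30, seed=0):
    """Draw a reproducible sample of failed runs (in the checklist's
    suggested 20-50 range) for manual review before writing any
    automated evals."""
    failures = [t for t in traces if not t["success"]]
    rng = random.Random(seed)  # fixed seed so every reviewer sees the same set
    return rng.sample(failures, min(k, len(failures)))

# Example: 100 runs, 40 of which failed -> review a sample of 30.
runs = [{"id": i, "success": i % 5 > 1} for i in range(100)]
sample = sample_failure_traces(runs, k=30)
print(len(sample))  # 30
```

Reading these traces by hand is what surfaces the failure modes the eventual automated evals should target.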
LangChain Blog · Fri, 27 Mar 2026 14:00:00 GMT
How we build evals for Deep Agents
LangChain shares its core philosophy for building AI agent evaluation systems: more evals aren't inherently better; instead, precisely define and measure the agent behaviors you care about, and use those measurements to guide the agent's evolution.
LangChain Blog · Thu, 26 Mar 2026 15:18:56 GMT