
How Kensho built a multi-agent framework with LangGraph to solve trusted financial data retrieval

LangChain Blog · Agent Frameworks · Advanced · Impact: 8/10

Kensho, S&P Global's AI innovation engine, built a multi-agent framework called Grounding using LangGraph to serve as a unified entry point, ensuring all AI outputs are grounded in trusted, traceable financial data.

Key Points

  • The core issue is fragmented financial data and trust: clients need efficient access to trusted information from S&P Global's vast, dispersed data sources.
  • The solution is a multi-agent framework called Grounding: it acts as a unified entry point, intelligently routing queries via a router to specialized Data Retrieval Agents (DRAs).
  • LangGraph is the engine of the framework: it handles query decomposition, agent coordination, and result aggregation, simplifying the development of complex workflows.
  • A key innovation is the custom data retrieval protocol: it ensures all agents communicate with a consistent data format, enabling cross-team collaboration and system scalability.

Analysis

In the era of AI agents, a core tension is becoming increasingly apparent: no matter how powerful a large model is, its outputs are worthless if it cannot accurately retrieve information from reliable, complex enterprise data sources. Kensho, S&P Global's AI innovation engine, recently shared how it used LangGraph to solve this challenge. This is more than a technical case study; it reveals a key architectural pattern for deploying enterprise-grade AI applications.

The problem stems from the inherent difficulty of financial data. S&P Global's data is not simple web text; it is highly structured and scattered across "data silos" in different business units (equity research, fixed income, macroeconomics, and so on). Financial professionals used to spend hours navigating between systems and verifying information. Kensho's goal was clear: provide a single, trusted data access point for all AI applications and agentic workflows, ensuring every insight is derived directly from verified datasets.

Their solution is called Grounding, and at its core is a multi-agent router architecture. The key insight is separation of concerns. Instead of having each AI agent parse natural language and handle every data source itself, Kensho designed a central router. The router receives a user's natural-language query, then intelligently decomposes it and routes the pieces to specialized Data Retrieval Agents (DRAs). Each DRA is maintained by the corresponding data team (e.g., the equity research team) and focuses solely on queries within its domain, acting as a highly specialized "data steward." This design significantly improves query accuracy and signal-to-noise ratio. Finally, the router aggregates the fragmented responses from the DRAs into a coherent, actionable insight.

LangGraph acts as the "glue" and "engine" here: its state-graph model is well suited to this complex "decompose-route-aggregate" workflow.
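The decompose-route-aggregate pattern can be sketched, framework-free, in a few lines of plain Python. Every name below (the keyword rules, the agent stubs, `route_query`) is a hypothetical illustration of the architecture described above, not Kensho's actual code; in production, each step would map to a LangGraph node with edges wiring the flow.

```python
# Minimal, framework-free sketch of "decompose-route-aggregate".
# All names here are illustrative assumptions, not Kensho's implementation.

# Specialized Data Retrieval Agents (DRAs): each handles one domain only.
def equity_dra(sub_query: str) -> dict:
    return {"agent": "equity", "query": sub_query, "data": "...equity records..."}

def fixed_income_dra(sub_query: str) -> dict:
    return {"agent": "fixed_income", "query": sub_query, "data": "...bond records..."}

# Router table: maps a detected domain to the DRA that owns it.
AGENTS = {"equity": equity_dra, "fixed_income": fixed_income_dra}

def decompose(query: str) -> dict:
    """Naive keyword decomposition; a real router would use an LLM."""
    routes = {}
    if "stock" in query or "equity" in query:
        routes["equity"] = query
    if "bond" in query or "yield" in query:
        routes["fixed_income"] = query
    return routes

def route_query(query: str) -> dict:
    """Decompose the query, fan out to DRAs, then aggregate their responses."""
    partials = [AGENTS[domain](sub) for domain, sub in decompose(query).items()]
    return {"query": query, "responses": partials}

result = route_query("Compare stock performance against bond yields")
print([r["agent"] for r in result["responses"]])  # ['equity', 'fixed_income']
```

The design point is that the router never touches domain data itself; it only decides who answers and merges the results, which is exactly what keeps each DRA independently ownable by its data team.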
Kensho's engineers noted that LangGraph made it easy to iterate on and test the router logic locally, providing a smooth developer experience. This points to a deeper trend: orchestration frameworks like LangGraph are evolving from auxiliary tooling into core infrastructure for enterprises building complex AI systems.

However, the most valuable lesson for practitioners may not be LangGraph itself, but the custom data retrieval protocol Kensho established alongside it. In distributed systems, inconsistent communication interfaces are a nightmare. Through early internal experimentation, Kensho's team enforced a unified response format (covering both structured and unstructured data) for all DRAs. This protocol became the "lingua franca" of the entire multi-agent ecosystem, dramatically accelerating cross-team collaboration and the rollout of new agents. From the equity research assistant to the ESG compliance agent, all are built on the same robust data protocol.

The practical takeaways for readers: first, when faced with messy internal data sources, consider this "central router + specialized agents" architecture instead of trying to build a single all-powerful agent. Second, before starting a multi-agent project, prioritize defining clear data exchange protocols and interface standards; this matters more than which large model you choose. Third, tools like LangGraph lower the barrier to building complex workflows, but the real moat lies in deep understanding and engineering encapsulation of business data logic. Kensho's case shows that the next battlefield for AI deployment is not at the model layer, but in reliably and scalably connecting models to trusted, complex enterprise data.
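A unified response protocol of the kind described above can be as simple as a shared schema that every DRA must emit. The field names below are illustrative assumptions, not Kensho's actual protocol; the point is that every agent returns the same shape, mixing structured records with unstructured text, and always carries source metadata for traceability.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a unified DRA response schema. Field names and the
# example dataset identifier are illustrative assumptions, not Kensho's
# actual protocol.
@dataclass
class DRAResponse:
    agent: str                                            # which DRA produced this
    structured: list[dict] = field(default_factory=list)  # tabular records
    unstructured: str = ""                                # free-text findings
    sources: list[str] = field(default_factory=list)      # dataset identifiers

    def is_grounded(self) -> bool:
        # An answer that cites no source fails the trust requirement.
        return bool(self.sources)

resp = DRAResponse(
    agent="equity",
    structured=[{"ticker": "SPGI", "metric": "revenue", "value": 12.5}],
    sources=["equity_financials_dataset"],  # hypothetical identifier
)
print(resp.is_grounded())  # True
```

Because the router and every downstream consumer only ever see `DRAResponse`, a new data team can ship a new agent without the router changing, which is precisely the scalability benefit the protocol is credited with.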

Analysis generated by BitByAI

Originally from LangChain Blog
