Struggling to scale Agentic AI? CK Taneja, SVP of Transformation at Northern Trust, gives his expert advice on orchestration, technology, and change management.
Insights from CK Taneja, SVP of Transformation and Innovation at Northern Trust, on the GBS Rewired Podcast
Gartner predicts that 40% of agentic AI projects will fail by 2027. The reason, more often than not, is not the technology; it is operational readiness. As enterprises move from carefully controlled pilots into full-scale deployment, the complexity multiplies. Suddenly, AI agents are not just impressing tech-savvy teams in proof-of-concept settings; they are interacting with employees at every level, customers, and suppliers across global operations.
In a recent episode of GBS Rewired, hosted by Sally Fletcher of Hypertos, CK Taneja — Senior Vice President of Transformation and Innovation at Northern Trust — shared a frank and practical perspective on what it really takes to scale agentic AI inside a complex financial services enterprise.
One of the most insightful parts of the conversation tackled why so many organisations get permanently stuck in pilot mode. CK's diagnosis was clear: pilots tend to be owned by innovative pockets of the business, tasked with demonstrating capability. But scaling requires something entirely different — a whole-organisation adoption plan anchored in strategic intent.
His advice? Stop talking about AI. Start talking about the outcomes you need.
"AI is part of your strategy. And I say don't even talk about AI — talk about automation, operational efficiency and effectiveness, even resiliency," CK explained. "When you have the right objectives, what you are going to do is work on using AI smartly, intelligently, and supporting the people, rather than saying I'm using AI and how many agents I have deployed. That's not the point."
This reframing is crucial. When organisations measure success by the number of agents deployed rather than by operational outcomes — cost reduction, customer satisfaction, system uptime — they have already lost the plot. CK drew a sharp analogy with earlier Agile transformations, where KPIs became about Agile adoption itself rather than the business results it was meant to drive.
A perennial tension in agentic AI is where to keep humans in the loop without creating the very bottlenecks AI was supposed to remove. CK's framework here is grounded and practical.
CK also offered a compelling counter-argument to the common objection that AI decision-making is opaque. In fact, he argued, AI offers something humans almost never do: a full, auditable log of every step in the decision process. "I don't know how humans are making decisions because they do not record where the decisions are happening. Each human has a different logic and different bias. In AI, I can actually show the observability — you can see how it is moving through the process."
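The observability CK describes can be pictured as a decision trace that every agent step appends to. The sketch below is purely illustrative (the `DecisionTrace` class, agent names, and rationale strings are hypothetical, not Northern Trust's system); it shows the kind of auditable, timestamped log that a human decision-maker rarely produces.

```python
import json
import datetime

class DecisionTrace:
    """Hypothetical decision-trace logger: each agent step records its
    rationale and outcome, producing the auditable trail CK describes."""

    def __init__(self):
        self.steps = []

    def record(self, agent: str, rationale: str, outcome: str):
        self.steps.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent,
            "rationale": rationale,
            "outcome": outcome,
        })

    def export(self) -> str:
        # Serialise the full trail for audit or review.
        return json.dumps(self.steps, indent=2)

trace = DecisionTrace()
trace.record("kyc_agent", "new customer, no prior record", "passed")
trace.record("credit_agent", "income verified, ratio within policy", "approved")
print(trace.export())
```

Unlike a human reviewer's unrecorded judgement, every entry here states which agent acted, why, and with what result, so the reasoning can be replayed after the fact.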
"Orchestration" was one of the buzzwords at SSON's flagship event "Shared Services & Outsourcing Week", which CK recently chaired, and clearly it's a topic on GBS leaders' minds.
Using the example of loan processing at Northern Trust — where the company serves high-net-worth clients across the globe — CK described how effective orchestration means the system does not follow a fixed workflow. Instead, it dynamically constructs the right workflow based on the incoming data.
An existing customer may not need KYC checks. A returning customer with a change in income triggers a proof-of-income verification step. An international client may require sanctions screening using a different data source entirely, which might call for a different LLM altogether.
This is the orchestration layer: a decision engine that determines which agents are invoked, in what sequence, and where human checkpoints are placed — dynamically, at the point of each request.
On the architecture side, CK likened the decision of which LLMs to deploy to workforce strategy: some skills are built in-house, some are contracted, some are outsourced. Similarly, some AI capabilities might be developed internally, some purchased as off-the-shelf solutions, and some rented as cloud services. The architecture question then extends to hosting: on-premises, private cloud, public cloud, or a private instance within a shared cloud environment.
"It looks easy," CK noted. "And people say AI is taking over and you can get rid of people. No. The skill sets required to get to AI are shifting. You are going to change the pattern of the workforce, but you are not going to eliminate it. You have to be more process-driven and more data-solid than before."
When asked how he decides where to deploy AI within Northern Trust, CK's answer was notably free of the usual technology-led thinking. He does not start with "where can AI add value?" He starts with where the organisation hurts most.
His diagnostic framework looks for two types of operational pressure.
Once those pain points are identified, AI becomes one tool in a broader toolkit that may also include process re-engineering, workforce restructuring, and technology modernisation. The key discipline is cost-benefit analysis: what is the cost of deployment versus the efficiency gain? And crucially — what is the organisation's risk appetite?
CK used a memorable analogy: walking into a Walmart, you do not lock everything up simply because it could theoretically be stolen. You define what you can tolerate losing, put high-risk items behind glass, and let low-risk items remain accessible. The same logic applies to AI deployment in regulated environments.
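The Walmart analogy maps naturally onto risk-tiered deployment. The sketch below is hypothetical (the thresholds, tier names, and example use cases are invented for illustration): use cases whose expected loss sits within the defined tolerance run autonomously, medium-risk ones go "behind glass" with a human checkpoint, and anything beyond appetite stays manual.

```python
def risk_tier(expected_loss: float, tolerance: float) -> str:
    """Assign a deployment tier based on what the organisation can tolerate losing."""
    if expected_loss <= tolerance:
        return "autonomous"        # low-risk item: leave it accessible
    if expected_loss <= 10 * tolerance:
        return "human_checkpoint"  # behind glass: approval required
    return "no_ai"                 # beyond risk appetite: keep fully manual

# Hypothetical use cases with rough expected-loss figures:
for use_case, loss in [("FAQ triage", 100), ("payment routing", 5_000), ("loan approval", 50_000)]:
    print(use_case, "->", risk_tier(loss, tolerance=1_000))
```

The point is not the specific numbers but the discipline: the risk appetite is defined first, and the deployment mode follows from it.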
For more insight into “How to Scale Agentic AI,” listen to the full episode here.