AI ambition is rising across financial services, but enterprise-wide value remains uneven. Gartner estimates that a significant proportion of AI pilots fail to scale into full production, and by 2026, organisations that cannot tie AI initiatives to measurable business outcomes risk scaling back investments. At the same time, Gartner predicts that by 2027, more than half of generative AI models used by enterprises will be industry-specific or domain-tuned, signalling a shift from experimentation to deeply embedded, contextual intelligence.
In India’s NBFC sector, where underwriting precision, regulatory compliance, and turnaround time directly influence portfolio quality, AI maturity must move beyond isolated use cases. Governance, explainability, and risk discipline are emerging as critical differentiators.
In conversation with ETCIO, Markandey Upadhyay, Chief Data & Analytics Officer at Piramal Finance, outlines how the company is architecting an AI-native operating model, embedding agentic decision systems, risk-specific scorecards, and governance-by-design across the lending lifecycle.

What does it take for an NBFC to move from isolated AI use cases to enterprise-wide impact?
Piramal has always been an AI-native NBFC. Technology and data form the foundation that enables us to deeply embed AI across our core journeys. In fact, with rapid advancements in AI model capabilities, we are now reimagining our tech journey and workflows.
Our operating model has always centred on AI-led, decision-centric design, with clear decision ownership, human-in-the-loop safeguards, and explainable, auditable outputs appropriate for a regulated environment. Risk, business, and product teams co-own AI-driven outcomes.
Scaling further requires everyday AI: copilots and agents that enable teams to access insights, run what-if analyses, and act using natural language, improving productivity and consistency. Governance remains foundational: model risk management, regulatory alignment, monitoring, and override controls are embedded from inception to build an AI-native institution where intelligence strengthens productivity, customer experience, and risk quality enterprise-wide.
Which metrics best determine whether AI in underwriting is truly working: turnaround time, early delinquencies, cost efficiency, or risk-adjusted returns?
At Piramal Finance, we track AI performance by the business impact it delivers. Risk separation is our primary measure: we deploy product-specific scorecards for credit, fraud, income, eligibility, and asset evaluation rather than one-size-fits-all models. Our patent-pending leverage risk model identifies customers who may become over-leveraged post-disbursement; these customers carry 3x-6x higher risk.
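As an editorial aside, the "risk separation" a scorecard achieves is commonly quantified with statistics such as the Kolmogorov-Smirnov (KS) statistic, the maximum gap between the cumulative score distributions of defaulting and non-defaulting accounts. The sketch below is a generic illustration of that standard measure, not Piramal Finance's implementation; all data is hypothetical.

```python
# Illustrative sketch: KS statistic as a measure of scorecard risk separation.
# KS = max |CDF_bad(s) - CDF_good(s)| across score thresholds.

def ks_statistic(scores, labels):
    """scores: model scores; labels: 1 = default (bad), 0 = repaid (good)."""
    pairs = sorted(zip(scores, labels))
    n_bad = sum(labels)
    n_good = len(labels) - n_bad
    cum_bad = cum_good = 0
    best = 0.0
    for _, is_bad in pairs:
        if is_bad:
            cum_bad += 1
        else:
            cum_good += 1
        best = max(best, abs(cum_bad / n_bad - cum_good / n_good))
    return best

# Toy portfolio: defaults cluster at low scores, so separation is high
scores = [300, 420, 480, 550, 610, 640, 700, 720, 760, 800]
labels = [1,   1,   1,   0,   1,   0,   0,   0,   0,   0]
print(round(ks_statistic(scores, labels), 2))  # prints 0.83
```

A higher KS means the scorecard ranks bad accounts apart from good ones more cleanly, which is what ultimately feeds through to credit costs.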
We have also deployed our first Agentic-Based Credit Decision (ABCD) framework for a live product. This architecture carefully orchestrates multiple supervised ML models spanning credit risk, income, eligibility, and fraud, combined with contextual memory and AI agents that synthesise outputs into a final credit decision. This level of risk separation directly translates into better credit costs and portfolio quality.

Operational efficiency is the second critical dimension. AI-driven document verification and bank statement digitisation have reduced turnaround times and operational overhead, reflected in our Opex-to-AUM ratio. We also monitor early delinquencies and portfolio performance through dedicated hindsight teams. The real measure of success is when all these metrics move together: better risk outcomes, lower costs, faster decisions, and sustained growth.
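To make the orchestration pattern concrete for readers: a framework like the one described synthesises several model outputs into one decision, with ambiguous cases routed to a human underwriter. The sketch below is purely hypothetical; the thresholds, field names, and rules are editorial assumptions, not the ABCD framework's actual logic.

```python
# Hypothetical sketch of synthesising supervised-model outputs into a final
# credit decision. All thresholds and rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ModelOutputs:
    credit_risk: float   # estimated probability of default, 0..1
    fraud_risk: float    # estimated probability of fraud, 0..1
    income_ok: bool      # income verification passed
    eligible: bool       # product eligibility check passed

def decide(o: ModelOutputs) -> str:
    """Hard declines fire first; clean profiles auto-approve;
    anything ambiguous goes to human review (human-in-the-loop)."""
    if not o.eligible:
        return "DECLINE"
    if o.fraud_risk > 0.8:
        return "DECLINE"
    if o.credit_risk < 0.05 and o.income_ok:
        return "APPROVE"
    return "REFER_TO_UNDERWRITER"

print(decide(ModelOutputs(0.02, 0.1, True, True)))  # prints APPROVE
print(decide(ModelOutputs(0.20, 0.3, True, True)))  # prints REFER_TO_UNDERWRITER
```

The key design point the interview highlights is that no single model decides alone: each specialised model contributes one signal, and the synthesis layer (plus a human for borderline cases) owns the outcome.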
Can alternative data meaningfully expand credit access for new-to-credit borrowers without weakening risk discipline?
Only when alternative data is used thoughtfully within a robust risk framework. Not all data carries information value, and even when it does, it is useful only if it delivers incremental predictive power over and above existing traditional data. At Piramal Finance, we use alternative data not as a replacement for traditional credit assessment, but as an additional layer of intelligence that enhances our ability to differentiate risk, particularly for customers with thin credit files.

We deploy AI models that triangulate information across multiple data points: KYC records, bank statements, salary slips, and alternative signals such as digital footprints and transaction patterns. Critically, we do not lower our risk standards to expand access. Instead, we surface customers who would have been declined under traditional models but who demonstrate strong repayment behaviour. Our proprietary models are continuously validated through hindsight analysis, and AI-generated insights are always reviewed by underwriters. The result is meaningful credit inclusion without compromising risk discipline, with technology acting as a true enabler of financial inclusion.
How are you ensuring AI-driven decisions remain explainable, auditable, and aligned with regulatory expectations?
For us, innovation and trust are inseparable. We have invested deeply in governance, explainability, and end-to-end model lifecycle management to ensure every AI-driven decision is transparent, auditable, and compliant. Dedicated AI annotation and hindsight teams continuously track model performance, drift, and reliability. Every model is rigorously vetted before going live, with detailed documentation of model logic, training data, and decision criteria. Human oversight is non-negotiable: AI-generated alerts are always routed through human review. For example, our document tampering models flag anomalies like rasterisation issues and non-standard fonts, but the final decision rests with a human reviewer. Explainability is built into our models by design, not retrofitted later. By embedding governance, monitoring, and human oversight from day one, we responsibly harness AI while maintaining regulatory confidence and customer trust.
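Auditable human-in-the-loop review of this kind typically rests on a simple invariant: every model flag is stored with its rationale and model version, and the decision field is only ever filled in by a named human reviewer. The sketch below illustrates that generic pattern; the field names and values are hypothetical, not Piramal Finance's actual schema.

```python
# Illustrative sketch of an auditable alert record: the model flags and
# explains; only a human reviewer can set the final decision.
# Schema and values are hypothetical assumptions for illustration.

from datetime import datetime, timezone

def make_alert(doc_id, model_version, reasons):
    """Create an alert; final_decision stays None until a human resolves it."""
    return {
        "doc_id": doc_id,
        "model_version": model_version,   # auditability: which model flagged it
        "reasons": reasons,               # explainability: why it was flagged
        "flagged_at": datetime.now(timezone.utc).isoformat(),
        "final_decision": None,           # human-in-the-loop: reviewer fills in
        "reviewed_by": None,
    }

def resolve(alert, reviewer, decision):
    """Record the human reviewer's decision alongside the model's rationale."""
    alert["reviewed_by"] = reviewer
    alert["final_decision"] = decision
    alert["reviewed_at"] = datetime.now(timezone.utc).isoformat()
    return alert

alert = make_alert("KYC-123", "tamper-v2.1",
                   ["rasterisation artefact", "non-standard font"])
resolved = resolve(alert, "underwriter_42", "REJECT_DOCUMENT")
print(resolved["final_decision"])  # prints REJECT_DOCUMENT
```

Because the record carries both the model's reasons and the reviewer's identity, every decision can be reconstructed after the fact, which is the substance of "explainable and auditable" in a regulated setting.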
Gartner research suggests that by 2026, organisations that fail to link AI initiatives to measurable business outcomes may scale back investments. In that context, what will separate AI leaders from the rest in India’s NBFC sector over the next three to five years?
McKinsey’s findings underscore a reality: most AI programmes fail to create material value. The minority that succeed treat AI as a core competency, not a vendor-led experiment, and we place ourselves in the top 5%. This requires a mindset shift from business leaders. We have invested in in-house data science, engineering, agentic-AI, and foundational model capabilities, building proprietary solutions.
Execution is systematic: AI is embedded across the lending lifecycle in our tech stack, with 145+ live use cases and AI-assisted software development at scale (~8 billion tokens per month). Business leaders, data scientists, and engineers operate in integrated teams, solving operational pain points and scaling impact quickly.
Governance is a growth enabler, with strong controls around data lineage, privacy, explainability, and continuous model monitoring under human oversight. AI supports over 5 million customers and US$7 billion in AUM, improves approvals and conversions, reduces fraud and costs, and underpins a projected 25 bps structural opex benefit, distinguishing AI-native NBFCs from experimental adopters.