Building AI you can trust: A blueprint for sustainable intelligence

In an exclusive keynote session at TechSparks 2025, titled ‘Trust at scale: An era of sustainable AI governance’, Kavita Viswanath, SVP Engineering & India Country Lead, Toast, shared her thoughts on how leaders can embed ethical AI practices into business models, ensure transparent and fair data usage, and adopt sustainable approaches that minimize environmental impact. She also addressed the growing importance of governance frameworks in an era of accelerated innovation.
Viswanath opened her keynote by introducing Toast, a leading global SaaS provider of point-of-sale (POS) solutions for restaurants and food and beverage retailers. The organization serves over 156,000 locations across the globe, generating over $2 billion in Annualized Recurring Run-Rate (ARR) as of the third quarter of 2025. Toast’s purpose goes deeper than POS systems; the company has built a comprehensive platform supporting the entire restaurant ecosystem, from front-of-house operations to backend management.
Delving into the discussion on sustainable AI governance, Viswanath started with a common meme of robots debating the issue of ethical AI. “This is where organizations are headed. How can we avoid this?” she asked the audience. The answer, according to her, lies in building sustainable AI centered on the principle of responsible intelligence. This is not just an engineering mandate or a coding practice. “This is about how, as leaders, you build and make the right business decisions that fall within this framework of building scalable, sustainable AI,” she said.
This endeavor is a three-part mandate spanning business, engineering, and product.
The sustainable, scalable, responsible mandate
Business mandate: Building a responsible intelligence framework starts with a business mandate, and this is where most organizations stumble. Viswanath cited the example of hailing a cab during the rains. The streets are flooded, it’s pouring, and the city is reeling. For a customer hailing a cab on a ride-sharing app, this is the worst possible time for surge pricing to kick in. However, Viswanath noted that this is not an engineering failure. The system worked exactly as designed, built on a business logic of maximizing revenue when demand is high.
The problem is that the intent and objectives were never translated into operational boundaries and testable engineering requirements. Building the right business mandate, with the right intent behind the decisions being made, is therefore a crucial part of responsible AI and system design.
Engineering mandate: The next pillar is the engineering mandate. Companies aim to build architectures that people can trust: systems that are auditable by design, with traceable data lineage and explainable logic. “This is not something that you add as an afterthought or bolt on to the system when the model fails at production. You have to think about it at the design phase, right from your intent-based business mandate and translate it in,” Viswanath said.
Product mandate: The product mandate focuses on building transparent interfaces with explainable logic. This ensures that users understand that the insights derived are from the right data points, and crucially, that they grasp the reasoning behind the AI-driven decisions offered to them. When users understand the reason behind these outcomes, the product becomes more powerful and actionable, building greater confidence and trust.
Humanity as a guardrail
In discussing guardrails, Viswanath addressed the importance of putting humans in the loop. The first step is context. Every AI model is trained on past data, but what happens when a model trained on that past encounters the complexity and chaos of the real world? When the model fails to grasp this context, the human in the loop becomes essential, providing the judgment and nuance the model lacks.
The next step, Viswanath said, is feedback. “As a user, when you interact with the model, whether it’s a prompt or additional context, correcting the model, or sharing mandates, these are not failures of the model, per se. These are very valuable training inputs that the model loves. The model is looking at correcting itself using some of these inputs. And this is what makes it adaptable,” she said.
Review, ensuring quality, reliability, and readiness to scale, comes next. It is the final checkpoint that takes a model from successful testing to full-scale deployment. But as we embrace human-in-the-loop systems, the challenge becomes clear: how do we make this work at scale in a world with thousands of models operating simultaneously?
The answer? Governance as an engineering discipline – from the continuous integration (CI) phase to post-deployment AI observability, where production models are tracked for drift and degradation. Any anomalies are treated as critical incidents. Beyond deployment, structural accountability brings rigor and traceability to the system.
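To make the drift-tracking idea concrete, here is a minimal, illustrative sketch of one common approach: comparing a feature’s production distribution against its training-time baseline with the Population Stability Index (PSI), and raising an alert when it crosses a conventional threshold. The function name, data, and 0.2 threshold are illustrative assumptions, not anything described by the speaker.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline (training-time)
    feature distribution and a current (production) one.
    Values above ~0.2 are commonly treated as significant drift."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0

    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-6) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Illustrative data: a production feature that has shifted upward
baseline = [i / 100 for i in range(100)]        # roughly uniform on [0, 1)
drifted = [0.5 + i / 200 for i in range(100)]   # concentrated in [0.5, 1)

score = psi(baseline, drifted)
if score > 0.2:  # the threshold is treated like any other alert condition
    print(f"drift alert: PSI={score:.2f}")
```

In a governance-as-engineering setup, a check like this would run continuously against production traffic, with threshold breaches routed into the same incident pipeline as any other critical failure.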
Environment or efficiency?
The final topic Viswanath addressed was the vast amount of energy and resources that AI workloads and the data behind them consume. With the rise of data centers worldwide, there is growing concern about their environmental impact.
Viswanath shared that, as technical leaders, the challenge is to build sustainable AI by embedding sustainable practices directly into systems and workflows. From data pipeline optimization to model architecture and intelligent compute scheduling, she advocated for solutions that are efficient while making prudent use of resources.
“When you look at building a sustainable intelligence framework and put it together with some of the things we spoke about earlier – starting from building responsible intelligence to scaling it with humans-in-the-loop and using governance as an engineering discipline, sustainable intelligence and building sustainable AI becomes a reality,” she said.
Disclaimer: This content represents the personal opinions of the author and does not reflect the views of Toast.
