Wednesday, July 23, 2025

Building trustworthy AI agents with SAS Viya

As AI agents take on more responsibility in our most sensitive systems – from finance to public safety to health care – one thing is clear: trust can’t just be designed in. It has to be sustained. Continuously.

In the first two parts of this series, we explored the ethical foundations of trusted AI agents and the design challenges they raise. Making trust operational and building it into these systems’ day-to-day behavior keeps them accountable and aligned with human values.

This evolution from principle to practice is where the real work begins.

From design intent to lifecycle discipline 

Translating AI trust principles into real-world practice requires more than good design – it takes continuous oversight throughout an agent’s lifecycle. Consider financial services, where AI agents might adjust credit limits based on market conditions or detect and respond to fraud in real time. In health care, AI could help prioritize resource allocation during emergencies. As systems in sensitive areas like these gain autonomy, operational trust mechanisms become critical.

In other words, intending fairness or accountability is no longer enough. These values must be embedded layer by layer into how AI agents are built, deployed and maintained.

From intent to implementation  

AI agent trust requires continuous action, not assumptions. Organizations must embed transparency, accountability, and fairness throughout system layers.

SAS® Viya® helps do just that. With a powerful suite of governance tools, automation capabilities and model oversight features, SAS Viya turns ethical intentions into operational reality, enabling AI agents that are both intelligent and trustworthy from the ground up.

Building trust throughout the AI agent’s lifecycle 

Operationalizing trust in AI agents means embedding responsibility into every phase of development and deployment, not just at the design stage. SAS Viya provides a framework for doing exactly that, anchored in four essential pillars: strong data foundations, decision transparency, strategic human-AI collaboration and continuous trust maintenance.

Together, these pillars help organizations turn high-level ethics into real-world impact.

1. Foundation: Data integrity and governance

Trustworthy AI agents begin with trustworthy data. That means establishing comprehensive data governance practices, including:

  • Lineage tracking to understand where data comes from and how it evolves.
  • Automated quality assessments to ensure consistency and reliability.
  • Proactive bias detection to build fair and inclusive models.
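As a sketch of what a proactive bias check can look like, the snippet below compares approval rates across demographic groups and flags any group that deviates from the overall rate. The group labels, sample data and 10% tolerance are illustrative assumptions, not SAS Viya functionality:

```python
# Minimal demographic-parity screen: flag groups whose approval rate
# deviates from the overall approval rate by more than a tolerance.

def approval_rates(records):
    """records: list of (group, approved) pairs -> {group: approval rate}."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def flag_bias(records, tolerance=0.10):
    """Return groups whose approval rate differs from the overall rate
    by more than `tolerance` (a simple demographic-parity check)."""
    rates = approval_rates(records)
    overall = sum(int(a) for _, a in records) / len(records)
    return sorted(g for g, r in rates.items() if abs(r - overall) > tolerance)

# Illustrative data: group A approved at 0.80, group B at only 0.50.
records = ([("A", True)] * 120 + [("A", False)] * 30
           + [("B", True)] * 25 + [("B", False)] * 25)
print(flag_bias(records))  # ['B']
```

In practice this kind of check would run automatically during data preparation, so skew is caught before a model ever trains on it.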

SAS Viya’s integrated data preparation and profiling capabilities help ensure agents operate on representative, accurate and ethically sourced datasets, ultimately establishing the bedrock for reliable autonomous decision-making.

Fig. 1: End-to-end AI agent lifecycle in SAS Viya – from data access and ETL workflow creation to model training, governance and deployment in an AI agent environment.

2. Transparency in action

Organizations need AI systems that are explainable, auditable, and accountable. With SAS® Intelligent Decisioning, teams can build agent workflows where every step, from product suggestions to issue escalation, is traceable and transparent.

This level of visibility supports:

  • Regulatory compliance.
  • Internal audit readiness.
  • Stakeholder confidence in autonomous decisions.
Fig. 2: Turning business logic into action – SAS Intelligent Decisioning transforms a visual decision flow into an intelligent agent accessible through an API.
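The traceability idea can be illustrated with a minimal sketch: a decision function that logs every rule it evaluates, so the path behind any outcome can be replayed for audit. The rules, field names and `decide` function are hypothetical stand-ins; in SAS Intelligent Decisioning the flow is built visually and published as an API:

```python
# Sketch of a traceable decision flow: every step is appended to an
# audit trail alongside its outcome.

def decide(customer):
    trail = []  # ordered record of every step taken and its outcome

    def step(name, outcome):
        trail.append({"step": name, "outcome": outcome})
        return outcome

    if not step("identity_verified", customer.get("verified", False)):
        return {"action": "escalate_to_human", "trail": trail}
    if step("open_dispute", customer.get("dispute", False)):
        return {"action": "escalate_to_human", "trail": trail}
    step("suggest_product", "loyalty_upgrade")
    return {"action": "suggest_product", "trail": trail}

result = decide({"verified": True, "dispute": False})
print(result["action"])                      # suggest_product
print([s["step"] for s in result["trail"]])  # full audit path
```

Because the trail is produced as a side effect of the decision itself, auditors see exactly what the agent saw, in the order it saw it.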

3. Strategic human-AI collaboration  

Effective trust architectures don’t just automate – they escalate. In high-stakes or ambiguous situations, intelligent systems need to know when to defer to human decision-makers.  

This kind of shared decision-making framework keeps AI efficient for routine tasks while people stay involved wherever judgment, empathy or ethical nuance is required, so human oversight remains central to critical decisions.

4. Dynamic trust maintenance  

Trust requires continuous cultivation. The AI agent platform must support real-time monitoring of performance metrics, fairness indicators and model drift.

Sophisticated feedback mechanisms incorporate user responses and internal stakeholder insights, enabling continuous refinement of models and decision logic so that AI agents evolve responsibly under changing conditions.

Fig. 3: Model cards – A clear, visual summary of a model’s performance, fairness, and governance readiness to support responsible AI decisions.

AI agents in action: Building a trusted call center agent 

Imagine a global telecom provider exploring how to transform its call center operations using an AI-powered virtual agent. The goals? Faster service, reduced wait times, and 24/7 support – without compromising customer trust. 

In this scenario, the company begins with high-quality, well-governed customer data, including call histories, interaction records, and built-in bias checks to ensure the system serves all demographics fairly. Every interaction the AI agent handles is based on transparent decision logic, enabling teams to trace responses, escalation triggers and resolution paths for audit and compliance. 

Crucially, the virtual agent is designed to know its limits. If a customer disputes a bill, shows signs of frustration, or requests to cancel a service, the system is built to escalate immediately to a human agent, ensuring empathy and reducing the risk of missteps in sensitive moments. 
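This “knows its limits” behavior can be sketched as a simple routing rule. The trigger list mirrors the scenario above (billing dispute, frustration, cancellation); keyword matching is a deliberately crude stand-in for real intent detection:

```python
# Sketch of an escalation rule: route a message to a human whenever any
# sensitive trigger appears; otherwise let the virtual agent handle it.

ESCALATION_TRIGGERS = ("dispute", "frustrated", "cancel")

def route(message):
    """Return which agent should handle the message."""
    text = message.lower()
    if any(trigger in text for trigger in ESCALATION_TRIGGERS):
        return "human_agent"
    return "virtual_agent"

print(route("I want to dispute this charge"))  # human_agent
print(route("What are your opening hours?"))   # virtual_agent
```

The key design choice is that escalation is checked before any automated handling, so sensitive moments reach a person first rather than after a failed bot exchange.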

The system doesn’t just launch and run on autopilot. In this vision, real-time monitoring tracks metrics like resolution rates, customer satisfaction and fairness across user groups. Continuous feedback from customers and human agents helps refine and improve the model over time, with updates made responsibly and transparently. 

By prioritizing transparency, data integrity, and thoughtful human-AI collaboration, this hypothetical provider doesn’t just deploy AI; it earns customer trust and lays the groundwork for responsible automation at scale.  

Trust isn’t optional; it’s the standard 

Operationalizing trust in AI agents requires continuous commitment across the entire lifecycle. As agents take on greater autonomy in high-stakes domains, organizations must move beyond principles and implement robust, lifecycle-wide mechanisms that embed transparency, accountability and human oversight from the ground up.  

SAS Viya provides the tools and frameworks to make this possible, transforming ethical intent into operational outcomes. From data integrity and decision transparency to strategic human-AI collaboration and continuous trust maintenance, this is how trustworthy agentic AI becomes real. As autonomous systems become ubiquitous, this kind of operational rigor is no longer optional – it’s essential for earning and maintaining public confidence. 

Learn more about how SAS drives technology innovation ethically and responsibly

Josefin Rosén

Principal Trustworthy AI Specialist, Data Ethics Practice

Dr. Josefin Rosén is the Principal Trustworthy AI Specialist with SAS’ Data Ethics Practice. With over 20 years of experience in AI and advanced analytics, she helps shape the company’s responsible innovation strategy and supports both employees and customers in implementing AI that advances human well-being and autonomy. Josefin holds a PhD in Chemometrics from the Faculty of Pharmacy at Uppsala University.

