Responsible innovation is more than a framework. It’s the discipline of building technology that earns trust.
As AI becomes more capable and more deeply embedded in business workflows, organizations must ensure that every decision, design choice, and workflow supports accountability, transparency, and human-centered outcomes.

The Responsible Innovator Spotlight is a new recognition program that celebrates employees who embody our values in their work. Whether you’re designing inclusive solutions, protecting privacy, or asking thoughtful questions that guide ethical decision-making, this program highlights how SASers are putting responsible innovation into practice.
Each spotlight features a colleague who exemplifies one or more of SAS’ six guiding principles: human-centricity, inclusivity, accountability, transparency, privacy and security, and robustness.
Sierra Shell is a UX designer at SAS whose work designing governance-focused AI user experiences demonstrates what it means to build AI systems that are not only powerful but also principled.
Don’t just take it from us; Sierra shared all the details herself.
What do you do and how are you innovating responsibly?
Sierra Shell: I’m a UX designer, and building a product from scratch is a new challenge for me! The primary way I strive to innovate responsibly is to help our users pause and reflect on their content while minimizing friction in other areas of the app.
In the world of AI governance, documentation serves as a pathway to accountability and transparency. It’s a delicate balance, creating pause points in the areas of the app where decisions carry high consequences for an organization governing AI. Overall, the app must feel frictionless, easy to use, and intuitive enough to fade into the background, making the user’s content king.
However, we can use design elements such as requests for confirmation, impact analyses before edits are made, and other features to help users consider the consequences of each decision as they make it. We trust our users to know what’s best for their organization; we just want to make the responsible decision the easy one.
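To make that idea concrete, here is a minimal sketch of the pattern Sierra describes – reserving intentional friction for high-consequence actions. Everything in it (the names, the types, the confirmWithImpactSummary helper) is illustrative, not SAS product code:

```typescript
// A hypothetical sketch of intentional friction: only high-consequence
// actions trigger a pause point; routine edits stay frictionless.
type Consequence = "routine" | "high";

interface GovernanceAction {
  label: string;
  consequence: Consequence;
  run: () => void;
}

// Stand-in for a dialog that shows an impact analysis and waits for the
// user's decision. In a real UI this would await an actual user choice.
async function confirmWithImpactSummary(label: string): Promise<boolean> {
  console.log(`Review the impact of "${label}" before continuing.`);
  return true;
}

async function execute(action: GovernanceAction): Promise<void> {
  if (action.consequence === "high") {
    // Intentional friction: ask the user to pause and reflect.
    const confirmed = await confirmWithImpactSummary(action.label);
    if (!confirmed) return;
  }
  action.run();
}

// Usage: a routine rename runs immediately; deleting a governed model
// first surfaces an impact summary.
execute({ label: "Rename draft", consequence: "routine", run: () => console.log("renamed") });
execute({ label: "Delete governed model", consequence: "high", run: () => console.log("deleted") });
```

The design choice here mirrors the balance Sierra describes: routine actions pay no friction tax, and only decisions with governance consequences trigger a pause.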
What motivated you to prioritize responsible innovation in this way?
Shell: I believe that design is often the “last mile” of digital governance. For example, the European Union requires that users be able to protect their privacy by disabling unnecessary cookies. But if that cookie pop-up is poorly designed, it can directly affect the choices users make at scale. Users rarely make the empowered choice to protect their privacy; the easier choice usually wins out.
If folks have to navigate a web of links or uncheck many boxes, they may give away their personal data just to get through the UI, even though they’d rather not. This example sticks with me – how can we make being responsible the default? How can we help our customers by making trustworthy AI governance the easiest choice?
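As a hypothetical illustration of that “responsible default” – not any real consent library – the privacy-protecting choice can simply be the starting state:

```typescript
// A hypothetical consent model where privacy is the default: every
// non-essential category starts disabled, so doing nothing protects the user.
interface ConsentCategory {
  name: string;
  essential: boolean;
  enabled: boolean;
}

function defaultConsent(names: string[]): ConsentCategory[] {
  return names.map((name) => {
    const essential = name === "strictly-necessary";
    return {
      name,
      essential,
      // Responsible default: only strictly necessary cookies start on.
      enabled: essential,
    };
  });
}

console.log(defaultConsent(["strictly-necessary", "analytics", "advertising"]));
```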
What are examples of times when you have personally embodied one of SAS’ six guiding principles for responsible innovation?
Shell: Of course, human-centricity is at the core of good user experience design. The field was once more commonly known as Human Factors or Human-Computer Interaction. That history points to an interesting problem: AI could easily replace a human’s role in many systems (though it may not work out well). How can we help users take advantage of AI features while keeping them in the loop?
I believe we keep users at the center of our products in two ways: first, by involving them early through user research and usability testing before release; and second, by engaging with other expert designers working on AI features to coordinate best practices that are just emerging. I have driven both efforts in our AI Governance design process. I created and distributed a survey to gather insights and helped facilitate user interviews, working with others on the DEP team.
I also recently gave a lecture to students in NC State’s Master of Graphic and Experience Design program about how we can use intentional friction to design AI features responsibly. Engaging with design students is especially important when it comes to emerging technologies, as they often adopt cutting-edge tools and processes without a second thought. We have as much to learn from them as they do from us.
What inspired you to focus on one or multiple of these principles?
Shell: Humans are what it’s all about. We are a large enterprise selling to other enterprises and businesses, but all of those product teams and business units are just people, in the end. Design is all about meeting people where they are, matching their expectations, rather than asking them to change their habits and priorities to fit our products.
As AI continues to mature, I hope we see more of these features and services meeting people where they are – addressing their existing needs and creating solutions, rather than hoping society conforms to AI’s emergence.
What is responsible innovation and why is it important to you? Why should others care?
Shell: It’s easy to focus on speed and scale when building AI tools, but responsible innovation asks us to build with care. I care about it because it gives design a moral dimension – a chance to protect people, elevate voices, and make complexity manageable. We should all care because the decisions we make now will shape how AI affects society for years to come.

