
In this interview, we speak with Preetham Reddy Kaukuntla, Staff Data Scientist at Glassdoor, about navigating the evolving demands of AI-driven decision-making. Preetham shares how statistical analysis, experimentation, and machine learning converge to deliver measurable impact, and offers insights into mentoring data scientists toward business-oriented thinking. From balancing short-term results with long-term scalability to shaping the future role of AI leadership, his perspectives shed light on both the strategic and practical sides of data science.
Your journey reflects a strong blend of statistics, experimentation, and machine learning. Can you walk us through a defining moment where these pillars converged to deliver a critical business impact?
One defining moment came during the overhaul of our notification platform at Glassdoor, where the challenge was to improve engagement without increasing message fatigue. We began with statistical analysis of historical engagement data, which revealed key behavioral segments, such as high-value job seekers who responded to certain job types at specific times of day. This step identified not only the “what” but also plausible “why” patterns behind dips in engagement.
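As a toy illustration of that first step, the cut that surfaces “who responds to what, and when” is essentially a grouped click-through-rate table. The columns and values below are invented for the sketch; the real analysis would run over historical engagement logs.

```python
import pandas as pd

# Hypothetical engagement log: one row per notification event
events = pd.DataFrame({
    "user_segment": ["high_value", "high_value", "casual", "casual"],
    "job_type":     ["engineering", "engineering", "retail", "retail"],
    "send_hour":    [9, 18, 9, 18],
    "clicked":      [1, 0, 0, 1],
})

# Click-through rate by segment, job type, and hour of send
ctr = (events
       .groupby(["user_segment", "job_type", "send_hour"])["clicked"]
       .mean()
       .rename("ctr"))
print(ctr)
```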
From there, we designed controlled experiments to test different suppression rules, timing adjustments, and content variations. One test, for example, compared daily versus adaptive send schedules for top segments, measuring click-through, apply starts, and churn over several weeks.
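Reading out such a test comes down to comparing conversion rates across arms. Below is a minimal sketch of a two-proportion z-test on click-through rate; the arm sizes and click counts are hypothetical, and in practice the readout would also cover apply starts and churn over the full test window.

```python
import math

def two_proportion_ztest(clicks_a, sends_a, clicks_b, sends_b):
    """Two-sided z-test for a difference in click-through rates."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical arm-level counts: daily schedule vs. adaptive schedule
ctr_daily, ctr_adaptive, z, p = two_proportion_ztest(
    clicks_a=4_200, sends_a=100_000,   # daily arm
    clicks_b=4_750, sends_b=100_000,   # adaptive arm
)
print(f"daily CTR={ctr_daily:.3%}, adaptive CTR={ctr_adaptive:.3%}, "
      f"z={z:.2f}, p={p:.4f}")
```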
The winning strategies were then operationalized into an ML-driven targeting pipeline that dynamically adjusted send frequency and ranking based on real-time engagement scores. Within three months, the system reduced redundant sends by 30%, saved $150K annually in email costs, and increased application starts from notifications by 18%, a clear example of how statistics, experimentation, and machine learning can build on each other to deliver measurable business value.
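One way to picture the targeting pipeline’s frequency logic is as a policy function over a real-time engagement score; the score range, thresholds, and caps below are stand-ins for illustration, not the production values.

```python
from dataclasses import dataclass

@dataclass
class UserEngagement:
    user_id: str
    engagement_score: float  # assumed to be a 0-1 score from the ML model

def daily_send_cap(user: UserEngagement) -> int:
    """Map a real-time engagement score to a per-day notification cap.

    Thresholds are illustrative; in a production pipeline they would be
    tuned from experiment results rather than hard-coded.
    """
    if user.engagement_score >= 0.8:
        return 3   # highly engaged: tolerate more sends
    if user.engagement_score >= 0.4:
        return 1   # moderately engaged: one targeted send per day
    return 0       # disengaged: suppress to avoid message fatigue

print(daily_send_cap(UserEngagement("u123", 0.85)))  # -> 3
```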
As a Staff Data Scientist, leadership goes beyond technical skills. How do you mentor junior data scientists to develop a business-oriented mindset?
I encourage junior data scientists to think of themselves as partners in decision-making, not just technical executors. We start by clearly defining the business context: what decision is at stake, who is making it, and how success will be measured. This framing helps shift the mindset from “I’m building a model” to “I’m influencing an outcome.”
We also focus heavily on trade-offs. For example, a marginal accuracy gain may not be worth the added complexity if it delays deployment or erodes interpretability. I ask them to always consider the “last mile”: how their work will be consumed, by whom, and under what constraints.
One practical exercise I use is having them present their findings twice, once to a technical audience and once to a business audience. The ability to adapt the same insight for two very different groups is a skill that multiplies their impact. Over time, they learn that influence and trust often matter more than technical sophistication alone.
Building end-to-end AI solutions often requires balancing short-term deliverables with long-term scalability. How do you manage this tension?
The tension between speed and sustainability is a constant in AI projects. My approach is to run two parallel tracks: one focused on delivering something tangible quickly, and another on building the infrastructure and processes that will allow the solution to evolve without breaking later.
In the short-term track, we aim for functional prototypes, minimal but useful solutions that prove value early. In the long-term track, we invest in data quality, architecture design, and automation, knowing that these investments prevent future bottlenecks.
What makes this work is transparency. I regularly share with stakeholders the risks of neglecting scalability and the benefits of doing foundational work early. When they see that this approach reduces rework and accelerates future launches, it becomes much easier to secure buy-in. In the end, the fastest way to deliver long-term impact is to plan for it from day one.
Can you share an example of a project where the impact wasn’t immediately visible but proved transformative over time?
A good example is the development of ML-driven ranking models for Glassdoor’s community content. Initially, the project’s metrics looked flat because the algorithm prioritized relevance and quality over volume, meaning fewer but more targeted posts were shown.
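One way to picture such a ranking model is as a blended score that weights quality alongside relevance rather than optimizing raw engagement; the weights and fields below are hypothetical, not the production model.

```python
def rank_posts(posts, w_relevance=0.6, w_quality=0.4):
    """Order community posts by a blended relevance/quality score.

    `posts` is a list of dicts with `relevance` and `quality` in [0, 1];
    the weights are illustrative placeholders.
    """
    def score(post):
        return w_relevance * post["relevance"] + w_quality * post["quality"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "a", "relevance": 0.9, "quality": 0.3},  # popular but low quality
    {"id": "b", "relevance": 0.7, "quality": 0.9},  # targeted and substantive
]
print([p["id"] for p in rank_posts(posts)])  # -> ['b', 'a']
```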
In the first month, engagement per session didn’t spike, and some stakeholders questioned the shift. However, over the next six months, we saw a 25% increase in meaningful participation (multi-comment threads with job-related discussion), 15% growth in repeat community visits, and a notable lift in sentiment scores from user surveys.
This slow-burn success came from focusing on long-term user value rather than immediate clicks. It also reduced moderation overhead by 20% because surfacing higher-quality posts led to fewer reports and disputes. Today, the ML-ranking framework is a cornerstone of our community strategy, influencing not only which posts are shown but also how we recommend discussions in email and push channels.
How do you decide between model complexity and interpretability in high-stakes scenarios?
I view complexity as a tool, not a default. The starting point is always the simplest approach that can credibly meet the objective. Simpler models have advantages: they’re easier to explain, maintain, debug, and audit.
In high-stakes environments, whether the risk is financial, reputational, or regulatory, interpretability often takes priority over a small boost in predictive accuracy. That’s because the cost of a wrong decision isn’t just an error rate; it’s the trust of the people relying on the output.
That said, complexity is not off the table. If it delivers a substantial and justifiable improvement, we’ll use it, but it must come with mechanisms for explanation and oversight. In other words, complexity has to earn its place.
What excites you most about the evolving intersection of AI and business decision-making?
We’re entering an era where AI systems can move from being passive observers to active participants in decision-making. Instead of just providing analysis, they can simulate scenarios, recommend actions, and predict downstream impacts in real time. This creates opportunities for more adaptive, forward-looking strategies.
What excites me most is the potential for collaborative intelligence, where AI handles scale and pattern recognition, and humans bring context, ethics, and judgment. The real transformation will happen when these systems are designed not only for accuracy but also for clarity and alignment with organizational values. That’s where AI stops being just a tool and becomes a trusted partner in shaping direction.
With AI tools democratizing data access, how do you see the Staff Data Scientist role evolving in the next five years?
The role will shift from “builder” to “architect.” As automation, pre-trained models, and no-code tools become more capable, the differentiator for senior data scientists will be problem selection, solution design, and governance.
I see Staff Data Scientists spending more time orchestrating multi-model ecosystems, ensuring systems are fair and explainable, and guiding cross-functional teams in using AI responsibly. We’ll also be the ones setting guardrails: defining what problems AI should solve, how it should be evaluated, and when human intervention is necessary.
In other words, the job will be less about producing outputs and more about ensuring that the outputs produced are the right ones.
How do you foster a culture of continuous learning and experimentation within data science teams?
It begins with lowering the barriers to experimentation. Teams need access to clean data, the right tools, and frameworks that make testing ideas straightforward. But infrastructure alone isn’t enough; you also have to shape the mindset.
I make it clear that failed tests are not failures if they produce learning. We hold regular “learning showcases” where people share experiments that didn’t work as expected, along with the insights gained. This normalizes the idea that progress is built on iteration.
Over time, this creates an environment where curiosity is rewarded, risk-taking is supported, and innovation is constant, not just something we do when there’s extra time.
If you were to design a “Data Science Leadership Playbook,” what would the first three chapters be?
- Define the Problem With Precision – Vague questions lead to vague answers. Invest time in sharpening the question before starting the analysis.
- Earn Trust Relentlessly – Your influence comes from credibility. Be transparent, deliver consistently, and own both successes and mistakes.
- Lead Through Others – Multiply your impact by empowering your team to think independently, make decisions, and take ownership.
Finally, what’s a personal mantra you rely on when navigating complex, ambiguous challenges?
“Progress over perfection, clarity through iteration.” I’ve learned that waiting for the perfect solution often means missing the window for impact. Instead, I focus on taking the best next step with the information available, measuring the outcome, and refining from there. This approach keeps momentum alive and creates space to adapt without losing direction.
In fast-moving environments, adaptability is as important as accuracy, and iteration is how you achieve both.

