Data Analyst
Product Design Interview: How to Walk Through Your Case Study and What Data Analysts Must Do Differently in 2026
29/04/2026
12 min read
Most data analyst candidates who clear the SQL round and the Python round still get rejected at the case study stage. Not because they lack analytical ability, but because they do not know how to walk through a product design case study in a way that shows business thinking alongside technical skill. At companies like Swiggy, CRED, and Razorpay, the case study round is where offers are won or lost. And yet, most candidates treat it like a quiz — they answer the question asked, sit back, and wait. That is exactly the wrong approach. This post will show you how to walk through a product design case study the way a senior analyst would, and why getting this right in 2026 is the single biggest lever on your interview success.
What a Product Design Case Study Actually Means in a Data Analyst Interview
A product design case study in an analytics interview is not asking you to design a UI. It is asking you to think like a product manager who has data as a superpower. The interviewer presents a vague business problem — something like “how would you design a referral program for Meesho?” or “how would you measure the success of a new Zomato feature?” — and they want to see how your mind moves from ambiguity to structure to insight.
In 2026, this format has become far more common across Indian product companies because the line between product analyst and business analyst is blurring fast. Swiggy, Zepto, PhonePe, and CRED are all hiring analysts who can sit in a product review meeting and contribute more than just a dashboard. They want people who can frame the problem, define the metrics, challenge assumptions, and ultimately drive a decision. The case study round is a simulation of exactly that.
What makes this format hard is that there is no single correct answer. The interviewer is evaluating your process, not just your conclusion. A strong candidate who reaches an average answer with excellent structure will almost always beat a candidate who jumps to the right answer without showing their reasoning.
Note
Most candidates prepare for product design case studies by memorising frameworks like HEART or AARRR. That is necessary but not sufficient. What interviewers at top companies are actually evaluating is whether you instinctively define success before jumping into solutions. Candidates who say “before I design anything, let me clarify what problem we are actually solving” score far higher than those who immediately start listing features.
How the Shift Toward Product Thinking Is Changing Analyst Hiring in India Right Now
Three years ago, a data analyst interview at most Indian startups was 70 percent SQL, 20 percent Python, and maybe one vague business question at the end. Today, that split has shifted dramatically. Companies like CRED, Juspay, and Razorpay now dedicate an entire interview round — sometimes two — to product thinking and case study walkthroughs. The reason is straightforward: these companies have matured past the “we need someone to pull data” stage. They need analysts who can influence roadmaps.
When Zepto expanded from 10 cities to 30 cities in under 18 months, their analytics team was not just tracking delivery times in a spreadsheet. They were designing experiments to understand how customer behaviour changed in tier-2 markets, proposing new success metrics for dark store performance, and flagging that the North Star metric used in Mumbai was misleading in Jaipur. That kind of thinking does not come from knowing SQL alone. It comes from product design thinking applied to data problems.
The hiring bar at Indian product companies in 2026 reflects this reality. Glassdoor and LinkedIn job postings for senior analyst roles at Flipkart and PhonePe now explicitly list “ability to drive product decisions through data” as a core requirement. Salaries for analysts with strong product thinking skills at these companies range from 18 to 35 lakhs per annum at the mid-level, compared to 10 to 16 lakhs for analysts who are purely technical. The premium for this skill is real and growing.
Interview Questions This Topic Generates at Top Companies
Companies like Flipkart, PhonePe, and Swiggy build their case study rounds specifically to test whether a candidate can walk through a product problem with the discipline of an analyst and the perspective of a product manager. The questions are rarely about a single metric or a single query. They are deliberately open-ended because the interviewer wants to see how you create structure where none exists. Below are five types of questions you will encounter, along with how a strong candidate approaches each one.
How would you design a loyalty program for Paytm and define whether it is working after 90 days?
Start by asking clarifying questions: what is the primary goal — retention, spend increase, or activation of dormant users? Then define your success metrics before any solution design. A strong answer separates input metrics (program participation rate) from output metrics (30-day retention lift among participants) and leading indicators (redemption rate in the first two weeks). The interviewer wants to see that you would not declare success based on sign-ups alone.
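The input/output split described above can be sketched in a few lines of pandas. This is a hypothetical illustration — the column names (`is_participant`, `retained_30d`) and the toy data are invented, not a real Paytm schema — but it shows why participation rate alone cannot stand in for retention lift.

```python
# Hypothetical sketch: separating an input metric (participation rate)
# from an output metric (30-day retention lift). All names and numbers
# are illustrative, not real Paytm data.
import pandas as pd

users = pd.DataFrame({
    'user_id': range(1, 11),
    'is_participant': [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    'retained_30d':   [1, 1, 1, 0, 1, 0, 0, 1, 0, 0],
})

participation_rate = users['is_participant'].mean()           # input metric
retention = users.groupby('is_participant')['retained_30d'].mean()
retention_lift = retention[1] - retention[0]                  # output metric

print(f"Participation rate: {participation_rate:.0%}")
print(f"Retention lift among participants: {retention_lift:+.0%}")
```

A high participation rate with zero retention lift would mean users are signing up for the program but not changing behaviour — exactly the failure mode the interviewer wants you to anticipate.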
Write a SQL query to identify users who used a new feature at least once but never returned to it after their first use.
This is a feature adoption funnel problem. Aggregate interactions per user within the analysis window and keep users with exactly one interaction — either a GROUP BY with HAVING COUNT(*) = 1, or a COUNT window function partitioned by user. The trap most candidates fall into is not accounting for users who used the feature once and then churned from the platform entirely — that is a different problem from feature abandonment, and conflating the two will mislead any product decision.
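The same pattern, including the platform-churn distinction, can be sketched in pandas with a toy dataset (all user IDs, dates, and column names below are invented for illustration):

```python
# Illustrative sketch: single-use feature users, split into feature
# abandoners vs users who churned from the platform entirely.
# All data and column names are invented.
import pandas as pd

events = pd.DataFrame({
    'user_id': [1, 1, 2, 3, 3, 3, 4],
    'event_date': pd.to_datetime([
        '2026-01-01', '2026-01-05', '2026-01-02',
        '2026-01-01', '2026-01-03', '2026-01-09', '2026-01-04',
    ]),
})

# Users with exactly one feature interaction in the window
usage_counts = events.groupby('user_id').size()
single_use_users = usage_counts[usage_counts == 1].index.tolist()

# Last platform-wide activity per user (hypothetical secondary source)
platform_last_seen = pd.Series(
    pd.to_datetime(['2026-02-01', '2026-01-02', '2026-02-05', '2026-02-03']),
    index=[1, 2, 3, 4], name='last_seen')
window_end = pd.Timestamp('2026-01-31')

# Feature abandoners: used once but still active on the platform afterwards
abandoners = [u for u in single_use_users if platform_last_seen[u] > window_end]
```

Here user 4 abandoned the feature while staying on the platform, whereas user 2 churned entirely — two populations that call for different product responses.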
Swiggy wants to add a “scheduled delivery” feature. How would you decide whether to build it?
Use the opportunity sizing framework: estimate the addressable user base for this feature, quantify the potential revenue or retention impact, then weigh it against engineering cost. A strong answer would also propose a lightweight experiment — maybe a survey to a cohort of high-frequency users — before full development commitment. Interviewers at companies like Swiggy want to see that you think about resource trade-offs, not just user benefit.
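Opportunity sizing is ultimately back-of-envelope arithmetic, and being able to do it out loud is part of the test. A minimal sketch, in which every number is a placeholder you would replace with real estimates from the interviewer or your own stated assumptions:

```python
# Back-of-envelope opportunity sizing. Every figure here is a made-up
# placeholder, not a real Swiggy estimate.
monthly_active_users = 10_000_000
pct_interested = 0.08            # assumed share of users who would schedule deliveries
orders_per_interested_user = 2   # assumed incremental orders per month
contribution_per_order = 30      # assumed contribution margin per order, in rupees

monthly_upside = (monthly_active_users * pct_interested
                  * orders_per_interested_user * contribution_per_order)

engineering_cost = 20_000_000    # assumed one-time build cost, in rupees
payback_months = engineering_cost / monthly_upside
print(f"Monthly upside: ₹{monthly_upside:,.0f}, payback ≈ {payback_months:.1f} months")
```

Stating each assumption explicitly, then showing how sensitive the payback period is to the weakest one, demonstrates exactly the resource trade-off thinking the question is probing.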
A product manager tells you that DAU is up 12 percent this week. How do you respond?
This tests your instinct to question surface-level metrics. A strong response immediately asks: is this organic or driven by a campaign? Is the growth uniform across segments or concentrated in one cohort? What is happening to quality metrics like session length and conversion alongside the DAU number? The interviewer is testing whether you are someone who validates signals or just reports them.
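The segment decomposition check is easy to demonstrate concretely. In this invented example, a 12 percent headline lift hides the fact that organic usage is flat and the growth is entirely campaign-driven:

```python
# Sketch of a first sanity check on a DAU lift: is it uniform across
# segments or concentrated in one cohort? Segment names and numbers
# are invented for illustration.
import pandas as pd

dau = pd.DataFrame({
    'segment': ['organic', 'organic', 'campaign', 'campaign'],
    'week':    ['prev', 'curr', 'prev', 'curr'],
    'dau':     [800_000, 804_000, 100_000, 204_000],
})

pivot = dau.pivot(index='segment', columns='week', values='dau')
pivot['lift_pct'] = (pivot['curr'] / pivot['prev'] - 1) * 100
print(pivot)
```

The blended number (+12 percent) looks healthy, but organic DAU moved only 0.5 percent. Pairing this with quality metrics like session length turns "DAU is up" into an actual finding.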
You are handed a dataset for a new CRED feature and the numbers look unusually clean. What do you do before building any analysis?
Unusually clean data is a red flag, not a gift. Check for logging errors, sampling bias, or a truncated event window. Verify that the event schema matches the product specification — sometimes features are built but tracking is not wired correctly. A strong candidate names specific checks: null rates per column, distribution of timestamps, cross-referencing with a secondary source like server logs or payment records.
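Two of the checks named above — null rates per column and the timestamp distribution — take only a few lines. The dataset and column names here are illustrative:

```python
# Sketch of basic "too clean" checks: null rates per column and the
# distribution of event timestamps. Data and column names are invented.
import pandas as pd

df = pd.DataFrame({
    'user_id':  [1, 2, 3, None, 5],
    'event_ts': pd.to_datetime(['2026-03-01', '2026-03-01', '2026-03-01',
                                '2026-03-01', '2026-03-02']),
    'amount':   [100.0, 250.0, 80.0, 40.0, None],
})

null_rates = df.isna().mean()                              # null rate per column
ts_by_day = df['event_ts'].dt.date.value_counts().sort_index()
print(null_rates)
print(ts_by_day)  # one dominant day can signal a truncated event window
```

A perfectly uniform null rate of zero, or events piled onto a single day, would be the first thing to flag before any analysis is built on the data.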
Interview Tip
When walking through a product design case study, lead with the “why before the what.” Before you propose any metric or framework, state the business objective in your own words and confirm it with the interviewer. This single habit — taking 30 seconds to reframe the problem before solving it — is what separates candidates who get offers from those who give technically correct but contextually weak answers. Interviewers at Razorpay and Meesho have explicitly said in mock feedback sessions that they remember candidates who challenged their own assumptions out loud.
SQL You Need to Know for Product Design Case Studies
In a product design case study round, SQL questions are typically tied to a feature adoption or retention problem. Imagine you are at PhonePe and the product team has just launched a new “Save and Pay Later” feature. Your job is to identify users who adopted the feature in the first week but dropped off before week four — the “early adopter churn” cohort — because this segment holds the key to understanding what is broken in the product experience. Here is how that query looks in practice.
-- Identifying early adopter churn for a new PhonePe feature
-- We want users who used the feature in week 1 but had zero activity in weeks 3 and 4
WITH feature_events AS (
    SELECT
        user_id,
        event_date,
        DATE_TRUNC('week', event_date) AS event_week,
        DATEDIFF('day', feature_launch_date, event_date) AS days_since_launch
    FROM user_feature_events
    WHERE feature_name = 'save_and_pay_later'
      AND event_date >= feature_launch_date
      AND event_date < DATEADD('day', 28, feature_launch_date)
),
week1_adopters AS (
    SELECT DISTINCT user_id
    FROM feature_events
    WHERE days_since_launch BETWEEN 0 AND 6
),
week3_week4_active AS (
    SELECT DISTINCT user_id
    FROM feature_events
    WHERE days_since_launch BETWEEN 14 AND 27
),
early_adopter_churn AS (
    SELECT
        w1.user_id,
        COUNT(fe.event_date) AS week1_usage_count
    FROM week1_adopters w1
    LEFT JOIN week3_week4_active w34
        ON w1.user_id = w34.user_id
    LEFT JOIN feature_events fe
        ON w1.user_id = fe.user_id
        AND fe.days_since_launch BETWEEN 0 AND 6
    WHERE w34.user_id IS NULL
    GROUP BY w1.user_id
)
SELECT
    user_id,
    week1_usage_count,
    CASE
        WHEN week1_usage_count = 1 THEN 'single_touch'
        WHEN week1_usage_count BETWEEN 2 AND 4 THEN 'light_adopter'
        ELSE 'heavy_adopter'
    END AS adopter_segment
FROM early_adopter_churn
ORDER BY week1_usage_count DESC;
The output segments early adopters who churned by how intensely they used the feature in week one. If heavy adopters are churning as much as single-touch users, the problem is likely in the product experience itself — perhaps in week two, the feature had a bug or a UX issue that even motivated users could not push through. If only single-touch users churn, the problem is different: the feature is fine, but onboarding is not creating enough habit in the first session. This distinction drives completely different product decisions, and explaining it clearly to a non-technical interviewer is just as important as writing the query correctly.
Common Mistake
Many candidates write the week one adopter filter correctly but forget to use a LEFT JOIN when identifying week three and four activity. If you use an INNER JOIN, you will only return users who were active in both periods — the exact opposite of what you want. You want users who were active in week one and absent in weeks three and four, which requires keeping all week one users and filtering for NULLs on the right side of the join. This is one of the most frequently failed SQL patterns in product analytics rounds.
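The anti-join pattern this tip describes is worth internalising, and it shows up in pandas interviews too. A tiny demonstration with invented user IDs:

```python
# Tiny demonstration of the anti-join described above: keep all week-1
# adopters, then filter for users with no week 3-4 activity. User IDs
# are invented.
import pandas as pd

week1 = pd.DataFrame({'user_id': [1, 2, 3, 4]})
week3_4 = pd.DataFrame({'user_id': [2, 4]})

# INNER JOIN returns the retained users — the opposite of churn
retained = week1.merge(week3_4, on='user_id', how='inner')

# LEFT JOIN + null-side filter returns the churned users
merged = week1.merge(week3_4, on='user_id', how='left', indicator=True)
churned = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
```

The `indicator=True` flag plays the role of the `WHERE w34.user_id IS NULL` filter in SQL: rows marked `left_only` are week-1 adopters with no match in the week 3-4 set.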
Python for This Topic: What Analysts Actually Do
In a product design interview, Python questions tend to focus on exploratory analysis and visualisation of a feature's adoption curve. A common scenario: you have been given a dataset of user interactions with Zomato's new "Group Order" feature, and you need to understand whether adoption is healthy or whether there are segments of users who tried it once and never returned. The business question is whether the feature has genuine product-market fit or whether it is riding on novelty. Here is how you approach that in Python.
# Analysing Group Order feature adoption curve for Zomato
# Business question: is retention healthy or driven purely by novelty?
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Load event data
df = pd.read_csv('group_order_events.csv', parse_dates=['event_date'])
# Define feature launch date
launch_date = pd.Timestamp('2026-01-15')
# Calculate days since launch for each event
df['days_since_launch'] = (df['event_date'] - launch_date).dt.days
# Filter to first 60 days post launch
df_filtered = df[df['days_since_launch'].between(0, 59)].copy()
# Count distinct users per day since launch
daily_active_users = (
    df_filtered.groupby('days_since_launch')['user_id']
    .nunique()
    .reset_index()
    .rename(columns={'user_id': 'dau'})
)
# Calculate 7-day rolling average to smooth noise
daily_active_users['dau_7day_avg'] = (
    daily_active_users['dau'].rolling(window=7, min_periods=1).mean()
)
# Identify users by their first use cohort week
df_filtered['first_use_week'] = (
    df_filtered.groupby('user_id')['days_since_launch'].transform('min') // 7
)
# Calculate retention rate per cohort: did they return after their first week?
cohort_retention = (
    df_filtered.groupby(['first_use_week', 'user_id'])['days_since_launch']
    .max()
    .reset_index()
)
# A user "returned" if their last event falls on or after the first day of the
# following cohort week (>= rather than >, so day-7 returners are not dropped)
cohort_retention['returned_after_week1'] = cohort_retention['days_since_launch'] >= (
    (cohort_retention['first_use_week'] + 1) * 7
)
cohort_summary = (
    cohort_retention.groupby('first_use_week')['returned_after_week1']
    .mean()
    .reset_index()
    .rename(columns={'returned_after_week1': 'retention_rate'})
)
# Plot DAU curve and cohort retention side by side
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
axes[0].plot(daily_active_users['days_since_launch'], daily_active_users['dau'], alpha=0.4, label='Daily DAU')
axes[0].plot(daily_active_users['days_since_launch'], daily_active_users['dau_7day_avg'], linewidth=2, label='7-Day Rolling Avg')
axes[0].set_title('Group Order Feature: Daily Active Users (First 60 Days)')
axes[0].set_xlabel('Days Since Launch')
axes[0].set_ylabel('Unique Users')
axes[0].legend()
axes[1].bar(cohort_summary['first_use_week'], cohort_summary['retention_rate'])
axes[1].set_title('Retention After First Week by Adoption Cohort')
axes[1].set_xlabel('First Use Week')
axes[1].set_ylabel('Retention Rate')
plt.tight_layout()
plt.show()