Business Case Study Interview for Data Analysts — 3 Frameworks + Real Examples (2026)


By Prakhar Shrivastava · April 16, 2026 · 9 min read
Quick Answer
Business case study questions are asked in roughly 65% of product-company data analyst interviews. Three frameworks cover about 90% of them: metric drop analysis, metric design, and A/B test design. The key is structured thinking — interviewers care more about HOW you approach the problem than whether you reach a perfect answer.

Technical skills get you through the SQL and Python rounds. Business case studies determine whether you get the offer. At product companies like Swiggy, Flipkart, Amazon, Zomato and PhonePe, the case study round is often the deciding factor — because it tests whether you can think like a business analyst, not just a coder.

The good news: 90% of case study questions use one of three frameworks. Master these three and you can confidently handle any case study question in any data analyst interview.

💡
What Do Interviewers Actually Evaluate?
In case study rounds, interviewers score: (1) Structured thinking — do you have a clear framework? (2) Clarifying questions — do you ask before assuming? (3) Business sense — do your metrics and hypotheses make real-world sense? (4) Data instinct — do you know what data to look for? (5) Communication — can you explain your thinking clearly? The ‘right answer’ matters much less than these five.

Framework 1 — Metric Drop Analysis

Question types: “DAU dropped 20%”, “Revenue fell last Tuesday”, “User retention decreased this month”, “Click-through rate halved”

Step 1: Clarify the problem

Before any analysis: What is the exact metric definition? What time period? Which platform (iOS/Android/Web)? Which region? Is this a 20% drop from yesterday, last week, or last month? Clarification shows maturity and prevents wasted work.

Step 2: Check data integrity first

Is this a real business problem or a data pipeline issue? Check: Did the tracking code change? Did we deploy a new analytics library? Are other metrics affected the same way? A surprising number of ‘metric drops’ are actually broken tracking.
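The tracking-health check above can be sketched in a few lines. This is a hypothetical illustration, not Swiggy's (or anyone's) actual pipeline: the metric names and thresholds are assumptions. The idea is that if the headline metric moved sharply while independent, correlated metrics stayed flat, instrumentation is the prime suspect.

```python
# Hypothetical sketch: flag a likely tracking break by comparing the change
# in the headline metric against changes in correlated metrics.
# All metric names and numbers below are illustrative assumptions.

def pct_change(before: float, after: float) -> float:
    """Relative change from `before` to `after` (e.g. -0.20 = a 20% drop)."""
    return (after - before) / before

def likely_tracking_issue(headline_change: float,
                          related_changes: dict[str, float],
                          threshold: float = 0.05) -> bool:
    """If the headline metric moved sharply but every correlated metric is
    roughly flat (within ±threshold), suspect broken instrumentation
    rather than a real business problem."""
    headline_moved = abs(headline_change) > threshold
    others_flat = all(abs(c) <= threshold for c in related_changes.values())
    return headline_moved and others_flat

# Logged DAU fell 20%, but orders, sessions and payments barely moved:
dau_change = pct_change(1_000_000, 800_000)          # -0.20
related = {"orders": -0.01, "sessions": 0.02, "payments": -0.02}
print(likely_tracking_issue(dau_change, related))    # True → audit the tracking code first
```

If the correlated metrics had dropped too, the same check returns False and the investigation moves on to real business causes.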

Step 3: Segment the data

Divide the drop by every dimension: region, device type, user cohort (new vs existing), product area, feature, time of day. The segment where the drop is concentrated is your most important clue.
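The segmentation step is mechanical once the data is in hand. A minimal pandas sketch, assuming a toy table of DAU per segment per period (the platforms, regions and counts are invented for illustration):

```python
# Hypothetical sketch of the segmentation step with pandas.
# `events` is an assumed summary table: DAU per platform/region/period.
import pandas as pd

events = pd.DataFrame({
    "period":   ["last_week"] * 4 + ["this_week"] * 4,
    "platform": ["ios", "android", "ios", "android"] * 2,
    "region":   ["blr", "blr", "mum", "mum"] * 2,
    "dau":      [100, 120, 90, 110,   60, 118, 55, 108],  # illustrative counts
})

# Pivot DAU by segment and compute the week-over-week change per segment.
by_segment = events.pivot_table(index=["platform", "region"],
                                columns="period", values="dau")
by_segment["pct_change"] = ((by_segment["this_week"] - by_segment["last_week"])
                            / by_segment["last_week"])

# Sort so the worst-hit segment floats to the top — that is the clue to chase.
print(by_segment.sort_values("pct_change"))
```

In this toy data the drop is concentrated on iOS in both cities while Android is flat, which immediately narrows the hypothesis space to something iOS-specific.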

Step 4: Generate and rank hypotheses

Once you know which segment is affected, list 3–5 hypotheses in order of likelihood: product change, external event (competitor launch, news), seasonal effect, infrastructure issue, fraud/abuse, demographic shift.

Step 5: Validate and recommend

For each top hypothesis: what data would confirm or deny it? Pull that data. Recommend: immediate fix + monitoring dashboard + prevention of recurrence.

Worked Example — Swiggy DAU Drop
Q: Swiggy app DAU dropped 18% last Saturday. How do you investigate?

Clarify: Only Saturday? All cities or specific? Both iOS and Android?
→ iOS only, Bengaluru and Mumbai only, just Saturday

Hypothesis: iOS app update deployed Friday. Check release notes.
→ A new version was released Friday 6pm. 3 bugs reported in checkout.

Recommendation: Roll back the iOS update immediately. Fix the bugs in staging and re-release with additional QA. Add post-release monitoring that tracks DAU by platform for the first few hours after every deployment.

Framework 2 — Metric Design

Question types: “What metrics would you define for X feature?”, “How would you measure success of Y product?”, “Design a metrics framework for our new subscription service.”

| Metric Layer | Definition | Examples |
| --- | --- | --- |
| North Star Metric | Single metric that best captures long-term value delivered to users | Swiggy: monthly active transactors. Netflix: hours watched per subscriber |
| Leading KPIs | Short-term signals that predict future North Star performance | New-user activation rate, day-7 retention, feature adoption rate |
| Lagging KPIs | Outcomes that confirm long-term success | Monthly revenue, market share, NPS |
| Guardrail Metrics | Metrics that must NOT be hurt even if primary metrics improve | Support ticket volume, error rate, page load time, churn rate |
⚠️
The Most Common Metric Design Mistake
Proposing too many metrics. Interviewers want to see that you can prioritise. Start with one North Star, then 3–5 KPIs maximum. Saying ‘we should track 20 metrics’ signals unfocused thinking. Saying ‘the one metric that matters most for this feature is X, because…’ signals senior analytical thinking.

Framework 3 — A/B Test Design

Question types: “How would you test if Feature X improves retention?”, “Design an experiment for the new onboarding flow”, “How would you validate this product decision?”

Step 1: State the hypothesis clearly

H0 (null): The new onboarding flow has no effect on Day-30 retention. H1: The new onboarding flow increases Day-30 retention. Primary metric: 30-day retention rate. Guardrail: Day-1 retention must not decrease.

Step 2: Define randomisation unit

User-level (by user_id) — not session level. Users should consistently see one version throughout. If testing a social feature, cluster randomisation may be needed to avoid spillover effects.
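User-level assignment is often implemented by hashing the user ID together with an experiment name, so the same user always lands in the same bucket without storing any state. A minimal sketch (the experiment name and 50/50 split are illustrative assumptions, not a specific company's system):

```python
# Deterministic user-level assignment: hash user_id + experiment name into a
# bucket 0–99. Same user → same bucket → same variant on every visit.
import hashlib

def assign_variant(user_id: str, experiment: str = "onboarding_v2") -> str:
    """50/50 treatment/control split; experiment name salts the hash so
    different experiments get independent assignments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < 50 else "control"

# Stable across calls, and roughly balanced across many users:
assert assign_variant("user_42") == assign_variant("user_42")
treated = sum(assign_variant(f"user_{i}") == "treatment" for i in range(10_000))
print(treated)  # close to 5,000
```

Salting by experiment name matters: without it, the same users would always fall in the treatment arm of every experiment, correlating results across tests.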

Step 3: Calculate required sample size

Inputs: baseline retention (e.g., 25%), minimum detectable effect (e.g., a 10% relative lift = 2.5 percentage points), power (80%), and significance level (5%). Calculate with Python or an online calculator; the resulting sample size determines how long the test must run.
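The calculation itself is the standard two-proportion sample-size formula. A sketch using only the Python standard library (`statistics.NormalDist`), with the inputs from the step above:

```python
# Two-proportion sample-size formula (normal approximation), stdlib only.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed per arm to detect a shift from p1 to p2 (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ≈ 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Baseline retention 25%, 10% relative lift → target 27.5%:
n = sample_size_per_arm(0.25, 0.275)
print(n)  # roughly 4,900 users per arm
```

Dividing twice this number by the daily count of eligible new users gives the enrolment period, which feeds directly into the test-duration step.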

Step 4: Define test duration

Run for at least 1–2 full business cycles (usually 2 weeks minimum), and do not stop early even if results look good. Note that a Day-30 retention metric can only be observed 30 days after each user enters the test, so final results arrive at least 30 days after the last cohort is enrolled.

Step 5: Plan the analysis

Pre-register the analysis approach: a two-proportion z-test (or t-test) on the primary metric, correction for multiple comparisons if several metrics are tested, and planned segment analyses (new vs returning users, mobile vs desktop, region).
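The primary analysis from the plan above can be sketched as a pooled two-proportion z-test, again stdlib-only. The retention counts are invented for illustration:

```python
# Pooled two-proportion z-test for H0: p_control == p_treatment.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a: int, n_a: int,
                         success_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) comparing two retention rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control: 1,200 of 5,000 retained (24%); Treatment: 1,350 of 5,000 (27%):
z, p = two_proportion_ztest(1_200, 5_000, 1_350, 5_000)
print(round(z, 2), round(p, 4))  # z ≈ 3.44, p well below 0.05 → reject H0
```

If several metrics are tested, the p-value threshold should be tightened (e.g. a Bonferroni correction divides alpha by the number of comparisons), exactly as the pre-registered plan specifies.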

Practice Questions — Business Case Studies

| Question | Company | Framework to Use |
| --- | --- | --- |
| Zomato’s restaurant search CTR dropped 25% on weekends — investigate | Zomato | Metric Drop |
| Define success metrics for Flipkart’s new ‘Try Before Buy’ feature | Flipkart | Metric Design |
| Design an experiment to test if Swiggy Genie (courier service) improves 7-day retention | Swiggy | A/B Test Design |
| Amazon India’s return rate increased 8% last month — what happened? | Amazon | Metric Drop |
| How would you measure if PhonePe’s new UPI autopay feature is successful? | PhonePe | Metric Design |
🎯
The Golden Rule of Case Study Interviews
Always clarify before you analyse. The most impressive thing you can do in a case study round is pause and ask 2–3 specific clarifying questions before diving in. It signals: (1) You don’t make assumptions, (2) You understand that business context shapes analysis, (3) You think like a senior analyst. Interviewers will often volunteer key information when you ask the right questions.

⭐ Key Takeaways

  • Three frameworks cover 90% of case study questions: metric drop, metric design, and A/B test design
  • Always clarify the problem before diving into analysis — this is the highest-signal behaviour in case study rounds
  • Metric Drop: clarify → check data integrity → segment → hypothesise → validate → recommend
  • Metric Design: North Star → Leading KPIs → Lagging KPIs → Guardrail metrics — prioritise, don’t list everything
  • A/B Test Design: hypothesis → randomisation unit → sample size → duration → analysis plan
  • Interviewers evaluate structured thinking and business sense more than whether you reach the ‘right’ answer
❓ Frequently Asked Questions
What is a business case study in a data analyst interview?
A business case study in a data analyst interview is a scenario-based question where you must use analytical frameworks and data thinking to solve a real business problem. Examples: ‘Daily active users dropped 20% — how would you investigate?’, ‘Design metrics for our new loyalty programme’, or ‘Should we expand to tier-3 cities?’. The interviewer evaluates your structured thinking, business sense, and data intuition — not just technical SQL/Python skills.
What case study frameworks should a data analyst know?
The three essential case study frameworks for data analyst interviews: (1) Metric Drop Framework — for investigating why a KPI decreased (Clarify → Segment → Hypothesise → Validate → Recommend). (2) Metric Design Framework — for defining success of a product/feature (Goal → North Star → Leading KPIs → Lagging KPIs → Guardrail metrics). (3) A/B Test Design Framework — for planning experiments (Hypothesis → Randomisation unit → Sample size → Duration → Analysis plan).
How do you answer ‘DAU dropped 20%’ in a data analyst interview?
Step 1: Clarify — what time period? Which platform? Which metric definition? Step 2: Check data integrity — could this be a tracking issue or data pipeline problem? Step 3: Segment — break down by region, device, user cohort, feature area. Step 4: Hypothesise — external event? Product change? Competitor action? Step 5: Validate each hypothesis with data. Step 6: Recommend — fix + monitoring plan. Always clarify before jumping to analysis.

Practice case studies with a real interviewer

Our mock sessions include case study rounds with live feedback — exactly like Swiggy, Flipkart and Amazon interviews.

Prakhar Shrivastava
Founder · 10+ years in analytics · 800+ candidates mentored
Former analytics lead at top product companies. Helping India’s data analysts crack interviews through structured, practical preparation.
