Business Case Study Interview for Data Analysts — 3 Frameworks + Real Examples (2026)
Technical skills get you through the SQL and Python rounds. Business case studies determine whether you get the offer. At product companies like Swiggy, Flipkart, Amazon, Zomato and PhonePe, the case study round is often the deciding factor — because it tests whether you can think like a business analyst, not just a coder.
The good news: roughly 90% of case study questions map to one of three frameworks. Master these three and you can confidently handle nearly any case study in a data analyst interview.
Framework 1 — Metric Drop Analysis
Question types: “DAU dropped 20%”, “Revenue fell last Tuesday”, “User retention decreased this month”, “Click-through rate halved”
Clarify the problem
Before any analysis: What is the exact metric definition? What time period? Which platform (iOS/Android/Web)? Which region? Is this a 20% drop from yesterday, last week, or last month? Clarification shows maturity and prevents wasted work.
Check data integrity first
Is this a real business problem or a data pipeline issue? Check: Did the tracking code change? Did we deploy a new analytics library? Are other metrics affected the same way? A surprising number of ‘metric drops’ are actually broken tracking.
Segment the data
Divide the drop by every dimension: region, device type, user cohort (new vs existing), product area, feature, time of day. The segment where the drop is concentrated is your most important clue.
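The segmentation step can be sketched in a few lines of Python. This is a minimal, stdlib-only sketch on toy counts (the numbers and segment names are illustrative, not real data); in an interview or on the job, these counts would come from your warehouse:

```python
from collections import defaultdict  # stdlib only; no warehouse connection needed

# Toy DAU counts keyed by (dimension, segment value) for two comparable periods.
baseline = {("platform", "ios"): 40_000, ("platform", "android"): 55_000,
            ("city", "bengaluru"): 30_000, ("city", "mumbai"): 25_000}
current  = {("platform", "ios"): 28_000, ("platform", "android"): 54_500,
            ("city", "bengaluru"): 21_000, ("city", "mumbai"): 17_500}

def drop_by_segment(baseline, current):
    """Return % change per segment, sorted so the biggest drop comes first."""
    changes = {}
    for key, base in baseline.items():
        changes[key] = (current[key] - base) / base * 100
    return sorted(changes.items(), key=lambda kv: kv[1])

for (dim, value), pct in drop_by_segment(baseline, current):
    print(f"{dim}={value}: {pct:+.1f}%")
```

The segments where the drop concentrates (here iOS and two specific cities) become the input to the hypothesis step.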
Generate and rank hypotheses
Once you know which segment is affected, list 3–5 hypotheses in order of likelihood: product change, external event (competitor launch, news), seasonal effect, infrastructure issue, fraud/abuse, demographic shift.
Validate and recommend
For each top hypothesis: what data would confirm or deny it? Pull that data. Recommend: immediate fix + monitoring dashboard + prevention of recurrence.
Worked example: a sudden DAU drop reported on a Saturday.
Clarify: Only Saturday? All cities, or specific ones? Both iOS and Android?
→ iOS only, Bengaluru and Mumbai only, just Saturday
Hypothesis: iOS app update deployed Friday. Check release notes.
→ A new version was released Friday 6pm. 3 bugs reported in checkout.
Recommendation: Roll back the iOS update immediately. Fix the bugs in staging and re-release after additional QA. Add post-release monitoring that tracks DAU by platform for the first 2 hours after every deployment.
Framework 2 — Metric Design
Question types: “What metrics would you define for X feature?”, “How would you measure success of Y product?”, “Design a metrics framework for our new subscription service.”
| Metric Layer | Definition | Examples |
|---|---|---|
| North Star Metric | Single metric that best captures long-term value delivered to users | Swiggy: Monthly active transactors. Netflix: Hours watched per subscriber |
| Leading KPIs | Short-term signals that predict future North Star performance | New user activation rate, day-7 retention, feature adoption rate |
| Lagging KPIs | Outcomes that confirm long-term success | Monthly revenue, market share, NPS |
| Guardrail Metrics | Metrics that must NOT be hurt even if primary metrics improve | Support ticket volume, error rate, page load time, churn rate |
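One way to make the guardrail layer concrete is a ship/no-ship check that passes only when no guardrail regresses beyond a tolerance. Everything below (metric names, values, the 2% threshold) is illustrative, not a standard:

```python
def guardrails_ok(before: dict, after: dict, max_regression_pct: float = 2.0) -> bool:
    """Return True if no guardrail metric worsened by more than the tolerance.

    Assumes every guardrail here is 'lower is better' (error rate, load time,
    ticket volume); flip the comparison for metrics where higher is better.
    """
    for name, old in before.items():
        pct_change = (after[name] - old) / old * 100
        if pct_change > max_regression_pct:
            print(f"guardrail breached: {name} worsened by {pct_change:.1f}%")
            return False
    return True

# Illustrative launch review: the primary metric is up, but did guardrails hold?
before = {"error_rate": 0.010, "p95_load_ms": 1200, "support_tickets": 480}
after  = {"error_rate": 0.011, "p95_load_ms": 1205, "support_tickets": 470}
print("ship" if guardrails_ok(before, after) else "hold")
```

Here the 10% rise in error rate blocks the launch even though two other guardrails improved, which is exactly the point of the layer.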
Framework 3 — A/B Test Design
Question types: “How would you test if Feature X improves retention?”, “Design an experiment for the new onboarding flow”, “How would you validate this product decision?”
State the hypothesis clearly
H0 (null): The new onboarding flow has no effect on Day-30 retention. H1: The new onboarding flow increases Day-30 retention. Primary metric: 30-day retention rate. Guardrail: Day-1 retention must not decrease.
Define randomisation unit
User-level (by user_id) — not session level. Users should consistently see one version throughout. If testing a social feature, cluster randomisation may be needed to avoid spillover effects.
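Consistent user-level assignment is commonly implemented by hashing the user id together with an experiment name, so the same user always lands in the same arm across sessions and devices. A minimal sketch (the experiment name and 50/50 split are made up):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing (experiment + user_id) gives a stable pseudo-random bucket:
    the same user always sees the same arm, and different experiments
    get independent splits because the experiment name salts the hash.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash prefix to [0, 1]
    return "treatment" if bucket < treatment_share else "control"

print(assign_variant("user_12345", "onboarding_v2"))  # same answer every call
```

Session-level randomisation would reshuffle users between arms on every visit, which is exactly what this scheme prevents.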
Calculate required sample size
Inputs: baseline retention (e.g., 25%), minimum detectable effect (e.g., 10% relative lift = 2.5 percentage points), power (80%), significance (5%). Calculate using Python or online calculator. This determines how long to run the test.
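The sample-size arithmetic needs nothing beyond the standard library. Here is a sketch using the standard two-proportion formula with the inputs from the paragraph above (25% baseline, 10% relative lift, 80% power, 5% significance):

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed per arm to detect p1 -> p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# 25% baseline retention, detecting a lift to 27.5% (10% relative)
print(sample_size_per_arm(0.25, 0.275))  # about 4,860 users per arm
```

Divide the per-arm figure by your daily eligible traffic to estimate how many days of enrolment the test needs.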
Define test duration
Run for at least 1–2 full business cycles (usually 2 weeks minimum). Do not stop early even if results look good; peeking inflates the false-positive rate. Because the primary metric is Day-30 retention, it cannot be read for a given user until 30 days after that user entered the experiment.
Plan the analysis
Pre-register the analysis approach: a t-test or two-proportion z-test on the primary metric, correction for multiple comparisons if several metrics are tested, and planned segment analyses (new vs returning users, mobile vs desktop, region).
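The pre-registered primary analysis, a two-proportion z-test on retention, can also be sketched with the standard library. The counts below are made-up readout numbers for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test for a difference in proportions; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical readout: control 25.0% vs treatment 27.6% Day-30 retention
z, p = two_proportion_ztest(1250, 5000, 1380, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With ~5,000 users per arm (close to the sample-size calculation's answer), a 2.6 percentage-point lift clears the 5% significance bar.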
Practice Questions — Business Case Studies
| Question | Company | Framework to Use |
|---|---|---|
| Zomato’s restaurant search CTR dropped 25% on weekends — investigate | Zomato | Metric Drop |
| Define success metrics for Flipkart’s new ‘Try Before Buy’ feature | Flipkart | Metric Design |
| Design an experiment to test if Swiggy Genie (courier service) improves 7-day retention | Swiggy | A/B Test Design |
| Amazon India’s return rate increased 8% last month — what happened? | Amazon | Metric Drop |
| How would you measure if PhonePe’s new UPI autopay feature is successful? | PhonePe | Metric Design |
⭐ Key Takeaways
- Three frameworks cover 90% of case study questions: metric drop, metric design, and A/B test design
- Always clarify the problem before diving into analysis — this is the highest-signal behaviour in case study rounds
- Metric Drop: clarify → check data integrity → segment → hypothesise → validate → recommend
- Metric Design: North Star → Leading KPIs → Lagging KPIs → Guardrail metrics — prioritise, don’t list everything
- A/B Test Design: hypothesis → randomisation unit → sample size → duration → analysis plan
- Interviewers evaluate structured thinking and business sense more than whether you reach the ‘right’ answer
Practice case studies with a real interviewer
Our mock sessions include case study rounds with live feedback — exactly like Swiggy, Flipkart and Amazon interviews.
Book Free Case Study Session →