iOS vs Android — Which Mobile Platform Should You Learn First and Why Data Analysts Cannot Afford to Ignore This Debate in 2026
By dataanalystinterview.com Team · 30 April 2026 · 12 min read
Here is a number that should stop you mid-scroll: India crossed 750 million active smartphone users in 2025, and roughly 95 percent of those devices run Android. Yet the average revenue per user on iOS in India is nearly three times higher than on Android. If you are a data analyst working at a company like PhonePe, CRED, or Razorpay, those two facts sitting side by side create one of the most interesting analytical problems you will encounter in your career. The iOS versus Android debate is not just a developer conversation anymore. It is a business strategy conversation, and data analysts are right at the centre of it. Interviewers at product companies have started asking candidates to reason about platform-level user behaviour, and if you walk in without an opinion, you will look underprepared.
What the iOS vs Android Platform Debate Actually Means for Data in 2026
When developers debate iOS versus Android, they are talking about Swift versus Kotlin, App Store guidelines versus Play Store policies, and Xcode versus Android Studio. That is their world. But when a data analyst hears iOS versus Android, the question is entirely different. It is about segmentation, monetisation behaviour, retention curves, and product decisions that change based on which platform your users are on.
In India right now, the market is genuinely bifurcated in a way it has never been before. Android dominates volume. Entry-level Android devices from brands like Realme and Redmi have pushed smartphone penetration into Tier 2 and Tier 3 cities aggressively. Meanwhile, iPhone sales in India grew 47 percent year-on-year in 2024 according to IDC data, fuelled by domestic manufacturing in Chennai and Bengaluru and a rising aspirational middle class. Apple crossed five million units shipped in India in a single quarter for the first time.
What this means practically: companies that once had a predominantly Android user base are now seeing a meaningful and fast-growing iOS cohort. That cohort behaves differently. They spend more, churn differently, and engage with features in ways that can skew your aggregate metrics badly if you are not segmenting by platform. The analyst who understands this is suddenly very valuable.
Note
Most analysts segment by city tier or income bracket but forget to include platform as a primary dimension. At companies like CRED, where the iOS user base skews heavily premium, your aggregate retention numbers will look worse than they actually are if you do not break out platform separately. Platform is a proxy for purchasing power in the Indian context, and overlooking it can lead to incorrect product recommendations.
How the iOS vs Android Shift Is Affecting Analyst Hiring and Day-to-Day Work in India Right Now
Six months ago, a hiring manager at a fintech company in Bengaluru told me something that stuck: “We hired a product analyst who had done great work on our Android funnel. When we asked her to explain why our iOS payment success rate was three percentage points lower, she had no framework for it. She had never thought about the platforms differently.” That gap is increasingly a disqualifier.
Companies like Juspay, which powers payments for dozens of Indian apps, have entire teams looking at platform-level checkout success rates. Swiggy runs separate experiments for iOS and Android because the user demographics are different enough that a single A/B test would produce misleading results. Zepto, which has expanded rapidly across Indian cities, tracks delivery-time satisfaction differently across platforms because Android users in Tier 2 cities have different connectivity conditions than iOS users in Mumbai or Bengaluru. Meesho, whose core user base is deeply Android-heavy, makes supply-side decisions partly on the back of platform data.
For a data analyst, this translates into a set of very concrete skills. You need to know how to write queries that segment by platform without confusing device type with OS version. You need to understand that iOS event data is shaped by Apple's App Tracking Transparency and Safari's Intelligent Tracking Prevention, which restrict cross-app and cross-site identifiers and change what your attribution and event pipelines can actually capture. Android fragmentation across hundreds of device models means you will encounter bizarre edge cases in your event logs. None of this requires you to write a single line of Swift or Kotlin. It requires you to think about data collection, user behaviour, and business metrics through a platform lens.
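To make the first of those skills concrete, here is a minimal sketch (column names like os_name and os_version are hypothetical) of deriving a clean platform dimension from raw event fields, keeping OS version separate from platform instead of conflating the two:

```python
import pandas as pd

# Hypothetical raw event log: os_name mixes casing and vendor strings,
# and os_version is a separate field that must not be treated as platform
events = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5],
    "os_name": ["iOS", "android", "Android", "iPadOS", None],
    "os_version": ["17.2", "14", "11", "17.1", "13"],
})

def normalise_platform(os_name):
    """Map messy raw OS strings to a clean two-value platform dimension."""
    if os_name is None:
        return "unknown"
    name = os_name.strip().lower()
    if name in ("ios", "ipados"):
        return "iOS"
    if name == "android":
        return "Android"
    return "unknown"

events["platform"] = events["os_name"].apply(normalise_platform)
print(events[["os_name", "os_version", "platform"]])
```

Keeping the null and unrecognised cases as an explicit "unknown" segment, rather than silently dropping them, is what makes the later null-handling questions tractable.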
Interview Questions This Topic Is Generating at Top Indian Companies
Companies like Flipkart, PhonePe, and Swiggy have started weaving platform-level thinking into their analytics interview rounds, particularly for product analyst and growth analyst roles. The reason is straightforward: these companies are making real decisions about where to prioritise engineering effort, marketing spend, and feature rollout based on platform data. They want analysts who can reason about this clearly. The questions are designed to test whether you understand user behaviour at a platform level, whether you can translate that into metrics, and whether you can communicate findings to a product team that may have strong opinions about their preferred platform.
Interview Question 1 — “Our iOS retention is 12 percent higher than Android in Month 1. The product team wants to ship the iOS feature set to Android. What would you check before recommending that?”
The interviewer is testing whether you can separate correlation from causation. The strong answer starts with user demographics: are iOS users inherently higher-intent because of the device’s price point? Then move to feature parity — what features exist on iOS that do not exist on Android, and do the retention drivers come from features or from user quality? Finish by proposing a cohort analysis that controls for acquisition channel and device price band before making a recommendation.
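The cohort analysis in that answer can be sketched in a few lines of pandas (the data and column names here are illustrative): compare retention within the same acquisition channel, so that if the platform gap shrinks once channel is held constant, user quality rather than the OS is driving the headline difference.

```python
import pandas as pd

# Hypothetical user-level table with a Month-1 retention flag per user
users = pd.DataFrame({
    "platform": ["iOS", "iOS", "Android", "Android", "Android", "iOS"],
    "acquisition_channel": ["organic", "referral", "organic",
                            "paid_social", "referral", "organic"],
    "retained_m1": [1, 0, 1, 0, 1, 1],
})

# Retention by platform WITHIN each acquisition channel, as a percentage
controlled = (
    users.groupby(["acquisition_channel", "platform"])["retained_m1"]
    .mean()
    .mul(100)
    .round(1)
    .rename("retention_pct")
    .reset_index()
)
print(controlled)
```

The same groupby pattern extends naturally to a second control dimension such as device price band.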
Interview Question 2 — “Write a SQL query to find the 7-day retention rate broken down by platform for users acquired in the last 30 days.”
The trap here is forgetting to handle users who never returned — they are not missing from the table, they simply have no Day 7 event. You need a LEFT JOIN from your users table to your events table, not an INNER JOIN. The interviewer is testing whether you understand that a missing row and a null value both represent the same business fact: the user did not come back.
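The trap is easy to demonstrate with a toy example in pandas (illustrative data): an inner merge silently drops churned users from the denominator, while a left merge keeps them and gives the correct rate.

```python
import pandas as pd

# Illustrative tables: 4 acquired users, only 2 with a Day-7 event
users = pd.DataFrame({"user_id": [1, 2, 3, 4],
                      "platform": ["iOS", "iOS", "Android", "Android"]})
day7 = pd.DataFrame({"user_id": [1, 3]})  # users who came back

# Correct: LEFT JOIN keeps churned users in the denominator
left = users.merge(day7.assign(returned=1), on="user_id", how="left")
correct_rate = left["returned"].fillna(0).mean()    # 2 / 4 = 0.5

# Wrong: INNER JOIN silently drops users 2 and 4 entirely
inner = users.merge(day7.assign(returned=1), on="user_id", how="inner")
inflated_rate = inner["returned"].mean()            # 2 / 2 = 1.0

print(f"LEFT JOIN retention: {correct_rate:.0%}")
print(f"INNER JOIN retention: {inflated_rate:.0%}")
```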
Interview Question 3 — “Paytm is seeing higher transaction failure rates on Android compared to iOS. Walk me through how you would investigate this as a business problem.”
Use a structured top-down approach. Start with the metric definition — is failure rate measured the same way across platforms at the data collection layer? Then move to segmentation: is this specific to Android versions, device models, or network types? Then go to the payment layer — are certain payment methods failing more on Android? A strong answer proposes a hypothesis tree, not a single answer, because the interviewer wants to see your diagnostic thinking, not just your conclusion.
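The decomposition step of that hypothesis tree can be sketched as a pair of groupbys (hypothetical column names): confirm the headline gap first, then break the failing platform across a candidate dimension such as network type to see where failures concentrate.

```python
import pandas as pd

# Hypothetical transaction log with a candidate explanatory dimension
txns = pd.DataFrame({
    "platform":     ["Android"] * 4 + ["iOS"] * 2,
    "network_type": ["2G", "2G", "4G", "wifi", "4G", "wifi"],
    "failed":       [1, 1, 0, 0, 0, 0],
})

# Step 1: confirm the headline platform gap
by_platform = txns.groupby("platform")["failed"].mean()

# Step 2: decompose Android failures by network type; if failures
# concentrate on 2G, the problem is connectivity, not the OS itself
android = txns[txns["platform"] == "Android"]
by_network = android.groupby("network_type")["failed"].mean()

print(by_platform)
print(by_network)
```

In an interview you would name the other branches of the tree (OS version, device model, payment method) and explain that each gets the same treatment.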
Interview Question 4 — “The head of product says Android users are less valuable and wants to cut the Android marketing budget by 30 percent. How do you respond?”
This is a stakeholder communication question, and the interviewer is testing whether you push back with data or just agree to avoid conflict. The strong answer reframes the question: valuable in what sense? LTV, volume, market share, or strategic positioning in Tier 2 cities? You should ask for 48 hours to pull a platform-level LTV cohort analysis before the decision is made. Show that you protect decisions with data, not opinions.
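The analysis you would ask 48 hours for might start with something like this sketch (revenue figures and column names are invented for illustration): average LTV per platform per signup cohort, with user counts kept alongside the mean so volume is visible in the same table.

```python
import pandas as pd

# Hypothetical per-user revenue table with signup month and platform
ltv = pd.DataFrame({
    "platform":     ["iOS", "iOS", "Android", "Android", "Android"],
    "signup_month": ["2026-01", "2026-02", "2026-01", "2026-01", "2026-02"],
    "revenue_90d":  [1200.0, 900.0, 300.0, 450.0, 250.0],
})

# Average 90-day LTV per platform per cohort; keep user counts so the
# volume story is visible next to the per-user value story
summary = (
    ltv.groupby(["signup_month", "platform"])["revenue_90d"]
    .agg(users="count", avg_ltv="mean")
    .reset_index()
)
print(summary)
```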
Interview Question 5 — “You notice that 8 percent of your event logs have a null platform field. How do you handle this in your analysis?”
Do not just say “drop the nulls.” The interviewer wants to know whether you investigated why they are null. Is it a specific SDK version that is not logging platform correctly? Is it web traffic being misclassified? Is it a specific date range that correlates with a deployment? A strong answer proposes first understanding the pattern of nulls before deciding whether to impute, exclude, or flag them as a separate segment in the analysis.
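Profiling the nulls before deciding what to do with them might look like this (the sdk_version column is a hypothetical example of a dimension worth checking):

```python
import pandas as pd

# Hypothetical event log where platform is null for one SDK version
events = pd.DataFrame({
    "platform":    ["iOS", None, "Android", None, "iOS", None],
    "sdk_version": ["3.2", "3.1", "3.2", "3.1", "3.2", "3.1"],
    "event_date":  ["2026-04-01"] * 3 + ["2026-04-02"] * 3,
})

# Before dropping or imputing, profile WHERE the nulls occur
null_by_sdk = (
    events.assign(is_null=events["platform"].isna())
    .groupby("sdk_version")["is_null"]
    .mean()
)
print(null_by_sdk)
# If one SDK version has a near-100% null rate, this is a logging bug,
# not random missingness: flag those rows as a separate segment or fix
# the pipeline, rather than silently dropping 8 percent of your data
```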
Interview Tip
For platform-related questions, interviewers respond best to a metrics-first framing rather than a STAR story. Start with the metric that is behaving unexpectedly, move to the dimensions you would break it across, and then arrive at your hypothesis. The structure they love is: “The metric is X. I would decompose it by Y and Z. My hypothesis is A because of B.” This shows analytical structure immediately and keeps you from going into storytelling mode before you have established your thinking.
SQL You Need to Know for Platform-Level Analysis
Imagine you are a data analyst at a company similar to CRED. Your product table has user acquisition data, and your events table logs every session. The business question is: which platform has better 30-day retention, and does that differ by the acquisition channel the user came from? This is exactly the kind of query you would write on a Tuesday afternoon before a product review meeting, and it is also exactly what an interviewer might ask you to build from scratch on a whiteboard or shared screen.
-- Calculating 30-day retention by platform and acquisition channel
-- for users acquired between 90 and 32 days ago, so that every cohort
-- has had a full Day-30 window in which to return
WITH user_base AS (
    SELECT
        user_id,
        platform,              -- 'iOS' or 'Android'
        acquisition_channel,   -- 'organic', 'paid_social', 'referral', etc.
        DATE(created_at) AS signup_date
    FROM users
    WHERE created_at >= DATEADD(day, -90, CURRENT_DATE)
      AND created_at < DATEADD(day, -32, CURRENT_DATE)  -- exclude users too
                                                        -- recent to measure
),
day_30_activity AS (
    SELECT DISTINCT
        e.user_id
    FROM events e
    INNER JOIN user_base ub
        ON e.user_id = ub.user_id
    WHERE DATEDIFF(day, ub.signup_date, DATE(e.event_timestamp)) BETWEEN 28 AND 32
      AND e.event_name = 'app_open'
),
retention_calc AS (
    SELECT
        ub.platform,
        ub.acquisition_channel,
        COUNT(DISTINCT ub.user_id) AS total_users,
        COUNT(DISTINCT da.user_id) AS retained_users,
        ROUND(
            100.0 * COUNT(DISTINCT da.user_id)
            / NULLIF(COUNT(DISTINCT ub.user_id), 0),
            2
        ) AS retention_rate_pct
    FROM user_base ub
    LEFT JOIN day_30_activity da
        ON ub.user_id = da.user_id
    GROUP BY
        ub.platform,
        ub.acquisition_channel
)
SELECT
    platform,
    acquisition_channel,
    total_users,
    retained_users,
    retention_rate_pct
FROM retention_calc
ORDER BY
    platform,
    retention_rate_pct DESC;
The output of this query gives you a clean breakdown. Suppose it shows iOS organic users retaining at 38 percent while Android organic users retain at 24 percent, but Android referral users matching iOS referral users at around 35 percent. That single finding changes the entire marketing budget conversation: instead of cutting Android spend, you redirect it toward referral programs. The follow-up question an interviewer will almost certainly ask is: "How would you test whether this retention difference is driven by the platform or by the type of user who buys an iPhone in India?" That is your cue to propose a propensity score matching approach or, at minimum, a device price band cohort analysis.
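A crude version of that device price band analysis, which approximates matching without a full propensity score model, can be sketched like this (data and column names are invented for illustration): bucket users by device price, then compare platforms only within overlapping bands.

```python
import pandas as pd

# Hypothetical user table where device price is a confounder for platform
users = pd.DataFrame({
    "platform":     ["iOS", "iOS", "Android", "Android", "Android", "Android"],
    "device_price": [80000, 60000, 75000, 58000, 12000, 9000],
    "retained_d30": [1, 1, 1, 1, 0, 0],
})

# Bucket by device price (INR); a rough stand-in for propensity matching
users["price_band"] = pd.cut(
    users["device_price"],
    bins=[0, 20000, 50000, float("inf")],
    labels=["budget", "mid", "premium"],
)

# observed=True skips empty band/platform combinations
matched = (
    users.groupby(["price_band", "platform"], observed=True)["retained_d30"]
    .mean()
    .reset_index()
)
print(matched)
# In this toy data both platforms retain identically in the premium band:
# the raw iOS-vs-Android gap came from the budget segment, not the OS
```

If the interviewer pushes further, you would name full propensity score matching on price, city tier, and acquisition channel as the rigorous version of this sketch.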
Common Mistake
Candidates consistently use an INNER JOIN between users and the Day 30 events table, which silently drops every user who did not return. Your denominator shrinks, your retention rate looks artificially high, and your interviewer will catch it immediately by asking "what happened to users with no Day 30 event?" Always use a LEFT JOIN from your user base to your activity table, and let retained_users be null for churned users before aggregating. Also watch the DATEDIFF window — using exactly day 30 misses users who returned on day 31 due to timezone differences in event logging.
Python for This Topic: What Analysts Actually Do With Platform Data
The Python use case here is exploratory data analysis and visualisation. Specifically, imagine you have pulled a dataset of 50,000 users from a company like Razorpay's merchant analytics team. The business question is whether iOS merchants and Android merchants show different payment success rates across transaction value buckets. This kind of EDA happens before any formal A/B test is designed, and the output shapes which hypotheses the product team decides to pursue. Here is what that code looks like in practice.
# Analysing payment success rates by platform and transaction value bucket
# Dataset: merchant transaction logs from a payments platform
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Load the dataset
df = pd.read_csv('merchant_transactions.csv')

# Inspect platform distribution before doing anything else
print("Platform distribution:")
print(df['platform'].value_counts())
print("\nNull platform values:", df['platform'].isna().sum())

# Drop rows where platform is null, but only after investigating the pattern
df = df.dropna(subset=['platform'])

# Create transaction value buckets (INR)
df['txn_bucket'] = pd.cut(
    df['transaction_amount'],
    bins=[0, 500, 2000, 10000, 50000, np.inf],
    labels=['Under 500', '500-2K', '2K-10K', '10K-50K', '50K+']
)

# Calculate success rate by platform and bucket;
# observed=True skips empty bucket combinations and avoids NaN rows
success_rates = (
    df.groupby(['platform', 'txn_bucket'], observed=True)['is_success']
    .agg(total='count', successful='sum')
    .reset_index()
)
success_rates['success_rate'] = (
    success_rates['successful'] / success_rates['total'] * 100
).round(2)
print("\nSuccess rate by platform and transaction bucket:")
print(success_rates.to_string(index=False))

# Pivot so each platform becomes a column, ready for grouped bars
pivot = success_rates.pivot(
    index='txn_bucket',
    columns='platform',
    values='success_rate'
)

# Plot: one group of bars per bucket, one bar per platform
fig, ax = plt.subplots(figsize=(10, 6))
pivot.plot(kind='bar', ax=ax, width=0.6)
ax.set_title('Payment Success Rate by Platform and Transaction Value')
ax.set_xlabel('Transaction Value Bucket')
ax.set_ylabel('Success Rate (%)')
ax.set_ylim(0, 100)
ax.legend(title='Platform')
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('platform_success_rate.png', dpi=150)
plt.show()