Data Science & Analytics Topics
Statistical analysis, data analytics, big data technologies, and data visualization. Covers statistical methods, exploratory analysis, and data storytelling.
Attribution Modeling and Multi-Touch Attribution
Covers the theory and practice of assigning credit for conversions across marketing touchpoints. Candidates should know single-touch models such as first-touch and last-touch, deterministic multi-touch models like linear and time-decay, and algorithmic or data-driven models that use statistical or machine learning techniques. Discuss the pros and cons of each approach, including the bias introduced by simple models, the data and engineering requirements for algorithmic models, and the trade-offs between interpretability and accuracy. Topics include model selection aligned to business questions, dealing with long purchase cycles, cross-device and cross-channel journeys, limitations of deterministic attribution, approaches to model validation, and how attribution differs from causal incrementality testing.
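The deterministic models named above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the journey data, channel names, and seven-day half-life are hypothetical choices.

```python
from datetime import datetime

def linear_attribution(touchpoints):
    """Deterministic multi-touch: equal credit to every touchpoint."""
    share = 1.0 / len(touchpoints)
    credit = {}
    for tp in touchpoints:
        credit[tp["channel"]] = credit.get(tp["channel"], 0.0) + share
    return credit

def time_decay_attribution(touchpoints, conversion_time, half_life_days=7.0):
    """Deterministic multi-touch: each touchpoint is weighted 2^(-age / half_life),
    so touches closer to the conversion earn exponentially more credit."""
    weights = [
        2 ** (-(conversion_time - tp["time"]).total_seconds() / 86400 / half_life_days)
        for tp in touchpoints
    ]
    total = sum(weights)
    credit = {}
    for tp, w in zip(touchpoints, weights):
        credit[tp["channel"]] = credit.get(tp["channel"], 0.0) + w / total
    return credit

# Hypothetical three-touch journey ending in a conversion on Jan 15.
journey = [
    {"channel": "display", "time": datetime(2024, 1, 1)},
    {"channel": "search",  "time": datetime(2024, 1, 8)},
    {"channel": "email",   "time": datetime(2024, 1, 14)},
]
linear = linear_attribution(journey)
decay = time_decay_attribution(journey, conversion_time=datetime(2024, 1, 15))
```

Under linear attribution every channel gets one third of the credit; under time decay the most recent touch (email) earns the largest share, which is the interview-relevant contrast between the two models.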
Business Impact Measurement and Metrics
Selecting, measuring, and interpreting the business metrics and outcomes that demonstrate value and guide decisions. Topics include high-level performance indicators such as revenue decompositions, lifetime value, churn and retention, average revenue per user, unit economics, and cost per transaction, as well as operational indicators like throughput, quality, and system reliability. Candidates should be able to choose leading versus lagging indicators for a given question, map operational KPIs to business outcomes, build hypotheses about drivers, recommend measurement changes, and define evaluation windows. Measurement and attribution techniques covered include establishing baselines; experimental and quasi-experimental designs such as A/B tests, control groups, difference-in-differences, and regression adjustment; sample-size reasoning; and approaches to isolating confounding factors. Also included are quick back-of-the-envelope estimation techniques for order-of-magnitude impact, converting technical metrics into business consequences, building dashboards and health metrics to monitor programs, communicating numeric results with confidence bounds, and turning measurement into clear stakeholder-facing narratives and recommendations.
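The difference-in-differences design mentioned above reduces to simple arithmetic on group means. A minimal sketch, with hypothetical per-store revenue figures; the parallel-trends assumption noted in the comment is what makes the estimate valid.

```python
from statistics import mean

def diff_in_differences(treated_pre, treated_post, control_pre, control_post):
    """Point estimate: the treated group's pre/post change minus the control
    group's change, netting out the shared time trend. Only valid under the
    parallel-trends assumption (both groups would have moved alike untreated)."""
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical weekly revenue per store; treated stores got the new program.
effect = diff_in_differences(
    treated_pre=[100, 102, 98], treated_post=[130, 128, 132],
    control_pre=[100, 99, 101], control_post=[110, 111, 109],
)
```

Here the treated stores improved by 30 on average while controls improved by 10, so the estimated program effect is 20, not the naive 30.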
Metrics Selection and Dashboard Storytelling
Focuses on selecting metrics and designing dashboards and reports that directly support stakeholder decision making. Candidates should be able to identify distinct audiences and the specific decisions each audience must make, choose actionable metrics rather than vanity metrics, and balance leading indicators with lagging indicators and strategic metrics with operational metrics. This topic covers defining key performance indicators and targets, justifying each metric by the decision it enables, setting data-freshness requirements and update cadence, and ensuring the instrumentation and data quality that make metrics reliable. It includes dashboard architecture and visual narrative design, such as layering from high-level summaries to detailed drill-downs, tailoring views for executives, managers, and operational teams, selecting appropriate visualizations and annotations to guide interpretation, and enabling root-cause analysis. Reporting practices are covered, including formatting, distribution channels, and alerting. Governance and metric-definition topics include creating a single source of truth, assigning ownership, documenting definitions, and change control. Candidates must also recognize metric interactions and common pitfalls that can make metrics misleading, such as aggregation bias, sampling issues, correlation versus causation, and perverse incentives, and propose mitigations. Interview questions typically ask candidates to design metric sets and dashboards for hypothetical scenarios, explain why metrics were chosen based on the decisions they support, and describe cadence, distribution, drill-down, and governance approaches.
Marketing Analytics and Measurement
Covers the design, implementation, and interpretation of marketing measurement systems that connect marketing activities to business outcomes. Topics include defining and prioritizing key performance indicators across the marketing funnel, from awareness and consideration through conversion, retention, and advocacy. Core metrics and diagnostic measures include click-through rate, conversion rate, impressions, engagement and session metrics, bounce rate, lead volume, cost per lead, cost per acquisition, customer acquisition cost, customer lifetime value, return on advertising spend, return on investment, marketing-influenced revenue, pipeline contribution, marketing qualified leads, sales accepted leads, and lead-to-opportunity conversion rates. Measurement frameworks and methods include last-click and multi-touch attribution approaches, marketing mix modeling, incrementality testing and holdout-group experiments, randomized controlled experiments and split testing, and considerations for statistical significance, sample size, noise, and distinguishing correlation from causation. Also covers data and instrumentation concerns such as tagging and event tracking, data flows from advertising and marketing systems into analytics platforms and data warehouses, data quality and identity resolution, and privacy-driven tracking limitations. Reporting and dashboard design topics include selecting leading versus lagging indicators, balancing granular event-level dashboards with executive-level summaries, setting realistic targets and benchmarks, communicating findings and recommended actions to stakeholders, and using measurement to inform channel mix, campaign optimization, and budget allocation decisions.
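Several of the core metrics above are simple ratios worth knowing cold. A quick sketch with hypothetical campaign numbers; the figures are illustrative only.

```python
def ctr(clicks, impressions):
    """Click-through rate: clicks per impression."""
    return clicks / impressions

def conversion_rate(conversions, clicks):
    """Conversions as a share of clicks."""
    return conversions / clicks

def cac(spend, new_customers):
    """Customer acquisition cost: spend per acquired customer."""
    return spend / new_customers

def roas(revenue, spend):
    """Return on advertising spend: attributed revenue per dollar of spend."""
    return revenue / spend

# Hypothetical campaign: $5,000 spend, 200k impressions, 4k clicks,
# 100 new customers, $12,000 in attributed revenue.
metrics = {
    "ctr": ctr(4_000, 200_000),                      # 0.02
    "conversion_rate": conversion_rate(100, 4_000),  # 0.025
    "cac": cac(5_000, 100),                          # 50.0
    "roas": roas(12_000, 5_000),                     # 2.4
}
```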
Marketing Performance Reporting
Covers defining and measuring marketing key performance indicators for different stakeholders and designing dashboards that make marketing performance actionable. Candidates should understand how to select metrics for executives, marketing teams, and sales partners, including lead volume, cost per lead, lead quality, conversion rates, marketing-influenced revenue, pipeline contribution, and attribution model outcomes. Includes principles of dashboard design such as appropriate segmentation by channel and audience, cohort and time-window analysis, visualization choices that surface trends and anomalies, and ways to make dashboards actionable for campaign optimization. Also covers data sources and tooling, including web analytics platforms, customer relationship management reporting, marketing automation reporting, business intelligence tools, and custom analysis using SQL and Python. Candidates should be able to discuss measurement of experiments and split testing, common attribution approaches, data quality and tracking considerations, reporting cadence, and how to communicate results to non-technical stakeholders. For junior candidates, focus on dashboards and reports they have built or contributed to, the metrics they tracked, and examples of how reporting influenced marketing decisions.
SQL for Marketing and Growth Analytics
Using SQL to analyze marketing and growth metrics. Covers querying lead, campaign, and conversion data; joining campaign and user tables; aggregating by time periods and segments; calculating conversion rates and cost per acquisition; cohort and retention analysis; funnel analysis; and basic attribution queries. Includes date manipulation and techniques for deduplicating and cleaning marketing data. Candidates should demonstrate the ability to write queries that answer marketing and growth questions and produce the metrics used by marketing and growth teams.
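A representative query of this kind, computing conversion rate and cost per acquisition by campaign, can be run against SQLite's in-memory engine for a self-contained sketch. The `campaigns` and `leads` tables, their columns, and all the figures are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE campaigns (id INTEGER PRIMARY KEY, name TEXT, spend REAL);
CREATE TABLE leads (id INTEGER, campaign_id INTEGER, converted INTEGER);
INSERT INTO campaigns VALUES (1, 'search', 1000.0), (2, 'social', 500.0);
INSERT INTO leads VALUES
  (1, 1, 1), (2, 1, 0), (3, 1, 1), (4, 1, 1),
  (5, 2, 0), (6, 2, 1), (7, 2, 0), (8, 2, 0);
""")

rows = conn.execute("""
SELECT c.name,
       AVG(l.converted)                       AS conversion_rate,
       c.spend / NULLIF(SUM(l.converted), 0)  AS cost_per_acquisition
FROM campaigns c
JOIN leads l ON l.campaign_id = c.id
GROUP BY c.id, c.name, c.spend
ORDER BY c.name
""").fetchall()
```

`AVG` over a 0/1 converted flag gives the conversion rate directly, and `NULLIF` guards the cost-per-acquisition division against campaigns with zero conversions, a common interview follow-up.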
Data Driven Recommendations and Impact
Covers the end-to-end practice of using quantitative and qualitative evidence to identify opportunities, form actionable recommendations, and measure business impact. Topics include problem framing, identifying and instrumenting relevant metrics and key performance indicators, measurement design and diagnostics, experiment design such as A/B tests and pilots, and basic causal inference considerations, including distinguishing correlation from causation and handling limited or noisy data. Candidates should be able to translate analysis into clear recommendations by quantifying expected impacts and costs, stating key assumptions, presenting trade-offs between alternatives, defining success criteria and timelines, and proposing decision rules and go/no-go criteria. This also covers risk identification and mitigation plans, prioritization frameworks that weigh impact, effort, and strategic alignment, building dashboards and visualizations to surface signals across HR, sales, operations, and product, communicating concise executive-level recommendations with data-backed rationale, and designing follow-up monitoring to measure adoption and downstream outcomes and iterate on the solution.
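Quantifying expected impact often starts as a back-of-the-envelope product of a few stated assumptions. A minimal sketch; every input below (user count, baseline rate, lift, value, adoption) is a hypothetical assumption of the kind a candidate should state explicitly.

```python
def expected_annual_impact(users, baseline_rate, relative_lift,
                           value_per_conversion, adoption=1.0):
    """Back-of-the-envelope estimate: incremental conversions implied by a
    relative lift on a baseline rate, scaled by an assumed adoption share,
    times the value of each conversion."""
    incremental_conversions = users * baseline_rate * relative_lift * adoption
    return incremental_conversions * value_per_conversion

# Hypothetical assumptions: 1M users, 2% baseline conversion, an assumed
# 10% relative lift, $40 per conversion, 80% rollout adoption.
impact = expected_annual_impact(1_000_000, 0.02, 0.10, 40.0, adoption=0.8)
```

With these assumptions the estimate is 1,600 incremental conversions worth $64,000, an order-of-magnitude figure to pair with costs, risks, and a decision rule rather than a precise forecast.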
SQL for Data Analysis
Using SQL as a tool for data analysis and reporting. Focuses on writing queries to extract metrics, perform aggregations, join disparate data sources, use subqueries and window functions for trends and rankings, and prepare data for dashboards and reports. Includes best practices for reproducible analytical queries, handling time series and date arithmetic, basic query-optimization considerations for analytic workloads, and when to use SQL versus built-in reporting tools in analytics platforms.
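The window functions mentioned above can be illustrated with a ranking and a two-day rolling average, again run against SQLite in memory for a self-contained sketch. The `daily_revenue` table and its values are hypothetical; window functions require SQLite 3.25 or newer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE daily_revenue (day TEXT, region TEXT, revenue REAL);
INSERT INTO daily_revenue VALUES
  ('2024-01-01', 'us', 100), ('2024-01-02', 'us', 120), ('2024-01-03', 'us', 90),
  ('2024-01-01', 'eu', 80),  ('2024-01-02', 'eu', 110), ('2024-01-03', 'eu', 130);
""")

rows = conn.execute("""
SELECT day, region, revenue,
       -- Rank regions against each other within each day.
       RANK() OVER (PARTITION BY day ORDER BY revenue DESC)         AS day_rank,
       -- Two-day rolling average within each region's time series.
       AVG(revenue) OVER (PARTITION BY region ORDER BY day
                          ROWS BETWEEN 1 PRECEDING AND CURRENT ROW) AS rolling_avg
FROM daily_revenue
ORDER BY region, day
""").fetchall()
```

The two `OVER` clauses show the key distinction: `RANK` partitions by day to compare rows, while the framed `AVG` partitions by region and orders by day to smooth each series, the kind of trend-and-ranking query this topic expects.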