Backend Engineering & Performance Topics
Backend system optimization, performance tuning, memory management, and engineering proficiency. Covers system-level performance, remote support tools, and infrastructure optimization.
Scalability Analysis and Bottleneck Identification
Techniques for analyzing existing systems to find and prioritize bottlenecks and to validate scaling hypotheses. Topics include profiling and benchmarking strategies; instrumentation and monitoring of latency, throughput, error rates, and resource utilization; identification of common bottlenecks such as database write throughput, CPU saturation, memory pressure, disk I/O limits, and network bandwidth constraints; designing experiments and load tests to reproduce issues and validate mitigations; proposing incremental fixes such as caching, partitioning, asynchronous processing, or connection pooling; and measuring impact with clear metrics and iteration. Interviewers will probe the candidate on moving from observations to root cause and on designing low-risk experiments to validate improvements.
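A minimal sketch of the "measure impact with clear metrics" step: drive a request function under load and report median and tail latency, so a mitigation can be judged by numbers rather than impressions. The function names (`measure_latencies`, `summarize`, `fake_request`) are hypothetical; `fake_request` stands in for a real service call.

```python
import random
import statistics
import time

def measure_latencies(request_fn, n_requests=200):
    """Call request_fn repeatedly and collect per-call wall-clock latencies."""
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)
    return latencies

def summarize(latencies):
    """Report the metrics that matter when validating a fix:
    typical (p50) and tail (p95/p99) latency, not just the mean."""
    ordered = sorted(latencies)
    last = len(ordered) - 1
    return {
        "p50": statistics.median(ordered),
        "p95": ordered[int(0.95 * last)],
        "p99": ordered[int(0.99 * last)],
    }

# Hypothetical stand-in for a real request; replace with an actual call.
def fake_request():
    time.sleep(random.uniform(0.0005, 0.002))

stats = summarize(measure_latencies(fake_request, n_requests=50))
```

Running the same harness before and after a change (caching, pooling, partitioning) gives a like-for-like comparison; the tail percentiles often move differently from the median, which is why both are reported.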
Performance Fundamentals and Troubleshooting
Core skills for identifying, diagnosing, and resolving general performance problems across applications and systems. Topics include establishing baselines and metrics, using monitoring and profiling tools to determine whether issues are CPU-bound, memory-bound, I/O-bound, or network-bound, and applying systematic troubleshooting workflows. Candidates should be able to prioritize fixes, recommend temporary mitigations and long-term solutions, and explain when to escalate to specialists. This canonical topic covers general performance awareness, common diagnostic tools, and basic remediation approaches for slow systems and resource exhaustion.
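One way to sketch the CPU-bound versus I/O-bound distinction with only the standard library: compare CPU time consumed (`time.process_time`) to wall-clock time elapsed (`time.perf_counter`) while the workload runs. The classifier, its 0.5 threshold, and the two sample workloads are illustrative assumptions, not a production diagnostic.

```python
import time

def classify_workload(fn):
    """Rough heuristic: ratio of CPU time to wall-clock time while fn runs.
    Near 1.0 means the process computed the whole time (CPU-bound);
    near 0.0 means it mostly waited on I/O, locks, or sleep (I/O-bound)."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    fn()
    cpu = time.process_time() - cpu_start
    wall = time.perf_counter() - wall_start
    ratio = cpu / wall if wall > 0 else 0.0
    return ("cpu_bound" if ratio > 0.5 else "io_bound"), ratio

def busy_work():
    sum(i * i for i in range(500_000))  # pure computation

def waiting_work():
    time.sleep(0.05)  # stands in for a blocking network or disk call

cpu_kind, _ = classify_workload(busy_work)
io_kind, _ = classify_workload(waiting_work)
```

The same ratio idea underlies what system tools like `top` surface as user/system time versus iowait; the point is that "slow" alone does not say which resource to investigate first.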
Performance Optimization and Latency Engineering
Covers systematic approaches to measuring and improving system performance and latency at the architecture and code levels. Topics include profiling and tracing to find where time is actually spent, forming and testing hypotheses, optimizing critical paths, and validating improvements with measurable metrics. Candidates should be able to distinguish CPU-bound work from I/O-bound work, analyze latency versus throughput trade-offs, evaluate where caching and content delivery networks help or hurt, recognize database and network constraints, and propose strategies such as query optimization, asynchronous processing patterns, resource pooling, and load balancing. Also includes performance testing methodologies, reasoning about trade-offs and risks, and describing end-to-end optimization projects and their business impact.
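A short sketch of "profiling to find where time is actually spent", using the standard-library `cProfile` and `pstats` modules. The `slow_serializer` and `handle_request` functions are hypothetical examples of a hidden hot spot on a request's critical path.

```python
import cProfile
import io
import pstats

def slow_serializer(items):
    # Repeated string concatenation: a classic, easy-to-miss hot spot.
    out = ""
    for item in items:
        out += str(item) + ","
    return out

def handle_request():
    return slow_serializer(range(2000))

# Profile a batch of requests, then print the top entries by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    handle_request()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Sorting by cumulative time surfaces the functions on the critical path, which turns the optimization hypothesis ("serialization is the bottleneck") into something a follow-up measurement can confirm or refute.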
Performance Debugging and Latency Investigation
Finding the root cause of latency spikes: checking CPU, memory, disk, and network utilization, profiling applications, reviewing slow query logs, and identifying bottlenecks. Understanding the difference between resource exhaustion and an algorithmic problem. Using monitoring and tracing tools to narrow down where time is spent.