Beats (CI Dashboard) is a system that ingests CI automation results (over 100,000 tests per night), aggregates them, and produces rich analytical reports for Engineering and QA teams.
The PlanIT team is taking ownership of the product: you’ll maintain the current implementation and evolve the architecture to meet new requirements.
Key responsibilities:
Develop and maintain the Python backend for the CI Dashboard under high-load conditions (large data volumes, intensive writes/aggregations, heavy reporting queries).
Work with an existing codebase: quickly get up to speed on unfamiliar code, identify bottlenecks, make safe changes, and reduce technical debt.
Design and implement improvements focused on performance and efficiency (CPU/memory/IO), speeding up nightly processing and report generation.
Own data model and storage efficiency: schemas, indexes, partitioning, query optimization, caching.
Strengthen engineering quality: tests, code reviews, standards, technical documentation; participate in planning and estimation.
Drive an evolutionary redesign (“moving the system onto new rails”):
- refactor critical modules without disrupting production,
- improve the data processing pipeline architecture,
- increase reliability and observability.
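To give a flavor of the reliability work above: a minimal sketch of idempotent ingestion of CI results, where a duplicate delivery of the same result is a safe no-op. All names here (TestResult, ResultStore) are illustrative, not from the actual Beats codebase.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestResult:
    run_id: str      # unique per pipeline run -- part of the idempotency key
    test_name: str
    status: str

class ResultStore:
    """In-memory stand-in for the dashboard's storage layer."""

    def __init__(self):
        self._by_key = {}

    def ingest(self, result: TestResult) -> bool:
        """Insert once per idempotency key; re-deliveries are no-ops.

        Returns True if the result was new, False if it was a duplicate.
        """
        key = (result.run_id, result.test_name)
        if key in self._by_key:
            return False  # already ingested: retried delivery is ignored
        self._by_key[key] = result
        return True

store = ResultStore()
r = TestResult("run-42", "test_login", "passed")
assert store.ingest(r) is True    # first delivery is stored
assert store.ingest(r) is False   # duplicate delivery is dropped
```

In production this key would typically live as a unique constraint in PostgreSQL rather than an in-memory dict, so retried ingestion jobs cannot double-count results.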
Requirements:
Strong Python backend experience (3+ years).
Hands-on experience with high-load systems; ability to reason about typical hotspots (DB, network, serialization, locks, GC, etc.) and fix them.
Strong SQL/PostgreSQL skills, including query tuning (EXPLAIN, indexes, partitioning).
Understanding of distributed systems and reliability practices: retries, idempotency, backpressure.
Experience maintaining and evolving large existing/legacy codebases; careful migrations and backward compatibility.
Familiarity with observability: metrics/logs/traces, alerting, incident investigation.
Solid profiling and performance optimization skills:
- latency/throughput diagnosis,
- application + database bottleneck analysis,
- optimizing hot paths.
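As an illustration of the "retries, idempotency, backpressure" requirement: a small sketch of retry with exponential backoff and full jitter, a common pattern for transient failures against a loaded database or network service. The helper name and parameters are hypothetical, not from the actual system.

```python
import random
import time

def retry(func, *, attempts=5, base_delay=0.1, max_delay=2.0, sleep=time.sleep):
    """Call func(); on exception, back off exponentially (with jitter) and retry."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(random.uniform(0, delay))  # full jitter avoids thundering herds

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient failure")
    return "ok"

assert retry(flaky, sleep=lambda _: None) == "ok"
assert len(calls) == 3  # two failures, then success
```

Pairing retries like this with idempotent writes (so a retried operation cannot apply twice) is what makes the pattern safe under load.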
Nice to have:
Kafka / queues / stream processing (especially for high-volume CI result ingestion).
ClickHouse / columnar stores or experience building analytics data marts.
Caching strategies (Redis), pre-aggregation, materialized views.
Docker/Kubernetes, CI/CD, performance/load testing.
Experience building reporting/analytics platforms: time-series, drill-down, filters/aggregations, deduplication, historical runs storage.
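A toy sketch of the pre-aggregation idea mentioned above: maintain nightly pass/fail counts incrementally at ingest time instead of scanning raw results at report time, which is the same principle behind materialized views and Redis-backed counters. The class and its methods are hypothetical, for illustration only.

```python
from collections import Counter

class NightlyAggregate:
    """Incrementally maintained rollup of per-night test outcomes."""

    def __init__(self):
        self._counts = Counter()  # (night, status) -> count

    def record(self, night: str, status: str) -> None:
        self._counts[(night, status)] += 1  # O(1) write on the ingest path

    def report(self, night: str) -> dict:
        """Report reads hit the pre-aggregated counts, not raw result rows."""
        return {
            "passed": self._counts[(night, "passed")],
            "failed": self._counts[(night, "failed")],
        }

agg = NightlyAggregate()
for status in ["passed", "passed", "failed"]:
    agg.record("2024-05-01", status)
assert agg.report("2024-05-01") == {"passed": 2, "failed": 1}
```

The trade-off is the usual one for analytics stores: cheap, fast reads for dashboards in exchange for extra work (and consistency care) on the write path.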
We offer:
A well-coordinated, professional team;
Cutting-edge technologies, interesting and challenging tasks, a dynamic project, and strong opportunities for professional and career growth;
Additional Health and Life Insurance Package;
Employee Assistance Program;
25 vacation days;
This role requires on-site presence at our office 4 days a week to support effective collaboration and teamwork.
