We're on a mission to build the best platform in the world for engineers to understand and scale their systems, applications, and teams. We operate at high scale—trillions of data points per day—providing always-on alerting, metrics visualization, logs, and application tracing for tens of thousands of companies. Our engineering culture values pragmatism, honesty, and simplicity to solve hard problems the right way.
As an engineer working on our distributed systems, you will build the high-throughput, low-latency systems that power our product. Your data pipelines will ingest, store, analyze, and query tens of millions of events per second from companies all over the globe.
What you will do:
- Build distributed, high-throughput, real-time data pipelines
- Do it in Go and Python, with bits of C or other languages
- Use Kafka, Redis, Cassandra, Elasticsearch, and other open-source components
- Own meaningful parts of our service, have an impact, grow with the company

Who you are:
- You have a BS/MS/PhD in a scientific field or equivalent experience
- You have significant backend programming experience in one or more languages
- You can get down to the low level when needed
- You care about code simplicity and performance
- You want to work in a fast, high-growth startup environment that respects its engineers and customers

Bonus points:
- You wrote your own data pipelines once or twice before (and know what you'd like to change)
- You've built high-scale systems with Cassandra, Redis, Kafka, or NumPy
- You have significant experience with Go, C, or Python
- You have a strong background in statistics