
DIRECTV

Principal Platform Engineer – Data Ops Engineer

Posted 14 Hours Ago
Remote
2 Locations
128K-232K Annually
Senior level

The DIRECTV Data Analytics and Operations team is looking for a curious, talented, and highly motivated Data Ops Engineer to lead the automation, orchestration, and optimization of our cloud-based data workflows across Snowflake, Databricks, and AWS. The ideal candidate will have deep expertise in Apache Airflow, CI/CD, automation, and performance monitoring, along with a passion for building scalable, efficient, high-performance data operations solutions.

In this role, you will work closely with data engineering, DevOps, security, and business teams to design and implement next-generation orchestration, automation, and data deployment strategies that drive efficiency, reliability, and cost-effectiveness.

This is the perfect opportunity to join a fast-paced, innovative team that solves real-world problems and drives business value.

Key Responsibilities

1. Platform Architecture & Data Orchestration Strategy (30%)

  • Define the long-term orchestration strategy and architectural standards for workflow management across Snowflake, Databricks, and AWS.
  • Lead the design, implementation, and optimization of complex workflows using Apache Airflow and related tools.
  • Mentor teams in best practices for DAG design, error handling, and resilience patterns.
  • Champion cross-platform orchestration that supports data mesh and modern data architecture principles.
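To make the "DAG design, error handling, and resilience patterns" bullet concrete, here is a minimal, stdlib-only Python sketch of the retry-with-backoff pattern such workflows rely on. All names are illustrative, not DIRECTV code; in real Airflow DAGs this is usually expressed declaratively through task arguments such as `retries`, `retry_delay`, and `retry_exponential_backoff`.

```python
import time

def with_retries(task, max_attempts=3, base_delay=1.0):
    """Run a task callable, retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure to the scheduler
            time.sleep(base_delay * 2 ** (attempt - 1))

# A flaky task that succeeds on its third attempt.
calls = {"n": 0}

def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return "rows loaded"

result = with_retries(flaky_extract, base_delay=0)  # retries twice, then succeeds
```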

2. Engineering Excellence & Automation Frameworks (25%)

  • Architect and guide the development of reusable automation frameworks in Python, Spark, and Shell that streamline data workflows and platform operations.
  • Lead automation initiatives across data platform teams, setting coding and modularization standards.
  • Evaluate and introduce emerging technologies and scripting tools to accelerate automation and reduce toil.
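As a sketch of what "reusable automation frameworks" driven by configuration can look like in practice (the step names and S3 path below are hypothetical, not DIRECTV code): a small registry of step functions is driven entirely by a metadata dict, so new pipelines are declared in config rather than coded by hand.

```python
STEP_REGISTRY = {}

def register(name):
    """Decorator that adds a step function to the framework's registry."""
    def wrap(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return wrap

@register("extract")
def extract(ctx, source):
    # Stand-in for a real read; returns fake rows tagged with their source.
    ctx["rows"] = [{"id": 1, "src": source}, {"id": 2, "src": source}]
    return ctx

@register("filter")
def filter_rows(ctx, min_id):
    ctx["rows"] = [r for r in ctx["rows"] if r["id"] >= min_id]
    return ctx

def run_pipeline(config):
    """Interpret a metadata description of a pipeline: each entry names a
    registered step and its parameters."""
    ctx = {}
    for step in config["steps"]:
        fn = STEP_REGISTRY[step["name"]]
        ctx = fn(ctx, **step.get("params", {}))
    return ctx

pipeline_config = {
    "steps": [
        {"name": "extract", "params": {"source": "s3://bucket/raw"}},  # hypothetical path
        {"name": "filter", "params": {"min_id": 2}},
    ]
}
result = run_pipeline(pipeline_config)
```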

3. Enterprise CI/CD Governance & DevOps Leadership (20%)

  • Define and maintain enterprise-wide CI/CD standards for data pipelines and platform deployments using Jenkins, GitLab, and AWS CodePipeline.
  • Drive adoption of Infrastructure as Code (IaC) and GitOps practices to enable scalable and consistent environment provisioning.
  • Provide technical leadership for DevOps integration across Data, Security, and Cloud Engineering teams.

4. Performance Engineering & Platform Optimization (15%)

  • Lead performance audits and capacity planning efforts across Snowflake, Databricks, and orchestrated workflows.
  • Build frameworks for proactive monitoring, benchmarking, and optimization using Datadog, AWS CloudWatch, and JMeter.
  • Partner with platform teams to implement self-healing systems and auto-scaling capabilities.
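A toy illustration of the proactive-monitoring idea, assuming nothing beyond the standard library: a decorator records each run's duration and flags any run that exceeds a threshold, standing in for the metrics and alerting that Datadog or CloudWatch would provide in production.

```python
import time
from statistics import mean

DURATIONS = {}  # per-job run durations, keyed by function name

def monitored(threshold_s):
    """Record run durations and flag runs that exceed the threshold."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            DURATIONS.setdefault(fn.__name__, []).append(elapsed)
            if elapsed > threshold_s:
                print(f"ALERT: {fn.__name__} took {elapsed:.2f}s (> {threshold_s}s)")
            return result
        return inner
    return wrap

@monitored(threshold_s=5.0)
def nightly_job():
    return sum(range(1000))

nightly_job()
nightly_job()
avg = mean(DURATIONS["nightly_job"])  # feed into benchmarking/capacity planning
```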

5. Operational Resilience & Leadership Collaboration (10%)

  • Oversee complex incident resolution, lead post-mortems, and implement systemic preventive measures.
  • Develop standardized runbooks, incident response frameworks, and training programs to elevate Tier 2/3 capabilities.
  • Act as a liaison between engineering leadership, security, and business teams to drive platform roadmaps and risk mitigation.

Experience Requirements

  • 5 – 7 years required, 16+ years preferred of overall software engineering experience, including 10+ years preferred as a Big Data Architect, focused on end-to-end data infrastructure design using Spark, PySpark, Kafka, Databricks, and Snowflake.
  • 5 – 7 years required, 8+ years preferred of hands-on programming experience with Python, PySpark, JavaScript, and Shell scripting, with demonstrated expertise in building reusable and configuration-driven frameworks for Databricks.
  • 5+ years of experience designing and implementing configuration-driven frameworks in PySpark on Databricks, enabling scalable and metadata-driven data pipeline orchestration.
  • 5 – 7 years required, 8+ years preferred of experience in CI/CD pipeline development and automation using GitLab, Jenkins, and Databricks REST APIs, including infrastructure provisioning and deployment at scale.
  • 5 – 7 years required, 8+ years preferred of deep expertise in Snowflake, Databricks, and AWS, including migration, optimization, and orchestration of data workflows, as well as advanced features such as masking, time travel, and Delta Lake.
  • 7+ years of experience in performance monitoring and observability using tools like SonarQube, JMeter, Splunk, Datadog, and AWS CloudWatch, with a focus on optimizing pipeline efficiency and reducing cost.
  • 7+ years of experience in Tier 2/3 support roles, specializing in root cause analysis, incident resolution, and the creation of troubleshooting runbooks and automation for operational resilience.
  • 4+ years of experience with dbt, including the conversion of traditional dimensional models to modular dbt models, integration with CI/CD, and application of testing and documentation best practices.
  • Deep expertise in Apache Airflow and orchestration technologies, having led large-scale orchestration implementations across multi-cloud environments.
  • Strong analytical and architectural skills to design, optimize, and troubleshoot complex data pipelines, with demonstrated success in delivering performance and cost improvements (e.g., $1M in annual savings through Spark SQL optimization).

Preferred Certifications

  • Databricks Certified Data Engineer Associate / Professional
  • SnowPro Advanced Architect Certification
  • AWS Certified DevOps Engineer – Professional
  • Apache Airflow Certification
  • ITIL 4 Managing Professional (for incident management expertise)
  • Certified ScrumMaster (CSM) (for agile collaboration skills)

Education

  • Master’s degree in Computer Science or Data Engineering is preferred.

May require a background check due to job duties requiring routine access to DIRECTV and DIRECTV customers’ proprietary data. Qualified applicants with arrest and conviction records will be considered for employment in accordance with local ordinances and state law.

This is a remote position that can be located anywhere in the United States, with preference for Los Angeles, Dallas, or Atlanta. #LI-Remote

A career with us comes with big rewards:

DIRECTV's compensation structure is designed to be market-competitive and fully supports efforts to attract and retain employees. It is the company's policy to offer pay that is competitive with other employers in the local market. Our salary ranges are determined by role, level, and location.

The Base Salary range displayed below reflects the minimum and maximum target salary for each of DIRECTV's 4 (four) US Labor Market Zones. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.

DIRECTV WAGE ZONES: $127,965 - $232,415

Low (N1): $127,965 - $191,995

Mid (N2): $134,700 - $202,100

High (N3): $148,170 - $222,310

Top (N4): $154,905 - $232,415

Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the salary ranges reflect base salary only and do not include bonus or benefits; taken together, these represent an impressive total compensation package.

Apply today!

Fair Chance Ordinance Notice for Los Angeles County applicants applying for jobs at DIRECTV
Compliance Notice Regarding Use of Automated Decision-Making Tools in Hiring Process

Top Skills

Apache Airflow
AWS
AWS CloudWatch
AWS CodePipeline
CI/CD
Databricks
Datadog
dbt
GitLab
Jenkins
JMeter
Kafka
Python
Shell
Snowflake
Spark
