Full-Time Senior Data Engineer
Docker is hiring a remote Full-Time Senior Data Engineer. The career level for this opening is Expert, and applicants based in Europe/UK are accepted remotely. Please read the complete job description before applying.
Job Details
Docker is looking for a Senior Data Engineer to join our Data Engineering team, which is led by our Director of Data Engineering. The team transforms billions of data points generated by Docker products and services into actionable insights that directly influence product strategy and development. You'll leverage both software engineering and analytics skills as part of the team responsible for managing data pipelines across the company: Sales, Marketing, Finance, HR, Customer Support, Engineering, and Product Development.
In this role, you'll help design and implement event ingestion, data models, and ETL processes that support mission-critical reporting and analysis while building in mechanisms that support our privacy and compliance posture. You will also lay the foundation for our ML infrastructure to support data scientists and enhance our analytics capabilities. Our data stack consists of Snowflake as the central data warehouse, dbt/Airflow as the orchestration layer, and Looker for visualization and reporting. Data flows in from Segment, Fivetran, S3, Kafka, and a variety of other cloud sources and systems. You'll work with other data engineers, analysts, and subject matter experts to deliver impactful outcomes for the organization. As the company grows, ensuring reliable and secure data flows to all business units and surfacing insights and analytics is a huge and exciting challenge!
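To make the stack concrete, here is a minimal sketch of what a daily pipeline on this kind of stack could look like: an Airflow DAG that stages raw events and then runs dbt transformations in Snowflake. All names (the DAG id, the loader script, the dbt selector) are hypothetical illustrations, not Docker's actual pipeline.

```python
# Hedged sketch: a daily Airflow DAG that stages raw events into Snowflake
# and then rebuilds the dbt models behind the central data model.
# All identifiers below are invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_central_model_refresh",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Stage raw events (e.g. Segment/Kafka exports landed in S3) into Snowflake.
    load_raw_events = BashOperator(
        task_id="load_raw_events",
        bash_command="python load_raw_events.py",  # hypothetical loader script
    )

    # Rebuild the dbt models that make up the central data model.
    run_dbt_models = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --select tag:central_model",  # hypothetical tag
    )

    load_raw_events >> run_dbt_models
```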
Responsibilities:
- Manage and develop ETL jobs, the warehouse, and event collection tools that test, process, validate, transport, collate, aggregate, and distribute data (a minimal sketch follows this list)
- Build and manage the Central Data Model that powers most of our reporting
- Integrate the emerging methodologies, technologies, and version-control practices that best fit the team
- Build data pipelines and tooling to support our ML and AI projects
- Help enforce SOC 2 compliance across the data platform
- Support and enable our stakeholders and other data practitioners across the company
- Write and maintain documentation of technical architecture
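The first bullet above references a sketch. Here is a minimal, hedged Python example of the "process, validate, aggregate" part of that responsibility: deduplicating raw product events and rolling them up to daily counts per user. The record schema and field names are invented for illustration.

```python
# Hedged sketch: validate and aggregate raw product events.
# The event schema (event_id, user_id, timestamp) is hypothetical.
from collections import Counter
from datetime import datetime

def aggregate_daily_events(raw_events: list[dict]) -> dict[tuple[str, str], int]:
    """Return {(user_id, date): event_count}, skipping malformed or duplicate records."""
    seen_ids: set[str] = set()
    counts: Counter = Counter()
    for event in raw_events:
        event_id = event.get("event_id")
        user_id = event.get("user_id")
        ts = event.get("timestamp")
        # Validate: drop records missing required fields or already processed.
        if not event_id or not user_id or not ts or event_id in seen_ids:
            continue
        seen_ids.add(event_id)
        day = datetime.fromisoformat(ts).date().isoformat()
        counts[(user_id, day)] += 1
    return dict(counts)

if __name__ == "__main__":
    sample = [
        {"event_id": "e1", "user_id": "u1", "timestamp": "2024-01-01T09:30:00"},
        {"event_id": "e1", "user_id": "u1", "timestamp": "2024-01-01T09:30:00"},  # duplicate
        {"event_id": "e2", "user_id": "u1", "timestamp": "2024-01-01T17:00:00"},
    ]
    print(aggregate_daily_events(sample))  # {('u1', '2024-01-01'): 2}
```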
Qualifications:
- 4+ years of relevant industry experience
- Experience in data modeling and building scalable data pipelines involving complex transformations
- Proficiency working with a data warehouse platform (Snowflake or BigQuery preferred)
- Experience with data governance, data access, and security controls; experience with Snowflake and dbt is strongly preferred
- Experience creating production-ready ETL scripts and pipelines in Python and SQL, using orchestration frameworks such as Airflow, Dagster, or Prefect
- Experience designing and deploying high-performance systems with reliable monitoring and logging practices
- Familiarity with at least one cloud ecosystem: AWS, Azure, or Google Cloud
- Experience with a comprehensive BI and visualization framework such as Tableau or Looker
- Experience working in an agile environment on multiple projects, prioritizing work according to organizational priorities
- Strong verbal and written English communication skills