Full-Time Senior Data Engineer
Fetch is hiring a remote Full-Time Senior Data Engineer. The career level for this job opening is Senior Manager, and USA-based applicants are accepted to work remotely. Read the complete job description before applying.
Job Details
What we’re building and why we’re building it
Every month, millions of people use America’s Rewards App, earning rewards for buying brands they love – and a whole lot more. Whether shopping in the grocery aisle, grabbing a bite at the drive-through, or playing a favorite mobile game, Fetch empowers consumers to live rewarded throughout their day. To date, we’ve delivered more than $1 billion in rewards and earned more than 5 million five-star reviews from happy users.
About the Role
As a Senior Data Engineer, you’ll be at the core of our DataOps strategy, designing and building infrastructure that supports terabytes of both real-time and batch data processing every day. You’ll help create robust, scalable systems that power personalization and insights across the platform.
What You’ll Do
- Design, build, and maintain scalable, resilient data pipelines for both batch and streaming workloads.
- Automate infrastructure provisioning and deployment using infrastructure-as-code and CI/CD practices.
- Implement robust data reliability, observability, and quality assurance across the platform.
- Develop self-service tooling and developer-friendly interfaces for internal data stakeholders.
- Collaborate cross-functionally with engineering, analytics, and product teams to ensure trusted, consistent access to data.
- Mentor peers and contribute to evolving DataOps best practices and tooling.
What We’re Looking For
- 4+ years of experience in Data Engineering, Platform Engineering, or DevOps with a strong data focus.
- Strong problem-solving mindset and passion for building scalable internal platforms.
- Proficiency in a modern programming language (Python, Go, Java).
- Strong SQL skills and a solid understanding of data modeling principles.
- Experience with cloud infrastructure (AWS, GCP, or Azure).
- Experience building durable data pipelines.
- Familiarity with containerization (Docker) and orchestration systems (Kubernetes).
- Hands-on experience with infrastructure-as-code (Terraform or equivalent).
- Exposure to real-time messaging platforms (Kafka, SQS, or similar).
- Knowledge of data lifecycle management, schema versioning, and data quality checks.
- Understanding of monitoring, logging, and alerting systems for data pipelines.
- Experience with workflow orchestration tools (Airflow, Dagster, etc.).
Nice to Have
- Experience with large-scale data processing frameworks (Flink, Spark).
- Background in data contracts, governance, or schema validation tooling.
- Enthusiasm for automation, developer productivity, and clean systems architecture.