Full-Time Data Warehouse Engineer
Plutus is hiring a remote Full-Time Data Warehouse Engineer. The career level for this opening is Experienced, and applications are accepted worldwide. Read the complete job description before applying.
Job Details
We are seeking a skilled Data Warehouse Engineer to join our data team. In this role, you will be responsible for designing, developing, and maintaining scalable data solutions using dbt, Fivetran, and Braze. You will play a key role in building and optimising data pipelines, ensuring data quality, and supporting our business intelligence and marketing efforts through seamless data integration and automation.

As a Data Warehouse Engineer, you will collaborate with data analysts, engineers, and marketing teams to ensure that data flows smoothly between different systems, creating a robust infrastructure that drives business insights and customer engagement.
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain ETL/ELT data pipelines using Fivetran to integrate various data sources into the data warehouse.
- Data Modeling with dbt: Develop and maintain data models and transformations using dbt (Data Build Tool) to optimise the structure of the data warehouse for analytics and reporting.
- Braze Integration: Work closely with the marketing team to integrate Braze for personalised customer engagement, ensuring smooth data flow between the warehouse and the platform.
- Data Warehouse Management: Maintain and optimise the performance of the data warehouse (e.g., Snowflake, BigQuery, Redshift) by managing schema design, partitioning, and indexing.
- Data Quality and Monitoring: Implement data quality checks, conduct audits, and monitor pipeline health to ensure reliable and accurate data delivery.
- Collaboration: Work closely with data analysts, BI teams, and marketing to understand data needs, improve data availability, and deliver actionable insights.
- Automation & Optimisation: Implement automation for data ingestion, transformation, and orchestration to improve operational efficiency and reduce manual intervention.
- Documentation & Best Practices: Create and maintain comprehensive documentation of data architecture, pipeline processes, and best practices for future reference and onboarding.
- Troubleshooting & Support: Identify, investigate, and resolve data-related issues in a timely manner.