Full-Time Senior Data Engineer
Dwolla is hiring a remote Full-Time Senior Data Engineer. The career level for this job opening is Experienced, and USA-based applicants are accepted remotely. Read the complete job description before applying.
Job Details
How do our Senior Data Engineers spend their time?
You can expect to spend about 50% of your time building and growing the Dwolla data lake and data pipelines, about 20% defining and implementing DataOps methodologies, and about 20% writing and optimizing queries and reports. Lastly, you’ll spend about 10% of your time supporting and monitoring existing databases.
Our team values collaboration, a passion for learning, and a desire to become a master of your craft. We thrive on asynchronous communication. You will have strong support from leadership when you communicate proactively and share detailed information about any roadblocks you encounter.
Qualities of Data Engineers Who Thrive in This Role
🔥 You are a driven, self-starter type of person who isn’t afraid to dig for answers, stays up to date on industry trends, and is always looking for ways to enhance your knowledge (yes, industry-related podcasts count! 🎧).
💡 Your skill set includes a blend of data lake-related technologies in AWS (Redshift, Glue, Athena, and S3 would be great!); a quick sketch of that kind of work follows this list.
🖥️ Experience with Python and SQL is clutch! (Perhaps you’ve got a software engineering background?)
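To make that concrete, here is a minimal Python sketch of querying S3-backed data through Athena with boto3, the sort of AWS data lake plumbing this role touches. The database name, table, and results location are hypothetical placeholders, not actual Dwolla resources.

```python
import time

import boto3

# Hypothetical resource names for illustration only.
DATABASE = "analytics_lake"
RESULTS = "s3://example-athena-results/"

athena = boto3.client("athena")

# Athena is asynchronous: submit the query, then poll for completion.
execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM transfers GROUP BY status",
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": RESULTS},
)
query_id = execution["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:  # the first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```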
What’s expected of a Senior Data Engineer at Dwolla?
Design, build, monitor, and maintain infrastructure-as-code AWS-based data pipelines and warehousing tooling to support both internal and client-facing use cases.
Write and optimize Python code for ETL and other jobs that populate the Dwolla Data Lake and integrate with Salesforce (and related tools); see the sketch after this list.
Help define and implement the principles and technologies for a Data Lifecycle to enable application of CI/CD principles to data problems while ensuring integrity, security, privacy, and performance.
Support the Technology department’s use of a combination of AWS database technologies through advising on implementations, troubleshooting, and maintenance.
Partner with other engineers and data specialists in writing and optimizing queries or developing performant schemas with consideration for security and privacy.
Assist other Engineers in supporting highly-available RDS database instances including user management, configuration, migrations, and performance troubleshooting.
Contribute to plans and exercises to ensure databases and data pipelines are resilient to outages and recoverable in the event of a disaster.
Communicate technical decisions through RFCs, design docs, technical training, and the wiki.
Mentor other engineers via pairing, design review, and code review.
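To give a flavor of the ETL work described above, here is a minimal, hypothetical sketch of a Python job that extracts raw CSV data from S3, applies a light transform, and loads it back into a data-lake bucket as Parquet. All bucket names, keys, and columns are illustrative assumptions; a production pipeline would add logging, retries, and schema validation.

```python
import io

import boto3
import pandas as pd  # to_parquet requires pyarrow (or fastparquet) installed

# Hypothetical bucket and key names for illustration only.
RAW_BUCKET = "example-raw-events"
LAKE_BUCKET = "example-data-lake"

s3 = boto3.client("s3")


def extract(key: str) -> pd.DataFrame:
    """Pull a raw CSV object from S3 into a DataFrame."""
    obj = s3.get_object(Bucket=RAW_BUCKET, Key=key)
    return pd.read_csv(obj["Body"])


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names and drop rows missing a primary key."""
    df.columns = [c.strip().lower() for c in df.columns]
    return df.dropna(subset=["transfer_id"])


def load(df: pd.DataFrame, key: str) -> None:
    """Write the cleaned frame to the lake as Parquet."""
    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)
    s3.put_object(Bucket=LAKE_BUCKET, Key=key, Body=buffer.getvalue())


if __name__ == "__main__":
    frame = transform(extract("events/2024-01-01.csv"))
    load(frame, "events/dt=2024-01-01/events.parquet")
```

Parquet is the usual choice for the load step because it is columnar and compresses well, which keeps downstream Athena and Redshift Spectrum scans cheap.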
What are the preferred qualifications for this position?
Experience implementing relevant AWS technologies for building data pipelines and executing extract, transform, load (ETL) jobs.
Experience with infrastructure-as-code methodologies (e.g., Terraform, CloudFormation).
Experience with distributed computing technologies such as Spark (see the sketch after this list).
Experience designing and reviewing multi-function Python programs in accordance with style and security guidelines.
Experience optimizing SQL queries for high performance.
Experience with AWS database solutions such as RDS, DynamoDB and Redshift.
A knack for seeking empirical evidence through proofs of concept, tests, and external research.
Empathy for the users of the work you produce, to guide decision-making.
Strong analytical thinking and troubleshooting skills.
Strong written and verbal communication skills.
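For the Spark item above, here is a minimal PySpark sketch of a daily rollup over partitioned Parquet in S3. Paths and column names are hypothetical, and reading s3:// URIs assumes the cluster ships the S3A connector (as EMR and Glue do).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-transfer-rollup").getOrCreate()

# Hypothetical lake paths and columns for illustration only.
transfers = spark.read.parquet("s3://example-data-lake/transfers/")

daily = (
    transfers
    .groupBy(F.to_date("created_at").alias("day"), "status")
    .agg(F.count("*").alias("n"), F.sum("amount").alias("total_amount"))
)

# Partitioning the output by day keeps downstream scans narrow.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-data-lake/rollups/daily_transfers/"
)

spark.stop()
```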
Typically requires 5+ years of experience in a closely related role. Education and certifications are valued and may count toward this requirement where applicable.