Full-Time Senior Data Engineer
Creative Chaos is hiring a remote Full-Time Senior Data Engineer. The career level for this opening is Expert, and Pakistan-based applicants are accepted remotely. Read the complete job description before applying.
Creative Chaos
Job Details
Job Brief:
We are seeking a highly skilled Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data pipelines, as well as experience in optimizing data workflows and handling large volumes of data. You will be responsible for developing and maintaining efficient data infrastructure, as well as collaborating with cross-functional teams to address data-related technical challenges and fulfill data infrastructure needs.
Duties and Responsibilities:
- Design and develop scalable and reliable data pipelines, ensuring high availability and performance.
- Construct complex data sets that align with functional and non-functional business requirements.
- Identify, plan, and implement internal process enhancements, such as automation of manual processes, data delivery optimization, and infrastructure redesign for improved scalability.
- Implement best practices for data storage, processing, and retrieval.
- Collaborate with stakeholders including executives, data scientists, and product managers to understand data requirements and implement data solutions.
- Optimize and tune data workflows to achieve optimal performance and efficiency.
- Maintain data security and compliance with data privacy regulations.
- Stay up-to-date with emerging technologies and industry trends in data engineering and analytics.
- Mentor and guide junior data engineers on the team.
Requirements
- Bachelor's or Master's degree in computer science, engineering, or a related field.
- Minimum of 7 years of industry experience as a data engineer or in a similar role.
- Strong programming skills in languages such as Python, Scala, or Java.
- Experience in designing and implementing data pipelines using tools like Apache Kafka, Apache Spark, or AWS Glue.
- Proficiency in SQL and database technologies like PostgreSQL, MySQL, or MongoDB.
- Knowledge of cloud platforms such as Azure.
- Experience with data modeling, ETL processes, and data warehousing concepts.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in cross-functional teams.
- Detail-oriented and proactive mindset.