Full-Time Engineering Sr Analyst
Blend360 is hiring a remote Full-Time Engineering Sr Analyst. The career level for this opening is Experienced, and applications are accepted remotely from candidates based in Bogotá, Colombia. Read the complete job description before applying.
Job Details
We are looking for an experienced Senior Data Engineer with a strong foundation in Python, SQL, and Azure, and hands-on expertise in Databricks.
In this role, you will build and maintain scalable data pipelines and architecture to support analytics, data science, and business intelligence initiatives.
You’ll work closely with cross-functional teams to drive data reliability, quality, and performance.
Responsibilities:
- Migrate legacy data infrastructure while ensuring performance, security, and scalability.
- Build and optimize data systems, pipelines, and analytical tools, leveraging a bronze-silver-gold architecture.
- Develop and configure Databricks notebooks and pipelines for ingestion of data from both on-premise and cloud environments, following native Databricks standards.
- Collaborate with stakeholders to translate business needs into robust data solutions, and effectively communicate technical progress, risks, and recommendations.
- Implement strong data governance and access control mechanisms to ensure data quality, security, and compliance.
- Conduct advanced data analysis to support decision-making and reporting needs.
- Apply QA testing practices within data workflows, particularly in healthcare environments.
- Drive initiatives in Data Discovery, Data Lineage, and Data Quality across the organization.
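The bronze–silver–gold (medallion) layering named above can be sketched in plain Python. This is an illustrative assumption of what such a flow looks like, not part of the posting: in Databricks the layers would be Delta tables transformed with Spark, but simple Python structures keep the sketch self-contained, and the cleaning and aggregation rules are invented for the example.

```python
# Minimal illustration of a bronze-silver-gold (medallion) flow.
# In Databricks these layers would be Delta tables transformed with
# Spark DataFrames; plain Python stands in here.

def to_bronze(raw_records):
    """Bronze: land raw records as-is, tagged with a source marker."""
    return [{**rec, "_source": "ingest"} for rec in raw_records]

def to_silver(bronze):
    """Silver: cleanse and conform - drop rows missing a key, fix types."""
    return [
        {"id": int(rec["id"]), "amount": float(rec["amount"])}
        for rec in bronze
        if rec.get("id") is not None and rec.get("amount") is not None
    ]

def to_gold(silver):
    """Gold: aggregate into a business-ready metric (total amount per id)."""
    totals = {}
    for rec in silver:
        totals[rec["id"]] = totals.get(rec["id"], 0.0) + rec["amount"]
    return totals

raw = [
    {"id": "1", "amount": "10.5"},
    {"id": "1", "amount": "4.5"},
    {"id": None, "amount": "99"},  # dropped at the silver layer
    {"id": "2", "amount": "7"},
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {1: 15.0, 2: 7.0}
```

Each layer only reads from the one before it, which is the property that makes the architecture auditable and re-runnable.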
Requirements:
- Studies in Computer Science, Engineering, or a related field.
- 3+ years of hands-on experience in data engineering, with at least 2 years working with Azure or Databricks.
- Experience with Azure Cloud, CI/CD pipelines, and Agile methodologies.
- Proficiency in developing and managing Databricks notebooks and implementing data engineering frameworks.
- Strong programming skills in Python for data processing and automation.
- Advanced proficiency in SQL for querying and transforming large datasets.
- Solid understanding of data modelling, warehousing, and performance optimization techniques.
- Proven experience in data cataloging and inventorying large-scale datasets.
- PySpark experience is a plus.
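As a hedged illustration of the SQL querying-and-transforming work the role describes, the sketch below aggregates an in-memory SQLite table; the table, columns, and threshold are invented for the example, and in practice the same pattern would run against Databricks SQL warehouses at much larger scale.

```python
import sqlite3

# Illustrative only: an in-memory SQLite table standing in for a large dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (patient_id INTEGER, visit_date TEXT, cost REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "2024-01-05", 120.0), (1, "2024-02-10", 80.0), (2, "2024-01-20", 200.0)],
)

# Transform: total cost per patient, keeping only totals above a threshold.
rows = conn.execute(
    """
    SELECT patient_id, SUM(cost) AS total_cost
    FROM events
    GROUP BY patient_id
    HAVING total_cost > 100
    ORDER BY patient_id
    """
).fetchall()
print(rows)  # [(1, 200.0), (2, 200.0)]
conn.close()
```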