Full-Time Data Engineer
Accesa is hiring a remote Full-Time Data Engineer. The career level for this opening is Experienced, and applications are accepted remotely from candidates based in Romania. Please read the complete job description before applying.
Job Description
One of our clients is a prominent player in the financial sector. We enhance operations across their extensive network of 150,000 workstations and support a workforce of 4,500 employees.
As part of our commitment to optimizing data management strategies, we are migrating data warehouse (DWH) models into data products within the Data Integration Hub (DIH).
Responsibilities:
- Drive Data Efficiency: Create and maintain optimal data transformation pipelines.
- Master Complex Data Handling: Work with large, complex financial data sets to generate outputs that meet functional and non-functional business requirements.
- Lead Innovation and Process Optimization: Identify, design, and implement process improvements (e.g., automating manual processes, optimizing data delivery, redesigning infrastructure for higher scalability).
- Architect Scalable Data Infrastructure: Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source technologies.
- Unlock Actionable Insights: Build/use analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Collaborate with Cross-Functional Teams: Work with clients and internal stakeholders (Senior Management, Department Heads, Product, Data, and Design teams) to assist with data-related technical issues and support their data infrastructure needs.
Qualifications
Must have:
- 3+ years of experience in a similar role, preferably within Agile teams
- Strong analytical skills in working with both structured and unstructured data
- Skilled in SQL and relational databases for data manipulation
- Experience in building and optimizing Big Data pipelines and architectures
- Knowledge of Apache Spark framework and object-oriented programming in Java; experience with Python is a plus
- Experience with ETL processes, including scheduling and orchestration using tools like Apache Airflow (or similar); a minimal orchestration sketch follows this list
- Proven experience in performing data analysis and root cause analysis on diverse datasets to identify opportunities for improvement
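For illustration only, the sketch below shows what a minimal scheduling and orchestration setup of the kind referenced above might look like with Apache Airflow (written against the Airflow 2.x API): a daily DAG that chains extract, transform, and load steps. The DAG name, schedule, and task logic are hypothetical placeholders, not details of the client's pipelines.

```python
# Minimal Apache Airflow DAG sketch: a daily extract -> transform -> load chain.
# All names, schedules, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw data from a source system (placeholder logic).
    print("extracting raw data")


def transform(**context):
    # Apply business transformations to the extracted data (placeholder logic).
    print("transforming data")


def load(**context):
    # Write the transformed data to the target store (placeholder logic).
    print("loading data into the warehouse")


with DAG(
    dag_id="example_etl_pipeline",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Task ordering: extract must finish before transform, which must finish before load.
    extract_task >> transform_task >> load_task
```

In practice the individual steps would typically delegate to Spark jobs, SQL transformations, or similar heavy lifting rather than plain Python functions; the DAG itself only expresses scheduling and dependencies.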
Nice to have:
- Expertise in manipulating and processing large, disconnected datasets to extract actionable insights (see the sketch after this list)
- Experience automating CI/CD pipelines using ArgoCD, Tekton, and Helm to streamline deployment and improve efficiency across the SDLC
- Experience managing Kubernetes deployments on OpenShift, with a focus on scalability, security, and optimized container orchestration
- Technical skills in relational databases (e.g., PostgreSQL), Big Data tools (e.g., Databricks), workflow management (e.g., Airflow), and backend development using Spring Boot
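Likewise for illustration only, here is a minimal PySpark sketch of combining two disconnected datasets into a single aggregated output. The file paths and column names are invented for the example and do not reflect the client's data model; the role favours Spark with Java, so this Python version is just a compact stand-in.

```python
# Minimal PySpark sketch: join two separate datasets and aggregate the result.
# File paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("disconnected-datasets-sketch").getOrCreate()

# Two datasets arriving from unrelated source systems (placeholder paths).
transactions = spark.read.parquet("/data/raw/transactions")
customers = spark.read.csv("/data/raw/customers.csv", header=True, inferSchema=True)

# Link the datasets on a shared key and compute per-segment totals.
report = (
    transactions
    .join(customers, on="customer_id", how="inner")
    .groupBy("customer_segment")
    .agg(
        F.count("*").alias("transaction_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Persist the aggregated output for downstream analytics (placeholder path).
report.write.mode("overwrite").parquet("/data/curated/segment_totals")

spark.stop()
```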