Full-Time Data Engineer
HighLevel is hiring a remote Full-Time Data Engineer. The career level for this opening is Experienced, and Delhi-based applicants are accepted remotely. Read the complete job description before applying.
Job Details
About HighLevel: HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. It focuses on streamlining marketing efforts and providing comprehensive solutions that help businesses of all sizes achieve their marketing goals.
About the Role: We are seeking a talented and motivated data engineer to join our team. You will be responsible for designing, developing, and maintaining data infrastructure and backend systems that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. You will collaborate with cross-functional teams to ensure data reliability, scalability, and performance, working closely with data scientists, analysts, and software engineers to enable efficient data flow and storage for data-driven decision-making.
Responsibilities:
- Software Engineering Excellence: Write clean, efficient, and maintainable code (JavaScript or Python).
- Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time.
- Data Pipeline Development: Design, develop, and maintain data pipelines (ETL/ELT) for structured and unstructured data (see the illustrative sketch after this list).
- Data Storage & Warehousing: Build and optimize databases, data lakes, and data warehouses (e.g., Snowflake).
- Data Integration: Work with APIs and various data sources for ingestion and transformation.
- Performance Optimization: Optimize queries, indexing, and partitioning for efficient data retrieval.
- Collaboration: Work with data analysts, data scientists, software developers, and product teams.
- Monitoring & Debugging: Set up logging, monitoring, and alerting for reliable data pipelines.
- Ownership & Problem-Solving: Proactively identify issues or bottlenecks and propose solutions.
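For candidates who want a concrete flavor of this work, below is a minimal, illustrative ETL sketch in Python. It is not part of the posting: the event source and warehouse writer are hypothetical placeholders, and a production pipeline would typically run under an orchestrator such as Airflow with real Snowflake or BigQuery clients.

```python
# Minimal illustrative ETL sketch (hypothetical; not HighLevel's codebase).
# Stages: extract newline-delimited JSON events, normalize them, and load
# them to a warehouse in batches, with logging for observability.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(raw_lines):
    """Parse newline-delimited JSON events; log and skip malformed rows."""
    for line in raw_lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            log.warning("skipping malformed row: %r", line[:80])

def transform(events):
    """Normalize fields and stamp each row with a UTC load time."""
    now = datetime.now(timezone.utc).isoformat()
    for event in events:
        yield {
            "event_id": event["id"],
            "event_type": event.get("type", "unknown"),
            "loaded_at": now,
        }

def load(rows, batch_size=500):
    """Buffer rows and hand each full batch to the warehouse writer."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            _write(batch)
            batch = []
    if batch:
        _write(batch)

def _write(batch):
    # Placeholder: in practice this would be a Snowflake/BigQuery insert.
    log.info("writing batch of %d rows", len(batch))

if __name__ == "__main__":
    sample = ['{"id": 1, "type": "click"}', "not-json", '{"id": 2}']
    load(transform(extract(sample)))
```

The generator-based stages keep memory use flat on large event streams, which matters for the high-volume, real-time processing described above.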
Requirements:
- 3+ years of software development experience.
- Bachelor's or Master's degree in Computer Science or related field.
- Strong problem-solving skills.
- Solid understanding of data structures, algorithms, and design patterns.
- Experience with modern languages/frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala, or Java).
- Experience with ETL tools & frameworks (Airflow, dbt, Apache Spark, Kafka, Flink).
- Hands-on experience with cloud platforms (GCP, AWS).
- Strong experience with databases/warehousing (PostgreSQL, MySQL, Snowflake, NoSQL).
- Familiarity with version control (Git) and CI/CD and deployment tooling (Jenkins, Docker, Kubernetes).
- Excellent communication skills.
- Experience with data visualization tools (e.g., Superset, Tableau) is a plus.
- Experience with Terraform, infrastructure as code (IaC), ML/AI data pipelines, and DevOps practices is a plus.