Full-Time DBA Infrastructure Engineer
Wrike is hiring a remote full-time DBA Infrastructure Engineer through its careers page. The career level for this opening is Manager, and applications are accepted remotely from Estonia-based candidates. Please read the complete job description before applying.
Job Details
As a Staff Cloud Ops Engineer at Wrike, your primary strength lies in PostgreSQL, where you excel in designing, managing, and optimizing internal database systems. Your advanced skills extend to cloud and data center infrastructure, emphasizing security, containers, networking, monitoring, automation, and debugging. You proactively define tasks based on team objectives, guide others, and propose impactful infrastructure improvements.
You would have the opportunity to engage in high-end technical projects, including the implementation of Kafka, development of an internal database management system built on PostgreSQL, and collaboration on command and control systems within an international team.
In this role, you would join a core development team of 250+ engineers building Wrike and become part of the wider operations department, which is exposed to a broad range of technologies and systems. Does this sound like you? If your answer is yes, we'd love to speak with you!
More about Your team
We have 14 folks in the SysOps Department, split across two teams distributed between Prague, Cyprus, and Tallinn. As a core member of the team, you will be:
- Managing the Wrike product infrastructure with a focus on data infrastructure
- Designing reliable solutions to ensure a product uptime SLA of 99.99%
- Working with GCP and other providers in the IaC paradigm
- Implementing and supporting new infrastructure services
- Actively participating in incident response and management, including on-call duties
- Developing and maintaining professional connections within and outside of the team
- Driving end-to-end completion of significant projects, supported by senior team members
Technical Environment:
We run a Java-based SaaS application in Kubernetes for a massive audience of over 20,000 organizations. Our data resides in more than 100 highly available PostgreSQL clusters across our production and pre-production environments. To ensure seamless data access, we've built robust data streaming pipelines that let our back-end and analytics teams retrieve accurate data quickly, contributing to the efficiency and success of our operations.
Key technologies and tools include:
- PostgreSQL as the database platform
- Kafka and RabbitMQ for messaging
- Kubernetes and Helm (service-oriented architecture)
- Nginx, HAProxy, and Istio for load balancing
- GCP, AWS, and Cloudflare as our cloud providers
- Puppet, Ansible, and Terraform for defining everything as code
- Python to automate everything (a short illustrative sketch follows this list)
- Prometheus (VictoriaMetrics) and Zabbix for monitoring
- Graylog, Logstash, and Fluentd for logging
- Jenkins and GitLab CI for build pipelines
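Purely as an illustration of the kind of PostgreSQL-plus-Python automation this stack implies (this sketch is not part of the posting; the hostnames, credentials, and monitoring role are hypothetical placeholders), here is a minimal script that reports replication lag for a set of clusters:

import psycopg2

# Hypothetical cluster endpoints -- placeholders, not real Wrike hosts.
CLUSTERS = ["pg-cluster-01.example.internal", "pg-cluster-02.example.internal"]

def replication_lag_bytes(host):
    """Return (replica_name, lag_in_bytes) pairs reported by the primary."""
    conn = psycopg2.connect(host=host, dbname="postgres", user="monitor",
                            password="change-me", connect_timeout=5)
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT application_name,"
                " pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)"
                " FROM pg_stat_replication;"
            )
            return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for host in CLUSTERS:
        for replica, lag in replication_lag_bytes(host):
            print(f"{host}: replica {replica} is {lag} bytes behind")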
You will achieve your best if you have
- Advanced experience in designing, managing, and optimizing database management systems based on PostgreSQL
- Advanced knowledge in at least two of the following areas and intermediate knowledge of the rest: Data networks, Security, Databases, Cloud providers, Process automation, Containerised application management
- Advanced Linux administration skills, with experience maintaining highly available infrastructure for a web application stack
- Solid scripting skills in Python, Bash, or other scripting languages
- Upper-intermediate English skills
You will stand out with
- Advanced experience running the Kubernetes platform
- Advanced experience managing any cloud provider (AWS/GCP/Azure) using IaC
- Strong understanding of Linux fundamentals: security principles, hardware, troubleshooting, etc.
- Monitoring experience with Zabbix, Grafana or Prometheus
- Advanced experience with any system configuration management tool (Ansible/Puppet/Salt, etc.)