Full-Time Staff Hadoop & Tableau Admin - Big Data - Federal
ServiceNow is hiring a remote Full-Time Staff Hadoop & Tableau Admin - Big Data - Federal. The career level for this opening is Experienced, and applicants based in Kirkland, Washington are accepted remotely. Read the complete job description before applying.
Job Details
Note: This position supports our US Federal Government Cloud Infrastructure and requires passing a ServiceNow background screening (USFedPASS), including credit, criminal checks, and a drug test. Employment is contingent upon passing the screening. Only US citizens, naturalized citizens, or US Permanent Residents (Green card holders) will be considered.
As a Staff DevOps Engineer on our Big Data Federal Team, you will deliver 24x7 support for our Cloud infrastructure. The Big Data team ensures that ServiceNow exceeds availability and performance SLAs for Customer instances deployed across the ServiceNow cloud and Azure cloud.
Our mission is to:
- Deliver state-of-the-art Monitoring, Analytics and Actionable Business Insights.
- Employ new tools, Big Data systems, Enterprise Data Lake, AI, and Machine Learning methodologies.
- Improve efficiencies across Cloud Operations, Customer Support, Product Usage Analytics, and Product Upsell Opportunities.
Responsibilities:
- Deploy, monitor, maintain, and support Big Data infrastructure and applications on ServiceNow Cloud and Azure environments.
- Deploy, scale, and manage containerized Big Data applications using Kubernetes, Docker, and related tools.
- Proactively identify and resolve issues within Kubernetes clusters, containerized applications, and data pipelines.
- Provide expert-level support for incidents and perform root cause analysis.
- Triage network-related issues in a containerized environment.
- Provide production support to resolve critical Big Data pipeline and application issues, mitigating or minimizing any impact on Big Data applications.
- Collaborate with SRE, Customer Support, Developers, QA, and System engineering teams.
- Enforce data governance policies and the Definition of Done (DoD) in all Big Data environments.
- Install, configure, and upgrade Tableau Server in a clustered environment.
- Manage user licensing, site administration, and content permissions for Tableau.
- Monitor Tableau Server performance, health, and usage; perform tuning to ensure optimal performance.
- Automate server monitoring and maintenance tasks using scripting (PowerShell, Python) and the Tableau Services Manager (TSM) CLI.
- Manage Tableau data source connections, extracts, and refresh schedules.
- Implement and manage security best practices for the Tableau environment, including user authentication (SAML, Active Directory) and content-level security.
To be successful in this role, you have:
- Experience in leveraging or critically thinking about how to integrate AI into work processes.
- 6+ years of experience working with systems such as HDFS, Yarn, Hive, HBase, Kafka, RabbitMQ, Impala, Kudu, Redis, MariaDB, and PostgreSQL.
- Deep understanding of Hadoop / Big Data Ecosystem.
- Hands-on experience with Kubernetes in a production environment.
- Deep understanding of Kubernetes architecture, concepts, and operations.
- Strong knowledge of querying and analyzing large-scale data using VictoriaMetrics, Prometheus, Spark, Flink, and Grafana.
- Experience supporting CI/CD pipelines for automated applications deployment to Kubernetes.
- Strong Linux Systems Administration skills.
- Strong scripting skills in Bash and Python for automating routine tasks.
- Proficient with Git and version control systems.
- Familiarity with Cloudera Data Platform (CDP) and its ecosystem.
- Experience as a Tableau administrator.
- Familiarity with Tableau Services Manager (TSM).
- Ability to learn quickly in a fast-paced, dynamic team environment.