Contractor AWS Databricks Platform Administrator
Nityo Infotech Corporation is hiring a remote Contractor AWS Databricks Platform Administrator. The career level for this opening is Experienced, and USA-based applicants are accepted to work remotely. Please read the complete job description before applying.
Job Details
Position Overview: The AWS Databricks Platform Administrator is a technical role responsible for designing, implementing, and maintaining the processes used to manage the organization's Databricks platform. The administrator will facilitate the work of data analysts, data engineers, and data scientists while maintaining best practices for security and compliance. Unity Catalog will be used to implement access and identity policies, and infrastructure will be maintained using Terraform. The administrator will also assist with compute issues, monitoring, and alerting, and will ensure the platform functions smoothly. This role requires strong organizational and communication skills.
Job responsibilities:
Manage and maintain role-based access to data and features in the Databricks platform using Unity Catalog (see the grant sketch after this list)
Implement external access controls for outside teams using service principals, SQL warehouses and Delta Sharing
Work with platform users to solve problems and facilitate work
Create infrastructure for AWS and Databricks using Terraform: S3, IAM roles, Instance Profiles, KMS Keys
Improve processes and systems used to manage infrastructure, users and external access
Keep track of unused assets for pruning
Understand and implement best practices for security and compliance
Implement service principals and access tokens for external users
Use the Databricks APIs to automate administrative tasks (see the tag-audit sketch after this list)
Create queries and dashboards to monitor critical systems and processes
Create documentation for users and admins
Automate common admin tasks using Databricks notebooks
Manage workflow tags, cluster tags, and workflow naming-convention enforcement
Perform cluster health checks and implement best practices
Perform regular backup and recovery
Conduct privilege reviews on users and resources
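The grant sketch below illustrates the kind of Unity Catalog access management and privilege review described in the list above. It is a minimal notebook sketch, not a prescribed procedure: the catalog main, the schema finance, and the group data-analysts are hypothetical names, and spark and display are provided by the Databricks notebook runtime.

# Minimal Unity Catalog sketch for a Databricks notebook.
# Assumes a Unity Catalog metastore is attached and the caller holds the
# privileges needed to grant access; all object names below are placeholders.

# Grant a workspace group read access to a schema.
spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA main.finance TO `data-analysts`")

# Privilege review: list who holds which privileges on tables in the catalog.
privileges = spark.sql("""
    SELECT grantee, table_schema, table_name, privilege_type
    FROM system.information_schema.table_privileges
    WHERE table_catalog = 'main'
    ORDER BY grantee, table_schema, table_name
""")
display(privileges)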
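The tag-audit sketch below shows one way the Databricks APIs can be used for routine administration such as cluster tag enforcement. It is a minimal sketch built on the databricks-sdk Python package, not the team's actual tooling: REQUIRED_TAGS is a hypothetical tag policy, and credentials are assumed to come from the notebook context or from DATABRICKS_HOST / DATABRICKS_TOKEN environment variables.

# Minimal cluster tag audit using the Databricks SDK for Python (databricks-sdk).
# REQUIRED_TAGS is a hypothetical policy; adjust it to the tags your organization enforces.
from databricks.sdk import WorkspaceClient

REQUIRED_TAGS = {"cost-center", "owner"}

w = WorkspaceClient()  # picks up credentials from the environment or notebook context

for cluster in w.clusters.list():
    tags = set((cluster.custom_tags or {}).keys())
    missing = REQUIRED_TAGS - tags
    if missing:
        print(f"{cluster.cluster_name} ({cluster.cluster_id}) missing tags: {sorted(missing)}")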
Required qualifications:
4 or more years of experience administering the Databricks platform
Strong understanding of data engineering needs
Experience creating AWS infrastructure with Terraform
Strong understanding of AWS infrastructure required by Databricks
Experience with compliance audits/audit documentation
Experience using Databricks APIs with tools like Postman (see the sketch after this list)
Understanding of the Databricks workspace and development environment
Experience using GitHub to manage source code
Strong SQL skills for creating admin related queries and dashboards
Strong debugging skills
Ability to perform risk analysis for third-party integrations with AWS resources
Knowledge of Python a plus
Familiarity with Datadog
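The sketch below is the Python equivalent of the kind of Databricks REST request you might build in Postman, included only as a minimal illustration of the API experience listed above. DATABRICKS_HOST and DATABRICKS_TOKEN are assumed environment variables holding the workspace URL and a personal access token; the endpoint shown lists SQL warehouses.

# Minimal REST call with Python's requests library, equivalent to a Postman request.
# Assumes DATABRICKS_HOST (e.g. https://<workspace>.cloud.databricks.com) and
# DATABRICKS_TOKEN (a personal access token) are set in the environment.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.0/sql/warehouses",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Print each SQL warehouse's name and current state.
for warehouse in resp.json().get("warehouses", []):
    print(warehouse["name"], warehouse["state"])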