Cloud DevOps Engineer IV - Mindteck
Quincy, MA 02169
About the Job
Responsibilities
-Develop general infrastructure technology in a public/private cloud
-Design, configure, optimize, manage, monitor, document, and support platform services and components, as well as support enterprise data ingestion
-Assist in delivering our Databricks Unity Catalog with a focus on infrastructure as code within that environment
-Own technical aspects of a project at the team level
-Lead design sessions and code reviews to elevate the quality of engineering across the organization
-Spearhead new feature use (innovate within existing tooling)
-Spearhead new software acquisition and use (innovate with new tooling)
-Leverage automation to eliminate redundant, error-prone tasks and improve the quality of solutions
-Provide advanced system administration, operational support, and problem resolution for a large, complex cloud computing environment, and develop scripts to automate the deployment of resource stacks and associated configurations
-Extend standard systems management processes into the cloud, including change, incident, and problem management
-Develop and maintain a library of deployable, tested, and documented automation design scripts, processes, and procedures
-Enable DevOps development activities and complex development tasks that will involve working with a wide variety of tools and container management systems
-Coordinate with application experts and other infrastructure teams to find optimal solutions to capacity, security, and performance issues
-Implement and maintain CI/CD solutions and create code deployment models to support self-service automation
Qualifications
-Proven track record with at least 8 years of experience in DevOps data platform development, preferably in a senior or lead capacity
-Proficiency in infrastructure as code concepts and tools (e.g., Terraform, Ansible) for automating resource provisioning and configuration
-Hands-on experience with CI/CD pipeline tools (e.g., Jenkins, CircleCI) and version control systems (e.g., GitHub)
-Advanced understanding of DevOps concepts (Azure DevOps framework and tools preferred)
-Strong working knowledge of networking concepts (DNS, DHCP, firewalls, subnetting, etc.); Azure experience preferred
-Experience with Databricks, specifically Unity Catalog implementations for enterprise customers
-Solid scripting skills in languages such as Python, Bash, or similar
-Advanced understanding of monitoring/observability concepts and tooling
-Extensive experience and strong understanding of cloud and infrastructure components
-Strong problem-solving and analytical skills, with the ability to troubleshoot complex DevOps platform issues and provide effective solutions
-Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
-Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders
-8+ years of professional infrastructure and/or software development experience
-4+ years of experience with AWS, GCP, Azure, or another cloud service (Azure preferred)
-3+ years of experience with architecture and design
-Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Source: Mindteck