AWS Database Automation Engineer - AWS/RDS/Jenkins a MUST (130-150K) - Career Developers
Jersey City, NJ 07302
About the Job
Senior AWS Database/DataOps Engineer
Location: Jersey City, NJ (on-site 3 days a week)
Salary: $130-150K + 7% bonus + pension
***Please note that sponsorship is not available for this role, as the company is regulated by the FHFA.
***PLEASE NOTE, THIS IS NOT A DEVOPS POSITION. It is a Data Engineering role within RDS/AWS.
The must-haves to be considered for this role
- 5+ years of DBA or Data engineering background.
- 3+ years of AWS / RDS background.
- 1+ years of Jenkins automation background (CI/CD)
- PostgreSQL or Oracle is preferred.
Experience:
- Minimum of five years of experience in a Data Engineer role
- 3-5 years of hands-on experience creating continuous integration and delivery pipelines leveraging cloud-native architectures/technologies on Amazon Web Services (AWS) Cloud
- 3-5 years of hands-on experience with DevOps tools to design, build, test, release, and monitor jobs; Jenkins/Ansible preferred
- 3-5 years of hands-on experience working with SQL, NoSQL database systems, data warehouses, and streaming solutions; Postgres, Oracle, SQL Server, Snowflake, and Amazon RDS preferred
- 3-5 years of demonstrated experience with building, combining, analyzing, and optimizing large and complex data sets
- 2+ years working with one or more of the following technologies: Kafka, API Gateway, Kinesis, Docker, Apache Airflow, and/or Lambda
- 1-2 years of experience in design and development of:
- Optimized AWS RDS instances including replication using AWS Database Migration Services (DMS)
- Information management/data architectures
- Multi-tier data layers and frameworks
- Data Integration methods for transactional systems
- Cloud storage solutions
- 1-2 years of working experience with scripting languages and/or statistical tools: Python, R, Scala, SAS
- Working knowledge of security/monitoring tools and their integration with pipelines
- Knowledge of AI/ML and microservices deployment and release management is a plus
Education:
- Bachelor's degree in computer science, engineering, or related discipline. Master's degree preferred. Professional experience will be considered in lieu of education.
Skills:
Interpersonal skills:
- Team-oriented, collaborative, and coaching skills
- Strong analytical, conceptual, and critical thinking abilities
- Clear and concise communication and the ability to present ideas in user-friendly language
- Effective leadership qualities to negotiate with and influence a variety of stakeholders
- Effective relationship management skills at all levels
Technical Skills:
- Passionate about automation and developing best practices
- Strong organizational and process/workflow design skills
- Keen ability to identify and analyze patterns using complex data sets
- Good knowledge of agile delivery and DevOps frameworks
- Good ability to think at and switch between multiple levels (e.g. conceptual vs. logical vs. physical data models, strategic vs. tactical planning)
- Good understanding of data management functions as per the Data Management Body of Knowledge (DMBOK) and its application to design, development, and implementation of operational data solutions
- Basic understanding of Machine Learning and Artificial Intelligence concepts
Credentials
- Certifications in AWS DevOps, AWS RDS/DMS, data engineering, data science, automation, or database technologies are a plus; generally recognized certifications in data management, scripting, or automation are preferred
Team Overview:
- As part of the Data Management Office (DMO), the Operational Data Services team streamlines, automates, hardens, and modernizes the processes and technologies by which the Bank's operational data is stored, shared, protected, and maintained at quality levels that are fit for purpose.
Position Overview:
This position is responsible for:
- Identifying opportunities for eliminating bottlenecks and improving data integrity, reliability, and consistency for faster data access across business units
- Enhancing efficiency and effectiveness of data infrastructure and delivery processes using scripting/automation
- Building and supporting highly available data pipelines and related integrations
Essential Job Functions:
- Provide thought leadership in identifying reusable, flexible, and scalable data pipeline-based automated solutions.
- Perform system analysis to identify gaps and impediments for continuous data integration and delivery.
- Develop, construct, test, operationalize, and maintain data pipelines for operational and analytical efficiencies.
- Automate workflows for faster delivery of data from conception to operationalization.
- Set up best practice standards for data engineering in alignment with Bank's defined Enterprise and data architecture.
- Manage and enhance Bank's Non-IT Python Scripting environment for secure and automated script/package deployment.
- Effectively plan/prioritize/manage workload and project deliverables to deliver timely business value.
- Collaborate with technology, business, and vendor stakeholders to plan, design, test, and deliver data pipelines and infrastructure.
- Ensure all design and deliverables comply with PMLC, SDLC, DMO, and Bank standards
- Ensure monitoring and optimization of Bank's RDS instances for cost-effectiveness and compliance with Bank standards
- Provide training on supported capabilities
- Collaborate with other Data Management Office staff as needed to advance the maturity of the bank's overall data management
- Identify existing roadblocks to delivering business value in a timely manner, and avoid building new ones.
Source : Career Developers