Senior Data Engineer (Secret Clearance) from ALTA IT Services
Arlington, VA 22209
About the Job
Senior Data Engineer
Rosslyn, VA – hybrid (1-2 days a week onsite)
Active Secret clearance is required
Job Description:
Design, build, and optimize our data infrastructure on AWS. Work closely with the Enterprise Data Architect and cross-functional teams to develop scalable, high-performance data solutions. Ensure data integrity, availability, and accessibility to support business analytics and decision-making, with an emphasis on data governance.
Responsibilities:
• Design, develop, and maintain robust data pipelines and ETL processes to ingest, transform, and load data from various sources into our AWS data platform (a minimal pipeline sketch follows this list).
• Collaborate with the Enterprise Data Architect to implement and optimize data models, databases, and data warehouses.
• Ensure data quality, integrity, and consistency by implementing comprehensive data validation and cleansing procedures.
• Optimize data storage and retrieval for performance, cost-efficiency, and scalability using AWS services such as Redshift, RDS, S3, Glue, and Athena.
• Develop and implement automation scripts and tools for data processing, monitoring, and maintenance.
• Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver efficient data solutions.
• Troubleshoot and resolve data-related issues, ensuring minimal disruption to data operations.
• Implement data security and compliance measures to protect sensitive information and adhere to industry regulations.
• Provide technical guidance and mentorship to junior data engineers and other team members.
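For illustration only, here is a minimal sketch of the kind of Glue ETL job the pipeline responsibilities above describe: ingest from the Glue Data Catalog, apply a validation filter, and write curated Parquet to S3. The catalog database, table name, column names, and bucket path are hypothetical placeholders, not details from this posting.

```python
import sys
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest: read source records cataloged by a Glue crawler
# ("sales_db" and "raw_orders" are hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Validate/cleanse: drop records missing a key or carrying a
# non-positive amount, a stand-in for real validation rules.
clean = Filter.apply(
    frame=raw,
    f=lambda row: row["order_id"] is not None
    and row["amount"] is not None
    and row["amount"] > 0,
)

# Load: write curated Parquet back to S3, queryable from Athena
# or Redshift Spectrum (the bucket path is a placeholder).
glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```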
Skills:
• Proficiency in designing and implementing ETL processes and data pipelines using AWS services such as Glue, Data Pipeline, and Lambda (an event-driven example follows this list).
• Extensive experience with SQL and database technologies, including Redshift, RDS, and DynamoDB.
• Strong programming skills in languages such as Python, Java, or Scala.
• Experience with data governance best practices and implementation.
• Knowledge of data modeling, data warehousing, and data architecture principles.
• Experience with big data technologies such as Hadoop, Spark, and Kafka.
• Solid understanding of data security, privacy, and compliance best practices.
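As a second sketch, here is one common way the Lambda and Glue skills above combine in event-driven ingestion: an S3 upload triggers a Lambda function that starts a Glue job run for the new object. The environment variable name, job arguments, and event wiring are assumptions for illustration, not requirements from this posting.

```python
import os
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Fires on an S3 ObjectCreated notification and starts a Glue ETL run.

    GLUE_JOB_NAME is a hypothetical environment variable; the event
    shape is the standard S3 notification payload.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Pass the newly arrived object to the Glue job as a job argument.
    response = glue.start_job_run(
        JobName=os.environ["GLUE_JOB_NAME"],
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": response["JobRunId"]}
```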