Hitachi Vantara, a wholly owned subsidiary of Hitachi, Ltd., guides our customers from what's now to what's next by solving their digital challenges. Working alongside each customer, we apply our unmatched industrial and digital capabilities to their data and applications to benefit both business and society. More than 80% of the Fortune 100 trust Hitachi Vantara to help them develop new revenue streams, unlock competitive advantages, lower costs, enhance customer experiences, and deliver social and environmental value.
Summary of Position
The Senior DevOps Engineer will be a member of the Enterprise Data Platform team and will be responsible for maintaining CI/CD pipelines and infrastructure for the platform. The candidate will also be an integral part of building cutting-edge cloud data technologies that are highly scalable, elastic, and fault tolerant. They will be expected to stay up to date on the latest trends in DevOps and data-oriented cloud solutions. They will apply deep knowledge and experience toward the continual development of a big data and analytics platform as a service, and conduct knowledge-sharing sessions with other cross-functional team members as needed.
Responsibilities
• Serve as the Senior DevOps Engineer on the offshore team supporting a highly complex production system.
• Take the lead in developing cloud infrastructure that is scalable, resilient, and fault tolerant.
• Design, develop, and deploy automated CI/CD pipelines in the cloud to enable complex, real-time data collection.
• Develop, operate, and maintain core automation, governance, security, networking, and reporting tools, as well as cloud infrastructure.
• Develop architectural blueprints and a long-term technical roadmap for our platform.
• Balance focus between the immediate and long-term needs of the platform (e.g., projected future feature set, capacity, and scalability requirements).
• Collaborate closely with the global team while also delivering projects independently.
• Evaluate and recommend tools, technologies, and processes to ensure that the services the team provides achieve the highest standards of quality and performance.
• Collaborate with peer organizations (e.g., Data Engineering and technical support) to prevent and resolve technical issues and provide technical guidance.
• Focus on scalability, security, and availability of all applications and processes.
• Motivate and mentor team members on required coding standards and best practices through code review process.
• Communicate clearly, lead technical discussions, and engage with stakeholders.
Qualifications
• 8+ years of industry experience, with a minimum of 4 years in the design, development, and deployment of large-scale, distributed cloud infrastructure for data processing (preferably on AWS)
• Experience with infrastructure as code using Terraform / CloudFormation
• Scripting experience (e.g., Bash, Python, Ruby, PowerShell)
• Experience with systems in a mixed environment (Linux and Windows)
• Experience automating CI/CD pipelines
• Familiarity with configuration management and CI/CD tools such as GitHub, Puppet, Ansible, Chef, AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy
• A minimum of 2 years of experience with AWS services including, but not limited to, EC2, S3, EMR, Athena, Glue, SQS, Aurora, and Redshift
• Bachelor's degree or higher in computer science or a related field
We are an equal opportunity employer. All applicants will be considered for employment without attention to age, race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.