Hitachi Vantara, a wholly owned subsidiary of Hitachi, Ltd., helps data-driven leaders use the value in their data to innovate intelligently and reach outcomes that matter for business and society - what we call a double bottom line. Only Hitachi Vantara combines 100+ years of experience in operational technology (OT) and 60+ years in IT to unlock the power of data from your business, your people and your machines. We help enterprises store, enrich, activate and monetize data for better customer experiences, new revenue streams and lower business costs.
47Lining, a part of Hitachi Vantara, is an AWS Premier Consulting Partner with Big Data and Machine Learning Competency designations. We develop big data solutions and deliver big data managed services built from underlying AWS building blocks such as Amazon Redshift, Kinesis, S3, DynamoDB, Machine Learning and Elastic MapReduce. We help customers build, operate and manage breathtaking "Data Machines" for their data-driven businesses. We architect solutions that address traditional data warehousing, Internet-of-Things analytics back ends, predictive analytics and machine learning to open up new business opportunities. Our experience spans use cases in multiple industries, including industrial, manufacturing, oil and gas, energy, life sciences, gaming, retail analytics, financial services, and media and entertainment.

The Role
We are seeking experienced and versatile AWS DevOps, DataOps, and Cloud Engineer Consultants. The ideal contributor will work with a skilled team developing and operating enterprise-grade products and services that support industrial process optimization data lakes, agile analytics, and DataOps platforms.
47Lining is growing at an exponential rate and can only grow as quickly as we are able to find the right talent. We offer highly competitive salaries and bonuses, a budget for training and conference attendance, and flexible hours, and we have a genuinely talented team.

Responsibilities

- Embed with development teams to ensure system reliability and performance
- Deliver ongoing releases using tiered pipelines and continuous integration tools such as Jenkins and Bamboo
- Create and maintain release and update processes using build tools such as Maven
- Specify and manage the provisioning of deployment environments using tools such as CloudFormation, Terraform, Puppet, Chef, and Ansible
- Support database environments, including replication, log shipping, performance tuning, and backups
- Maintain distinct environments such as development, staging, and production
- Maintain team and client accounts and permissions
- Define, develop, and maintain monitoring and reporting infrastructure
- Specify and implement Cloud Operations standard operating procedures
- Operations design, including metrics and SLA definition and trigger specification
- Continuous integration, embedded with or in partnership with development teams
- Development and production environment specification, creation, and maintenance
- Build scripting
- Ongoing monitoring and support

Qualifications

- Extensive background in Unix system administration
- Strong experience with environment and deployment automation, infrastructure as code, and deployment pipeline specification and development
- Experience managing applications in Amazon Web Services and familiarity with its core compute, networking, storage, security, compliance, serverless, and analytics offerings
- Strong scripting skills, preferably in Bash, Ruby, and Python
- Good working knowledge of deploying large-scale applications and services
- Proficiency with distributed version control systems, including branching and tagging (Git)
- Exposure to networking and load-balancing solutions
- Excellent documentation habits
We are an equal opportunity employer. All applicants will be considered for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status.