
Global IT - BI Data Engineer

This job posting is no longer active.

Location: Hyderabad, India
Job ID: 1019163HV
Date Posted: Oct 17, 2022
Segment: IT
Business Unit: Hitachi Vantara
Company Name: Hitachi Vantara Corporation
Profession (Job Category): IT, Telecom & Internet


Our Company

Hitachi Vantara is part of the Global Hitachi family. We balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

Our people are our biggest asset; they drive our innovation advantage, and we strive to offer a flexible and collaborative workplace where they can thrive. Diversity of thought is welcomed, and our employee base is represented by several active Employee Resource Group communities. We offer industry-leading benefits packages (flexible working, generous pension and private healthcare) and promote a creative and inclusive culture. If driving real change gives you a sense of pride and you are passionate about powering social good, we'd love to hear from you.

Our Values

We strive to create an inclusive environment for all and are open to considering home working, compressed/flexible hours and flexible arrangements. Get in touch with us to explore how we might be able to accommodate your specific needs.

We are proud to say we are an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. With Japanese roots going back over 100 years, our culture is founded on the values of our parent company, expressed as the Hitachi Spirit (set out at the end of this posting).

Roles & Responsibilities:

• 7-10 years of overall IT experience
• Build and test an optimal data pipeline architecture, preferably in a cloud environment (AWS experience is a must).
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Rich experience in data ingestion from a variety of sources using AWS Lambda, Python, Glue, Docker, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud-based big data technologies.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Understand and implement practices to comply with PHI, GDPR and other emerging data privacy initiatives.
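To make the ingestion responsibilities above concrete, here is a minimal sketch of a Lambda-style handler that validates and tags incoming JSON records. The event shape, field names, and tagging logic are invented for illustration; they are not part of this role's actual pipeline, and a real deployment would write the batch to S3 or Redshift via boto3 rather than returning it.

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda handler: parse incoming records, keep the
    well-formed ones, and tag each with its source so downstream ETL
    stages can route it. The 'records'/'body'/'source' keys are
    assumptions made for this sketch."""
    accepted, rejected = [], 0
    for record in event.get("records", []):
        try:
            payload = json.loads(record["body"])
        except (KeyError, json.JSONDecodeError):
            rejected += 1
            continue
        payload["_source"] = record.get("source", "unknown")
        accepted.append(payload)
    # A production handler would persist `accepted` here (e.g. boto3
    # put_object to S3); this sketch just returns a batch summary.
    return {"accepted": accepted, "rejected": rejected}
```

Separating parse/validate from persistence keeps the handler testable locally, without AWS credentials.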

Required Technical and Professional Expertise

• Strong SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases (SQL Server, Oracle, Athena, RDS, Redshift) and AWS services (EC2, EMR, Glue, Lambda, etc.)
• Solid experience building and optimizing big-data pipelines, architectures, and data sets.
• Deep knowledge of and experience with JSON and XML schemas and documents.
• Experience with Python data structures and data engineering libraries is mandatory.
• Working knowledge of REST and implementation patterns pertaining to data and analytics.
• Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
• Working knowledge of message queuing, stream processing, and highly scalable big-data stores (Kafka, Kinesis, Storm), plus ETL orchestration tools such as Airflow.
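The SQL and data-transformation skills listed above amount to a basic extract-transform-load loop. As a minimal sketch, the stdlib sqlite3 module stands in below for a relational store such as RDS or Redshift; the table, column names, and cleaning rules are invented for illustration.

```python
import sqlite3

def run_etl(raw_rows):
    """Toy ETL pass: clean raw dicts, load them into a relational
    table, and return an aggregated view as an analytics tool might.
    'sales'/'region'/'amount' are hypothetical names for this sketch."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    # Transform: normalise region names, drop rows with missing amounts.
    cleaned = [(r["region"].strip().upper(), float(r["amount"]))
               for r in raw_rows if r.get("amount") is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    # Load/serve: aggregate with SQL rather than in application code.
    return dict(conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"))
```

Pushing the aggregation into SQL, as here, is the same pattern that scales up to Athena or Redshift queries over much larger data sets.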

Preferred Technical and Professional Expertise

• Experience with cloud-based big-data automation and orchestration solutions.
• Understanding of Business Intelligence and Data Warehousing concepts and methods.
• Fully conversant with big-data processing approaches and schema-on-read methodologies; a deep understanding of Spark, Databricks, and Delta Lake, and experience applying them to data science and machine learning business problems, is preferred.
• AWS/Azure DevOps - CI/CD.
• 3-4 years as a data ingestion engineer, using Python, Lambda, and other ingestion tools.
• 3-4 years of experience with Spark, Scala, and Kafka.
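The schema-on-read methodology mentioned above stores raw records as-is and applies a schema only at query time, as Spark or Athena do over data lakes. A minimal pure-Python sketch of the idea, with record shapes and field names invented for illustration:

```python
import json

def read_with_schema(raw_lines, schema):
    """Project each raw JSON line onto a schema given as
    {field: caster} pairs, filling missing fields with None.
    This mimics schema-on-read: the raw store never rejects a
    record; the schema is enforced only when data is read."""
    out = []
    for line in raw_lines:
        rec = json.loads(line)
        out.append({field: cast(rec[field]) if field in rec else None
                    for field, cast in schema.items()})
    return out

# Heterogeneous raw records: types drift and fields go missing,
# yet all three lines were accepted at write time.
RAW = [
    '{"user": "a", "clicks": 3}',
    '{"user": "b", "clicks": "7", "country": "IN"}',
    '{"user": "c"}',
]
```

Reading `RAW` with the schema `{"user": str, "clicks": int}` coerces the string `"7"` to an integer and surfaces the missing `clicks` field as `None`, decisions that a schema-on-write store would have forced at ingestion instead.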

Wa - Harmony, Trust, Respect

Makoto - Sincerity, Fairness, Honesty, Integrity

Kaitakusha-Seishin - Pioneering Spirit, Challenge