
Data Scientist / AI/ML Engineer

Location: Bangalore, Karnataka, India
Job ID: R0004301
Date Posted: Oct 19, 2021
Segment: Others (Including Headquarters and R&D)
Business Unit: Hitachi Regional Headquarters
Company Name: Hitachi India Pvt. Ltd.
Profession (Job Category): General Management
Job Type (Experience Level): Experienced
Job Schedule: Full time
Remote: No



Duties and Responsibilities:

  • Implement applications that solve customer issues by applying AI/ML technologies.

  • Apply machine learning (ML) and deep learning (DL) models to various industry problems.

  • Set up collaboration environments and frameworks for AI-related research and development projects.

  • Design and develop different architectural models for scalable data storage, processing, and large-scale analytics.

  • Work with cross-functional teams to understand technical needs.

  • Set up big data environments that enable rapid PoCs and prototype development on both on-premises and cloud-based platforms.

  • Monitor and optimize the performance of the big data ecosystem.

  • Ensure researchers can access data from different programming languages.

  • Keep up to date with the state of the art in the industry.


Minimum qualifications:

  • Bachelor’s degree in Computer Science, with at least 5 years of industry experience in data engineering/ETL/administration.

  • Hands-on experience in statistical data analysis and machine learning.

  • Significant knowledge of Big Data technologies and tools.

  • Good coding skills in at least one scripting language (Shell, Python, R, etc.).

  • Experience with various Hadoop distributions, such as Hortonworks and Cloudera.

  • Knowledge of cluster monitoring tools like Ambari, Ganglia, or Nagios.

  • Experience delivering Big Data solutions in the cloud on AWS, Azure, or Google Cloud.

  • Experience in Java and Scala programming.

  • Experience with RDBMSs (MySQL, PostgreSQL, etc.).

  • Experience with NoSQL database administration and development (e.g., MongoDB).

  • Experience with the Hadoop ecosystem (MapReduce, Streaming, Pig, Hive, Spark).

  • Experience using DevOps tools such as Jenkins, Chef, and Puppet.

  • Proven ability to create and manage big data pipelines using Kafka, Flume, and Spark.

  • Knowledge of BI tools such as Tableau, Pentaho, etc.

  • Experience building large-scale distributed applications and services.

  • Experience with agile development methodologies.

  • Knowledge of industry standards and trends.

  • Good communication, logical thinking, and presentation skills.

Additional qualifications (preferred but not mandatory):

  • Master’s degree in Computer Science or equivalent, with at least 5 years of industry experience in data engineering/ETL/administration.

  • Experience applying Deep learning.

  • Substantial industry experience developing prototypes and demonstrating PoCs.
