
Data Streaming Engineer-Kafka

Location: Dallas, Texas, United States
Job ID: 1017413HV
Date Posted: Jun 10, 2022
Segment: IT
Business Unit: Hitachi Vantara
Company Name: Hitachi Vantara Corporation
Profession (Job Category): Consulting & Offerings


The Team:
At Hitachi Vantara's Digital Insights practice, we help our clients by building technology solutions that address business challenges and improve business outcomes through data-driven insights. As we continue to expand our big data team, we are looking for Data Streaming Engineers who are passionate about technology and want to build a career working on the latest technology platforms.
Position Overview:
A Data Streaming Engineer (Kafka) is proficient in all aspects of data processing development, including data warehouse architecture/modeling and ETL processing. The position focuses on the research, development, and delivery of analytical solutions using tools including Confluent Kafka, Kinesis, Glue, Lambda, Snowflake, and SQL Server. A Data Streaming Engineer must be able to work autonomously, with little guidance or instruction, to deliver business value.

Position Responsibilities:
  • Design and recommend the best-suited approach for data movement to/from different sources using Apache/Confluent Kafka.
  • Good understanding of event-based architecture, messaging frameworks, and stream-processing solutions using the Kafka messaging framework.
  • Hands-on experience working with Kafka Connect and the Schema Registry in a high-volume environment.
  • Strong knowledge of and exposure to Kafka brokers, ZooKeeper, KSQL, Kafka Streams (KStream), and Confluent Control Center.
  • Good knowledge of the big data ecosystem to design and develop capabilities that deliver solutions through CI/CD pipelines.
  • Skilled in writing and troubleshooting Python/PySpark scripts to extract, cleanse, conform, and deliver data for consumption.
  • Strong working knowledge of the AWS data analytics ecosystem, including AWS Glue, S3, Athena, SQS, etc.
  • Good understanding of other AWS services such as CloudWatch monitoring, scheduling, and automation services.
  • Good experience working with Kafka connectors such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors, and JMS source connectors, as well as tasks, workers, converters, and transforms.
  • Working knowledge of the Kafka REST Proxy and experience building custom connectors using Kafka core concepts and APIs.
  • Create topics, set up redundant clusters, deploy monitoring tools and alerts, and apply best practices.
  • Develop and ensure adherence to published system architecture decisions and development standards.

With Japanese roots going back over 100 years, our culture is founded on the values of our parent company, expressed as the Hitachi Spirit:

We are proud to say we are an equal opportunity employer and welcome all applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status.

Wa - Harmony, Trust, Respect

Makoto - Sincerity, Fairness, Honesty, Integrity

Kaitakusha-Seishin - Pioneering Spirit, Challenge