Required Skills:

  • Expertise in the Big Data ecosystem, with experience in Hadoop, Hive, Spark, Storm, Cassandra, and NoSQL databases.
  • Expertise in MPP architecture and knowledge of MPP engines (e.g., Spark, Impala).
  • Experience with data pipeline/workflow management tools such as Azkaban, Airflow, and Oozie.
  • Cloud development experience.
  • Experience building scalable, highly available distributed systems in production.
  • Understanding of stream processing, with knowledge of Kafka.
  • Knowledge of software engineering best practices, with experience implementing CI/CD, log aggregation, monitoring, and alerting for production systems.
  • Strong expertise in production support activities (issue identification and resolution).

Roles & Responsibilities:

· Develop high-performance, scalable solutions that extract, transform, and load big data.

· Design, build, test, and deploy cutting-edge solutions at scale, impacting millions of customers worldwide, to drive value from data at client scale.

· Perform root cause analysis on data and processes to answer specific business questions and identify opportunities for improvement.

· Build and optimize 'big data' pipelines, architectures, and data sets involving terabytes to petabytes of data.

· Interact with client engineering teams across geographies to leverage expertise and contribute to the tech community.

· Engage with Product Management and Business to drive the agenda, set your priorities, and deliver awesome product features that keep the platform ahead of the market.

· Interact closely with Data Engineers within the client organization to identify the right open-source tools for delivering product features, through research and POCs/pilots.

· Engage with Product Management and Business to support and build data solutions, developing expertise with respect to the data and becoming known as the true data analyst.

· Engage with the engineering team to provide seamless production support.

