• Minimum of 5 years of experience in the software industry, with at least 3 years of experience in Hadoop.
  • Experience in Hadoop development using Spark and Scala. Should have the ability to understand business requirements and map them to the current Hadoop / Spark jobs / scripts.
  • Should be able to enhance existing Spark / Scala code based on requirements.
  • Troubleshoot job failures, perform root-cause analysis (RCA), identify permanent fixes, and evaluate performance needs for Spark / Scala jobs in non-production and production clusters.
  • Should have a good understanding of Hadoop architecture and knowledge of AWS.
  • Primary Skills:

  • Hadoop Developer with working knowledge of HDFS, MapReduce, Spark, Scala, Tez, Hive, Oozie
  • Secondary Skills:

  • Python, Oracle, Ansible, Unix scripting, Splunk, Java / J2EE
  • Good to Have:

  • Sqoop, Zeppelin