Client: MNC
Experience: 10+ Years
Location: Pan India
Skills: Hadoop/MapReduce, Python, PySpark, HDFS, Hive, Pig, Flume, Sqoop, Zookeeper, Spark, MapReduce2, YARN, HBase, Kafka, and Storm.
Job Mode: C2H / Contract
Note: Please share profiles only if candidates agree to work Sunday through Thursday each week.
Description:
• 12+ years of experience in IT, with 8+ years in the Big Data ecosystem
• Extensive development experience with Hadoop-ecosystem technologies including HDFS, Hive, Pig, Flume, Sqoop, Zookeeper, Spark, MapReduce2, YARN, HBase, Kafka, and Storm
• Experience in planning, designing, and strategizing the Big Data roadmap to meet the organization's data analytics objectives and goals
• Experience building distributed, reliable, and scalable data pipelines to ingest and process data in batch and in real time
• Experience with end-to-end ownership of the Hadoop life cycle in the organization
• Experience in developing large-scale data platforms and real-time streaming analytics
• Experience in implementing, managing, and administering the overall Hadoop infrastructure
• Ability to clearly articulate the pros and cons of various Big Data technologies and platforms
• Ability to document Big Data use cases, solutions, and recommendations
• Ability to support program and project managers in the design, planning, and governance of Big Data projects of any kind
• Responsible for identifying data sources and selecting the appropriate components for ingesting them
• Ability to perform detailed analysis of business problems and technical environments and apply it in designing the Big Data solution
• Ability to work creatively and analytically in a problem-solving environment
• Experience in an agile Big Data environment
• Experience designing Big Data processing pipelines
• Experience with Spark Programming
• Experience with integration of data from multiple data sources
• Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
• Experience in fine-tuning applications and systems for high performance and higher-volume throughput
• Ability to work with huge volumes of data to derive business intelligence
• Experience in transforming, loading, and presenting disparate data sets in various formats and from sources such as JSON, text files, Kafka queues, and log data
• Experience in designing and implementing ETL/ELT processes
• Monitoring performance and advising on any necessary infrastructure changes
• Defining data retention policies
• Cross-industry, cross-functional and cross-domain experience (Oil and Gas Industry experience will be an added advantage)
• Excellent written and verbal communication skills
Kindly send your updated profiles to [Confidential Information]
Thanks & Regards,
Balaram K
Mobile: +91- 9000749410 / 9848771366
[Confidential Information]
Aurum Data Solutions India Pvt Ltd.,
(a subsidiary of Aurum Data Solutions Inc, USA)