Job Description

  • Proficient in building optimized, scalable data pipelines by closely liaising with business stakeholders and technology leaders
  • Ability to work with large and complex data sets, data lakes and data warehouses
  • Working proficiency in building enterprise-level ETL and business analytics from a wide variety of data sources (SQL and NoSQL) and big-data technologies
  • Evaluate and adopt new technologies/tools/frameworks centered on AWS-provided solutions
  • Work with the architecture engineering team to ensure quality solutions are implemented and engineering best practices are adhered to
  • Drive a culture of collaborative reviews of designs, code and test plans
  • Ability to implement in-depth data management, privacy and security measures
  • Working experience in building open-source analytical and reporting tools on modern AWS cloud data pipelines to address KPIs and operational efficiencies
  • Qualifications
    1. A minimum of 5+ years of hands-on experience in a data engineering role (programming experience in Python/Spark frameworks is a must). Total experience of close to or more than a decade in any software domain.
    2. Strong analytical skills for working with structured (SQL) and unstructured (NoSQL) datasets.
    3. Hands-on experience with data scripting/statistical analysis languages: Python, R
    4. Hands-on experience with the AWS analytics ecosystem (Glue, Data Pipeline, Redshift, Kinesis, S3, QuickSight) is a must. Experience with equivalent solutions on other cloud platforms is also valued.
    5. Thorough working experience building statistical APIs on top of analytics services using GraphQL or REST.
    Required Skills: Python, Spark, Airflow, AWS ecosystem, Kafka, SQL & NoSQL databases, SneaQL
    Nice to have
    1. AWS Certified Data Analytics – Specialty certification
    2. Attention to detail and strong follow through
    3. Proficiency in Scrum/Kanban methodologies
    4. Excellent verbal and written communication skills