Join our mission
Intuit is a global technology platform that helps our customers and communities overcome their most important financial challenges. We help give over 50 million consumer, small business, and self-employed customers around the world the opportunity to prosper.
Overview
Intuit is a leading software provider of business and financial management solutions for small and mid-sized businesses, consumers, financial institutions, and accounting professionals. You probably know us by our flagship products, QuickBooks®, Quicken® and TurboTax®, but that’s just the start. Over 50 million users, seven million small businesses and 1,600 financial institutions depend on Intuit because we innovate at the crossroads of real customer problems and breakthrough technology. Join us and let your ingenious ideas find expression.
Technology leaders at Intuit think strategically and drive for results. They build high-performing teams by putting the right people on the right job at the right time. They lead their teams to embrace new ideas that produce outstanding results for our customers. Leaders at Intuit inspire others through action by creating a spirit of collaboration.
Come join the Intuit Data Fabric (IDF) team as a Staff Engineer. The IDF team owns the Intuit Analytics Platform, the foundation of big data at Intuit, which enables real-time data ingestion, cataloging, analytics, and machine learning on all of Intuit’s data. With Intuit’s customers growing rapidly year over year, the volume of data we handle is ever increasing, and the excellent data engineering we do in IDF helps Intuit keep pace with this volume and leverage it for machine learning and data-driven product innovation.
What you’ll bring

  • 12+ years of relevant experience, with at least 5 years in the big data domain
  • Experience architecting an end-to-end (E2E) ecosystem for big data and analytics platforms
  • Expert-level experience building fault-tolerant, scalable big data platforms and solutions, primarily on the Hadoop ecosystem
  • Expert-level experience with Java and Scala programming
  • Expert-level experience designing high-throughput data services
  • Experience with big data technologies (Hive, HBase, Spark, Kafka, Storm, MapReduce, HDFS, Splunk, ZooKeeper, MemSQL, Cassandra, Redshift, graph databases), and an understanding of the concepts and technology ecosystem around both real-time and batch processing in Hadoop
  • Strong communication skills
  • BE/BTech/MS in Computer Science (or equivalent)
  • Employs effective listening and demonstrates strong collaboration to lead change by example and through influence

How you will lead

  • Architect, design, and build fault-tolerant, scalable big data platforms and solutions, primarily based on open-source technologies.
  • Build solid, scalable architectures to address typical data catalog use cases such as normalization, lineage, data governance, and ontology.
  • Design solutions that involve complex, multi-system integration, possibly across business units or domains.
  • Work with Analysts and Data Scientists to identify datasets needed for deep customer insights and for building operational propensity models.
  • Contribute hands-on to business logic using the Hadoop ecosystem (Java MapReduce, Spark, Scala, HBase, Hive) and build frameworks to support data pipelines for streaming applications (see the sketch after this list).
  • Work with NoSQL, SQL, and in-memory platforms.
  • Conduct code reviews to ensure code quality, consistency, and adherence to best practices.
  • Drive alignment between enterprise architecture and business needs.
  • Conduct quick proofs of concept (POCs) for feasibility studies and take them to production.
  • Lead fast-moving development teams using agile methodologies.
  • Lead by example, demonstrating best practices for unit testing, CI/CD, performance testing, capacity planning, documentation, monitoring, alerting, and incident response.
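
For illustration, below is a minimal sketch of the kind of streaming ingestion pipeline described above, written in Scala with Spark Structured Streaming. The Kafka brokers, topic name, event schema, and storage paths are hypothetical placeholders, not details of Intuit's actual platform.

// Hypothetical sketch: a Spark Structured Streaming job that ingests JSON
// events from Kafka and lands them as partitioned Parquet files. Brokers,
// topic, schema, and paths are placeholders, not Intuit's real pipeline.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json, to_date}
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object StreamingIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-ingest-sketch")
      .getOrCreate()

    // Assumed event schema; a real platform would resolve this from a catalog.
    val eventSchema = new StructType()
      .add("eventId", StringType)
      .add("userId", StringType)
      .add("eventTime", TimestampType)
      .add("payload", StringType)

    // Read raw Kafka records and parse the JSON value into typed columns.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder
      .option("subscribe", "clickstream-events")        // placeholder topic
      .load()
      .select(from_json(col("value").cast("string"), eventSchema).as("event"))
      .select("event.*")

    // Partition output by event date so batch consumers (Hive, Spark) can
    // prune partitions; checkpointing gives exactly-once file output.
    val query = events
      .withColumn("dt", to_date(col("eventTime")))
      .writeStream
      .format("parquet")
      .option("path", "s3a://example-lake/clickstream/")        // placeholder
      .option("checkpointLocation", "s3a://example-lake/_chk/") // placeholder
      .partitionBy("dt")
      .start()

    query.awaitTermination()
  }
}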
