Experience: 3 - 6 years
CTC: 18 - 20 LPA
Must-haves: Big data, Java, Python, AWS, Docker, Kubernetes, H2O
Highly preferred: Candidates based in Chennai
Candidates from the e-commerce, Product, BFS, or Telecom domains only
Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (a minimal sketch follows this list).
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Create data tools for the analytics and data science teams that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
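
For illustration only (not a requirement): a minimal PySpark sketch of the kind of extract-transform-load work described above. The bucket names, paths, and ‘orders’ event schema are hypothetical placeholders, and a configured Spark/S3 environment is assumed.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical example: bucket names, paths, and columns are placeholders.
    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw JSON events from S3.
    raw = spark.read.json("s3a://example-raw-bucket/orders/")

    # Transform: keep completed orders and compute a daily revenue aggregate.
    daily_revenue = (
        raw.filter(F.col("status") == "COMPLETED")
           .withColumn("order_date", F.to_date("created_at"))
           .groupBy("order_date")
           .agg(F.sum("amount").alias("revenue"))
    )

    # Load: write the aggregate back to S3 as partitioned Parquet.
    (daily_revenue.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://example-curated-bucket/daily_revenue/"))

    spark.stop()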
Qualifications:
- BS or MS degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Advanced SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of database technologies.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytical skills related to working with unstructured datasets.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores (see the stream-processing sketch after this list).
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
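
For illustration only: a minimal stream-processing sketch using the kafka-python client, assuming a hypothetical ‘orders’ topic, a broker at localhost:9092, and the same hypothetical event schema as the sketch above.

    import json
    from kafka import KafkaConsumer

    # Hypothetical example: topic name, broker address, and fields are placeholders.
    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
        group_id="revenue-aggregator",
    )

    # Consume events and keep a running revenue total per order date.
    revenue_by_date = {}
    for message in consumer:
        event = message.value
        if event.get("status") == "COMPLETED":
            order_date = event["created_at"][:10]
            revenue_by_date[order_date] = (
                revenue_by_date.get(order_date, 0.0) + event["amount"]
            )
            print(order_date, revenue_by_date[order_date])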
- Technology experience:
- Big data tools: Hadoop, Spark, Kafka, etc.
- Relational SQL and NoSQL databases, including Postgres and Cassandra.
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (an example Airflow sketch appears after this list).
- AWS cloud services (and/or GCP/Azure): EC2, S3, RDS, Redshift, Lambda.
- Stream-processing systems: Storm, Spark Streaming, etc.
- Container technologies: Docker, Kubernetes
- Big data analysis software: H2O
- Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
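
For illustration only: a minimal Airflow (2.4+) sketch of the workflow-definition pattern the tools above are used for. The DAG id and task callables are hypothetical placeholders, not a production pipeline.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical example: these callables stand in for real ETL steps.
    def extract():
        print("pull raw data from source systems")

    def transform():
        print("clean and aggregate the raw data")

    def load():
        print("write curated data to the warehouse")

    # A daily DAG wiring the three steps in sequence.
    with DAG(
        dag_id="example_daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load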