Publicis Sapient: Senior Associate L1 - Data Engineering

Brief Description of position:

Publicis Sapient Overview

At Publicis Sapient, we enable our clients to thrive in Next and to create business value through expert strategies, customer-centric experience design, and world-class product engineering.

The future of business is disruptive, transformative, and digital to the core.

In our 20+ years in IT, never before have we seen such a dire need for transformation in every major industry - from financial services to automotive, consumer products, retail, energy, and travel.

To make this transformative journey a reality in these exciting times, we seek thought leaders and rock stars who will: 

  • brave it out to go do what's next, moving from “what is” to “what will be”
  • exhibit the optimism that says there is no limit to what we can achieve
  • be deeply skilled, bold, collaborative, and flexible
  • reimagine the way the world works to help businesses improve the daily lives of people and the world

Our people thrive because of the belief that it is both our privilege and responsibility to usher our clients and the world into Next.

Our work is fueled by 

  • challenging boundaries, 
  • multidisciplinary collaboration, 
  • highly agile teams, and 
  • the power of the newest technologies and platforms.

If that’s you, come talk to us!

This is the world-class engineering team where you should build your career.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will create technical designs and implement components of data engineering solutions. You will apply a deep understanding of data integration and big data design principles to build custom solutions or implement packaged solutions, and you will independently drive design discussions to ensure the health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.

Role & Responsibilities:

Your role focuses on the design, development, and delivery of solutions involving:

  • Data Ingestion, Integration and Transformation
  • Data Storage and Computation Frameworks, Performance Optimizations
  • Analytics & Visualizations
  • Infrastructure & Cloud Computing
  • Data Management Platforms
  • Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
  • Build functionality for data analytics, search and aggregation

Experience Guidelines:

Mandatory Experience and Competencies: 

  1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies
  2. Minimum 1.5 years of experience in Big Data technologies
  3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
  4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
  5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

Preferred Experience and Knowledge (Good to Have):

  1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience
  2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra, Alation, etc.
  3. Knowledge of distributed messaging frameworks such as ActiveMQ, RabbitMQ, or Solace; search and indexing; and microservices architectures
  4. Performance tuning and optimization of data pipelines
  5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality
  6. Working knowledge of data platform related services on at least one cloud platform, IAM, and data security
  7. Cloud data specialty and other related Big Data technology certifications

Personal Attributes: 

  • Strong written and verbal communication and articulation skills
  • Good team player
  • Self-starter who requires minimal oversight
  • Ability to prioritize and manage multiple tasks
  • Process orientation and the ability to define and set up processes
Location:

Gurgaon, Bangalore