Senior Associate L2 – Data Engineering
Publicis Sapient Overview
At Publicis Sapient, we enable our clients to thrive in Next and to create business value through expert strategies, customer-centric experience design, and world-class product engineering.
The future of business is disruptive, transformative and becoming digital to the core.
In our 20+ years in IT, never before have we seen such a dire need for transformation in every major industry – from financial services to automotive, consumer products, retail, energy, and travel.
To make this transformative journey a reality in these exciting times, we seek thought leaders and rock stars.
Our people thrive because of the belief that it is both our privilege and responsibility to usher our clients and the world into Next, and that belief fuels our work.
If that’s you, come talk to us!
This is the world-class engineering team where you should build your career.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and you will independently drive design discussions to ensure the health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
Role & Responsibilities:
Your role is focused on the design, development, and delivery of data engineering solutions.
Experience Guidelines:
Mandatory Experience and Competencies:
1. Overall 5+ years of IT experience, with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in big data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery
6. Working knowledge of data-platform-related services, IAM, and data security on at least one cloud platform
Preferred Experience and Knowledge (Good to Have):
1. Good knowledge of, and hands-on experience with, traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)
2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation
3. Knowledge of distributed messaging frameworks such as ActiveMQ, RabbitMQ, and Solace, as well as search and indexing and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infrastructure provisioning on cloud, automated build and deployment pipelines, and code quality
6. Cloud data specialty and other related big data technology certifications
Personal Attributes: