Publicis Sapient : Associate L1 – Data Engineering

Brief Description of position:

Publicis Sapient Overview

At Publicis Sapient, we enable our clients to thrive in Next and to create business value through expert strategies, customer-centric experience design, and world-class product engineering.

The future of business is disruptive, transformative, and digital to the core.

In our 20+ years in IT, never before have we seen such a dire need for transformation in every major industry - from financial services to automotive, consumer products, retail, energy, and travel.

To make this transformative journey a reality in these exciting times, we seek thought leaders and rock stars who will: 

  • Brave it out to go do the next: to create "what will be" from "what is"
  • Exhibit the optimism that says there is no limit to what we can achieve
  • Be deeply skilled, bold, collaborative, and flexible
  • Reimagine the way the world works to help businesses improve the daily lives of people and the world

Our people thrive because of the belief that it is both our privilege and responsibility to usher our clients and the world into Next.

Our work is fueled by 

  • challenging boundaries, 
  • multidisciplinary collaboration, 
  • highly agile teams, and 
  • the power of the newest technologies and platforms.

If that’s you, come talk to us!

This is the world-class engineering team where you should build your career.

Job Summary: 

As an Associate L1 in Data Engineering, you will implement components of data engineering solutions. The role requires a hands-on technologist with a programming background in Java, Scala, or Python who can write code for data ingestion, integration, and transformation with minimal oversight. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.

Role & Responsibilities:

Your role is focused on the design, development, and delivery of solutions involving:

  • Data Ingestion, Integration and Wrangling
  • Data Storage and Computation Frameworks
  • Infrastructure & Cloud Computing
  • Build functionality for data ingestion from multiple heterogeneous sources
  • Build functionality for data analytics, search and aggregation
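To illustrate the kind of work the responsibilities above describe - ingesting records from heterogeneous sources, normalizing them, and aggregating the result - here is a minimal Python sketch. It uses only the standard library and hypothetical inline sources; a production pipeline would typically run on Spark or a similar framework:

```python
import csv
import io
import json
from collections import defaultdict

# Hypothetical inline sources standing in for real feeds (files, APIs, queues).
CSV_SOURCE = "user,amount\nalice,10\nbob,5\nalice,7\n"
JSON_SOURCE = '[{"user": "bob", "amount": 3}, {"user": "carol", "amount": 8}]'

def ingest_csv(text):
    """Yield normalized records from a CSV source."""
    for row in csv.DictReader(io.StringIO(text)):
        yield {"user": row["user"], "amount": int(row["amount"])}

def ingest_json(text):
    """Yield normalized records from a JSON source."""
    for obj in json.loads(text):
        yield {"user": obj["user"], "amount": int(obj["amount"])}

def aggregate(records):
    """Sum amounts per user across all ingested records."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["user"]] += rec["amount"]
    return dict(totals)

if __name__ == "__main__":
    combined = list(ingest_csv(CSV_SOURCE)) + list(ingest_json(JSON_SOURCE))
    print(aggregate(combined))  # alice: 17, bob: 8, carol: 8
```

The key idea mirrors the role description: each source-specific reader normalizes records into one common shape, so the downstream aggregation step stays independent of where the data came from.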


Experience Guidelines: 

Mandatory Experience and Competencies: 

 1. Overall 1+ years of IT experience in data-related technologies

 2. 6 months of training or an internship is required

 3. Hands-on experience with the Hadoop stack - HDFS, Sqoop, Spark, Hive, Oozie, Airflow, and other components required in developing data pipelines

 4. Working experience in at least one of the programming languages Java, Scala, or Python; Java preferable

Preferred Experience and Knowledge (Good to Have):

 1. Working knowledge of data platform related services on any one cloud platform


Personal Attributes: 

  • Strong written and verbal communication skills
  • Articulation skills
  • Good team player
  • Self-starter who requires minimal oversight
  • Ability to prioritize and manage multiple tasks
  • Process orientation and the ability to define and set up processes

Minimum Work Experience: 1 year

Mandatory Skill Set: Azure, GCP, Cloud Computing, Python, Hadoop, Java, Sqoop, Scala, Spark, AWS
