Cogito : Data Engineer

Brief Description of position:

Base location: Pune. Open to remote working options.

AI coaching that goes beyond traditional speech analytics

Cogito (www.cogitocorp.com) is the market leader in human-centred AI coaching, with over a decade of advanced R&D and hundreds of millions of analysed phone conversations behind it. Our combination of rich human behaviour insights with real-time streaming natural language processing, unique to Cogito, delivers the best of both worlds: a truer understanding of employee behaviour and customer sentiment in real time.

Cogito is advancing the way people communicate by applying machine learning to enhance conversations in real time. This is your chance to be part of an industry-leading team that is making a real difference, using the latest cloud technologies to build highly scalable products. Cogito performs in-call voice analysis, delivering real-time guidance to call centre agents and unprecedented insights to managers across an impressive portfolio of Fortune 500 clients.

Cogito is searching for a Data Engineer to join our organization. The ideal candidate has a mix of data modelling, dimensional modelling and analytical skills, combined with strong software engineering and QA testing skills.

You have experience participating in the product design phase, modelling and documenting data requirements. You can translate data requirements into database schemas and document the models. You are good at establishing data standards and creating data dictionaries and business glossaries to provide clarity around business and data definitions. You enjoy working with various teams, such as Data Science, Customer Success and Customer Support, to support their data needs.
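
For illustration, a minimal sketch of translating a documented data requirement into a PostgreSQL schema, assuming SQLAlchemy 1.4+; the calls table and every column on it are hypothetical examples, not Cogito's actual data model:

    """Hypothetical schema sketch: one row per analysed phone conversation."""
    from sqlalchemy import Column, DateTime, Integer, Numeric, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()


    class Call(Base):
        """Each column carries a comment so documentation lives with the schema."""

        __tablename__ = "calls"

        call_id = Column(Integer, primary_key=True)
        agent_id = Column(Integer, nullable=False, comment="FK to the agent dimension")
        started_at = Column(DateTime(timezone=True), nullable=False)
        duration_s = Column(Integer, nullable=False, comment="Call length in seconds")
        sentiment = Column(Numeric(4, 3), comment="Illustrative model score in [-1, 1]")
        disposition = Column(String(32), comment="Business-glossary term, e.g. 'resolved'")

Keeping column comments in the model is one way the schema, the data dictionary and the business glossary stay in sync.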

You have worked on operational datastores, data warehouses, data marts and data lakes in your previous roles, and understand the role each of these plays in an enterprise SaaS environment. You have experience with data migrations and have successfully moved large production workloads from legacy systems to a Kubernetes-based containerized microservices architecture. You possess very strong troubleshooting skills and have developed software using CI/CD pipelines in a cloud-native Kubernetes environment.

You have strong software engineering skills: you are very confident writing good code and have solid experience writing unit tests, integration tests and test fixtures. You have experience writing ETL and data pipeline jobs, packaging them into Docker containers and running them in Kubernetes-based environments. You have used Apache Airflow (or AWS Glue) to create high-performing data pipelines in the past.
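
For a flavour of this work, a minimal sketch of a daily Airflow pipeline, assuming Airflow 2.4+ with the TaskFlow API; the DAG name, stub records and metrics are illustrative only:

    """Hypothetical DAG sketch: extract call records, aggregate, load metrics."""
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["etl"])
    def call_metrics_etl():
        @task
        def extract() -> list[dict]:
            # A real task would read from an operational datastore; this is a stub batch.
            return [{"call_id": 1, "duration_s": 312}, {"call_id": 2, "duration_s": 95}]

        @task
        def transform(records: list[dict]) -> dict:
            # Aggregate a simple metric; production jobs might use Pandas or SQL here.
            total = sum(r["duration_s"] for r in records)
            return {"calls": len(records), "avg_duration_s": total / len(records)}

        @task
        def load(metrics: dict) -> None:
            # A real task would upsert into a warehouse table; printing stands in here.
            print(f"Loading metrics: {metrics}")

        load(transform(extract()))


    call_metrics_etl()

In practice a job like this would be packaged into a Docker container and run on Kubernetes (for example via Airflow's KubernetesExecutor), with the DAG definition unchanged.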


Responsibilities: 

  • Delight internal and external customers with responsive support and highly performant, well-maintained ETL tools, data pipelines and Kubernetes-based production infrastructure. Provide timely resolution of customer concerns and issues. Troubleshoot software, data pipeline and infrastructure issues as needed.
  • Maintain PCI, HITRUST, HIPAA and SOC 2 status by maintaining the tools and systems you are responsible for, keeping the software updated and supporting our security team during security audits.
  • Continuously improve the reliability and cost efficiency of our data services and infrastructure.
  • Contribute to the Data Engineering engagement model, participate in production readiness reviews and improve our operational processes to enable company growth and international expansion.
  • Automate processes and practices to manage the data engineering software lifecycle and configurations to client specifications.
  • Design and implement technical solutions to meet customer requirements, and communicate them to a broad range of stakeholders within the business.

Requirements: 

  • Bachelor's degree in CS/IS/IT or a related field, or equivalent experience
  • 5+ years in a Data Engineer or equivalent role 
  • Willingness to learn new technologies and skills on the fly 
  • Demonstrated history of working in environments with any of the following compliance standards: PCI, HITRUST, HIPAA, Sarbanes-Oxley, ISO 27002, CIS L1 & L2

Skills: 

  • Strong software engineering skills: very confident writing good code, with experience writing unit tests, integration tests and test fixtures (see the sketch after this list)
  • Experience writing ETL and data pipeline jobs
  • Strong experience in SQL (PostgreSQL and MySQL)
  • Experience in building microservices / SOA 
  • Experience in stream processing
  • Experience with ETL design and tools 
  • Strong Python/Pandas and Java experience 
  • Experience with a CI/CD tool such as Jenkins, Travis CI, etc.
  • Excellent communication and documentation skills
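
To make the testing bar concrete, a minimal sketch of a unit test built around a pytest fixture for a small Pandas transform; avg_duration_by_agent and the sample data are hypothetical:

    """Hypothetical test sketch: a fixture supplies a known batch, the test pins the output."""
    import pandas as pd
    import pytest


    def avg_duration_by_agent(calls: pd.DataFrame) -> pd.DataFrame:
        # The kind of small, isolated transform an ETL job is composed of.
        return calls.groupby("agent_id", as_index=False)["duration_s"].mean()


    @pytest.fixture
    def calls() -> pd.DataFrame:
        # Test fixture: a small input batch with a known expected aggregate.
        return pd.DataFrame({"agent_id": [1, 1, 2], "duration_s": [100, 200, 90]})


    def test_avg_duration_by_agent(calls: pd.DataFrame) -> None:
        result = avg_duration_by_agent(calls)
        assert result.loc[result["agent_id"] == 1, "duration_s"].item() == 150.0
        assert result.loc[result["agent_id"] == 2, "duration_s"].item() == 90.0
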
Minimum Qualification:
Graduate