Futurense: Data Engineer
Brief Description of position:
Job Profile: The data engineer will prepare and transform data using pipelines. This involves extracting data from various source systems, transforming it in a staging area, and loading it into a data warehouse. This process is known as ETL (Extract, Transform, Load). The role involves organizing the collection, processing, and storage of data from different sources, and requires in-depth knowledge of Python, SQL, and other database solutions.
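The ETL flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: it uses the standard-library sqlite3 module as a stand-in for both the source system and the warehouse, and the table and column names (raw_sales, warehouse_sales) are hypothetical.

```python
# Minimal ETL sketch. sqlite3 stands in for the source system and the
# warehouse; in practice these would be separate databases or services.
import sqlite3

def extract(conn):
    # Extract: pull raw rows from the source system
    return conn.execute("SELECT id, amount FROM raw_sales").fetchall()

def transform(rows):
    # Transform: drop invalid rows and normalize amounts to integer cents
    return [(rid, round(amount * 100)) for rid, amount in rows if amount is not None]

def load(conn, rows):
    # Load: write the cleaned rows into the warehouse table
    conn.executemany("INSERT INTO warehouse_sales VALUES (?, ?)", rows)
    conn.commit()

# Demo with an in-memory database (illustrative data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (id INTEGER, amount REAL)")
conn.execute("CREATE TABLE warehouse_sales (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                 [(1, 9.99), (2, None), (3, 4.50)])

load(conn, transform(extract(conn)))
print(conn.execute("SELECT * FROM warehouse_sales").fetchall())  # [(1, 999), (3, 450)]
```

Real pipelines add scheduling, error handling, and incremental loads, but the extract/transform/load separation stays the same.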
Responsibilities:
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
Skills Required:
1. Working knowledge of ETL on cloud (Azure / AWS / GCP)
2. Expert in Python (programming/scripting)
3. Good understanding of data warehousing concepts in any platform (Snowflake / AWS Redshift / Azure Synapse Analytics / Google BigQuery / Hive)
4. In-depth understanding of the principles of database structure
5. Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Expert in SQL
7. Knowledge of change management and version control tools (VSS / DevOps / TFS / GitHub / Bitbucket) and CI/CD (Jenkins)