
Ware - £600 - £660 per Day Contract. Posted by: Elevate Direct. Posted: Tuesday, 30 June 2020
Data Engineer: Talend, Azure, Power BI, Docker/Kubernetes

Job Purpose

The data engineer, an emerging role in GSK's PSC DDA team, will play a pivotal role in operationalizing data and analytics for GSK's digital business initiatives. The bulk of the data engineer's work will be building, managing and optimizing data pipelines, and then moving these pipelines effectively into production for key data and analytics consumers (such as business/data analysts, data scientists, or any persona that needs curated data for data and analytics use cases).

Data engineers also need to guarantee compliance with data governance and data security requirements while creating, improving and operationalizing these integrated and reusable data pipelines. This will enable faster data access, integrated data reuse and vastly improved time-to-solution for GSK's DDA analytics initiatives. The data engineer will be measured on their ability to integrate analytics and/or data science results into GSK's business processes.

The newly hired data engineer will be the key interface in operationalizing data and analytics on behalf of the business unit(s) and in delivering organizational outcomes. This role will require both creative and collaborative working with Tech and the wider business. It will involve evangelizing effective data management practices and promoting better understanding of data and analytics. The data engineer will also be tasked with working with key business stakeholders, Tech experts and subject-matter experts to plan and deliver optimal analytics and data science solutions.

Key Responsibilities

Build data pipelines: Managed data pipelines consist of a series of stages through which data flows (for example, from data sources or endpoints of acquisition to integration to consumption for specific use cases). These data pipelines have to be created, maintained and optimized as workloads move from development to production for specific use cases. Architecting, creating and maintaining data pipelines will be the primary responsibility of the data engineer.
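As an illustration only (not part of the role description), the staged flow described above - acquisition, integration, consumption - can be sketched as chained functions. All names and data here are hypothetical:

```python
# Illustrative sketch of a managed data pipeline as a series of stages.
# Data flows: source/acquisition -> integration -> consumption.

def acquire(source_rows):
    """Acquisition stage: pull raw records from a source endpoint,
    dropping empty reads."""
    return [row for row in source_rows if row is not None]

def integrate(rows):
    """Integration stage: normalise and standardise records."""
    return [{**row, "name": row["name"].strip().lower()} for row in rows]

def consume(rows):
    """Consumption stage: shape curated data for a specific
    analytics use case."""
    return {row["name"]: row["value"] for row in rows}

def run_pipeline(source_rows):
    # Stages are chained so each one feeds the next.
    return consume(integrate(acquire(source_rows)))

raw = [{"name": "  Alice ", "value": 1}, None, {"name": "Bob", "value": 2}]
print(run_pipeline(raw))  # {'alice': 1, 'bob': 2}
```

In production each stage would typically be a separately monitored workload (e.g. a Talend job or an Azure Data Factory activity) rather than an in-process function, but the stage-by-stage structure is the same.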

Drive Automation through effective metadata management: The data engineer will be responsible for using innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. The data engineer will also need to assist with renovating the data management infrastructure to drive automation in data integration and management.
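One common way to automate repetitive data preparation, as described above, is to drive transformations from a metadata table instead of hand-coding each mapping. The following is a minimal hypothetical sketch; the field names and transforms are assumptions for illustration:

```python
# Metadata-driven automation: the pipeline reads declared mappings and
# applies them automatically, so adding a field means editing metadata,
# not writing new transformation code.

METADATA = [
    # (source_field, target_field, transform)
    ("CUST_NM", "customer_name", str.strip),
    ("ORD_AMT", "order_amount", float),
]

def apply_metadata(record, metadata=METADATA):
    """Build a curated record by applying each declared mapping."""
    return {target: fn(record[source]) for source, target, fn in metadata}

raw = {"CUST_NM": " Acme Ltd ", "ORD_AMT": "99.50"}
print(apply_metadata(raw))  # {'customer_name': 'Acme Ltd', 'order_amount': 99.5}
```

Because the mappings live in data rather than code, the same loop can generate many pipelines, which is what reduces the manual, error-prone work the paragraph refers to.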

Technical Knowledge/Skills

Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows.

Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies. These should include [ETL/ELT, data replication/CDC, message-oriented data movement, API design and access] and upcoming data ingestion and integration technologies such as [stream data integration, CEP and data virtualization].

Hands-on development exposure to Microsoft Azure cloud services: Spark Structured Streaming on Azure, Spark on HDInsight, Azure Stream Analytics, Azure Data Factory (v1 & v2), Azure Blob Storage, Azure Data Lake Store (Gen1/Gen2), SQL DW, as well as Denodo, Talend and others.

Hands-on Talend experience is a MUST.

Basic experience working with popular data discovery, analytics and BI software tools like [Tableau, Qlik, Power BI and others] for semantic-layer-based data discovery.

Basic experience in working with [data governance/data quality] and [data security] teams and specifically [information stewards] and [privacy and security officers] in moving data pipelines into production with appropriate data quality, governance and security standards and certification.

Demonstrated ability to work across multiple deployment environments including [cloud, on-premises and hybrid], multiple operating systems and through containerization techniques such as [Docker, Kubernetes, and others].

If you match these requirements, please apply as usual. Elevate will send you an email; please open it and action it, and your application will be visible to the hiring organisation directly. Elevate provides a route to contract and contingent assignments across many skills areas by matching your profile to relevant jobs that our customers post to the platform.

Please note that Elevate Direct is a software provider and not a recruitment agency. As such, Elevate is not involved in the recruitment processes for any employer, who uses the platform. Please contact employers directly through your Elevate profile with any queries related to your application.

Ware, UK
6 months
£600 - £660 per Day
Elevate Direct
30/06/2020 19:28:36

We strongly recommend that you never provide your bank account details to an advertiser during the job application process. Should you receive a request of this nature, please contact support, giving the advertiser's name and job reference.
