 


Job Details

 

Spark Scala Engineer (Contract)

Location: Leeds | Country: UK | Rate: £400/day
 

Spark Scala Engineer

Leeds, UK (3 days per week onsite)

Contract: 6 months+ (Inside IR35)

£400/day

Develop and maintain data pipelines:

You'll be responsible for designing, building, and maintaining data pipelines using Apache Spark and Scala.

This includes tasks like the following (a brief pipeline sketch follows this list):

  • Extracting data from various sources (databases, APIs, files)
  • Transforming and cleaning the data
  • Loading the data into data warehouses or data lakes (e.g. BigQuery, Amazon Redshift)
  • Automating the data pipeline execution using scheduling tools (e.g. Airflow)
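
For illustration, a minimal batch pipeline along these lines might look as follows in Spark/Scala. This is only a sketch: the paths and the column names (order_id, amount, order_date) are hypothetical.

  import org.apache.spark.sql.{SparkSession, functions => F}

  object OrdersEtl {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

      // Extract: read raw CSV records (source path is hypothetical)
      val raw = spark.read.option("header", "true").csv("hdfs:///data/raw/orders")

      // Transform: drop incomplete rows, normalise types, de-duplicate
      val cleaned = raw
        .filter(F.col("order_id").isNotNull)
        .withColumn("amount", F.col("amount").cast("double"))
        .dropDuplicates("order_id")

      // Load: write a partitioned Parquet dataset into the lake
      cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("hdfs:///data/curated/orders")

      spark.stop()
    }
  }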

Work with Big Data technologies:

You'll likely work with various Big Data technologies alongside Spark, including the following (a brief streaming sketch follows this list):

  • Hadoop Distributed File System (HDFS) for storing large datasets
  • Apache Kafka for real-time data streaming
  • Apache Hive for data warehousing on top of HDFS
  • Cloud platforms like AWS, Azure, or GCP for deploying and managing your data pipelines
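
As a sketch of the streaming side, reading a Kafka topic with Spark Structured Streaming looks roughly like this. The broker address, topic name, and paths are hypothetical, an active SparkSession (spark) and the spark-sql-kafka connector on the classpath are assumed.

  // Read a Kafka topic as a streaming DataFrame
  val events = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders-events")
    .load()
    .selectExpr("CAST(value AS STRING) AS json")

  // Continuously land the stream as Parquet; checkpointing enables recovery
  events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/streaming/orders")
    .option("checkpointLocation", "hdfs:///checkpoints/orders")
    .start()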

Data analysis and modelling:

While the primary focus is on data engineering, some job descriptions might also require basic data analysis skills: writing analytical queries using SQL or Spark SQL to analyze processed data, and building simple data models to understand data relationships.
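
For example, a typical analytical query over the curated dataset might look like this in Spark SQL. The table and column names are illustrative and an active SparkSession is assumed.

  // Register the curated dataset as a temporary view
  val orders = spark.read.parquet("hdfs:///data/curated/orders")
  orders.createOrReplaceTempView("orders")

  // Analytical query: daily revenue per country
  spark.sql(
    """SELECT order_date, country, SUM(amount) AS revenue
      |FROM orders
      |GROUP BY order_date, country
      |ORDER BY order_date""".stripMargin
  ).show()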

Required Skills:

  • Programming languages: Proficiency in Scala and Spark is essential. Familiarity with Python and SQL is often a plus.
  • Big Data technologies: Understanding of HDFS, Kafka, Hive, and cloud platforms is valuable.
  • Data engineering concepts: Knowledge of data warehousing, data pipelines, data modelling, and data cleansing techniques is crucial.
  • Problem-solving and analytical skills: You should be able to analyze complex data problems, design efficient solutions, and troubleshoot issues.
  • Communication and collaboration: The ability to communicate effectively with data scientists, analysts, and business stakeholders is essential.

Desired Skills (may vary):

  • Machine learning libraries: Familiarity with Spark ML or other machine learning libraries in Scala can be advantageous (a brief sketch follows this list).
  • Cloud computing experience: Experience with cloud platforms like AWS, Azure, or GCP for deploying data pipelines is a plus.
  • DevOps tools: Knowledge of DevOps tools like Git, CI/CD pipelines, and containerization tools (Docker, Kubernetes) can be beneficial.
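
As a sketch of what Spark ML usage can look like in Scala, a minimal regression pipeline might be written as follows. The feature columns (amount, quantity), the label column (delivery_days), and the orders DataFrame are hypothetical.

  import org.apache.spark.ml.Pipeline
  import org.apache.spark.ml.feature.VectorAssembler
  import org.apache.spark.ml.regression.LinearRegression

  // Assemble numeric feature columns into a single vector column
  val assembler = new VectorAssembler()
    .setInputCols(Array("amount", "quantity"))
    .setOutputCol("features")

  // Fit a simple linear regression against a hypothetical label column
  val lr = new LinearRegression()
    .setLabelCol("delivery_days")
    .setFeaturesCol("features")

  val model = new Pipeline().setStages(Array(assembler, lr)).fit(orders)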

Posted Date: 22 Mar 2024 | Reference: JS | Company: Axiom Software Solutions Ltd | Contact: Ashish Singh