
Data Engineer



We are the owners of the Enterprise Data Hub - a Hadoop implementation based upon Hive and Spark, currently favouring Scala. Lloyds Banking Group has also entered a strategic partnership with Google to utilise Google Cloud Platform (GCP) services, creating a new strategic platform for the Group as we prepare our Bank of the Future service for customers. We anticipate significant opportunities to review our systems, data processing methods and approaches, and you would be a key part of that work.



Your responsibilities:

·Create solutions that ingest data from source systems into our big data platform, where the data is transformed, intelligently curated and made available for consumption by downstream operational and analytical processes (see the sketch after this list).

·Put efficiency and innovation at the heart of the design process to create design blueprints (patterns) that can be re-used by other teams delivering similar types of work.

·Build high-quality code that can process large volumes of data efficiently at scale.

·Create code that is in line with team, industry and group best practice, using a wide array of engineering tools such as GHE (GitHub Enterprise), Jenkins, UrbanCode, Cucumber, Xray etc.

·Work as part of an Agile team, taking part in relevant ceremonies and always helping to drive a culture of continuous improvement.

·Work across the full software delivery lifecycle, from requirements gathering and definition through design, estimation, development, testing and deployment, ensuring solutions are of high quality and non-functional requirements are fully considered.

·Scrutinise platform resource requirements throughout the development lifecycle with a view to minimising resource consumption.

·Use modern engineering techniques such as DevOps, automation and Agile to deliver big data applications efficiently.

·Once cloud adoption is proven within the bank, help to successfully transition on-premises applications and working practices to GCP.
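
To give a flavour of this work, below is a minimal sketch of an ingest-and-curate job in Spark with Scala (the stack the platform currently favours). The landing path, column names and target table are hypothetical placeholders for illustration, not the Group's actual systems.

import org.apache.spark.sql.{SparkSession, functions => F}

object CustomerIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("customer-ingest")
      .enableHiveSupport()
      .getOrCreate()

    // Read raw extracts from a hypothetical HDFS landing zone.
    val raw = spark.read
      .option("header", "true")
      .csv("/data/landing/customers")

    // Curate: standardise the data and tag each record with its load date.
    val curated = raw
      .withColumn("customer_name", F.trim(F.col("customer_name")))
      .withColumn("load_date", F.current_date())

    // Publish as a partitioned Hive table for downstream operational
    // and analytical consumers.
    curated.write
      .mode("overwrite")
      .partitionBy("load_date")
      .format("parquet")
      .saveAsTable("curated.customers")

    spark.stop()
  }
}

A real pipeline would add schema enforcement, data quality checks and incremental loads, but the ingest-transform-publish shape above is the reusable pattern these responsibilities describe.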


We require:

·Experience working in Java or Scala.

·Nice to have: knowledge of Hive, Pig, Sqoop and data transfer technologies such as Kafka, Attunity and CDC.

·Strong experience of technical development on big data systems (large-scale Hadoop, Spark, Beam, Flume or similar data processing paradigms), with associated data transformation and ETL experience.

·Expertise in GCP or another cloud platform.

·A genuine interest in data and technology.

·A good team player with a strong team ethos.

·Excellent communication skills, able to communicate with technical and non-technical colleagues alike.


We offer:

·A set of social benefits to choose from.

·A training programme.

·Work in a multinational company.

·Participate in international projects and gain experience.



ID.: DEBE

Ref. no.: 39309


