Job Description
Purpose of the Role:
To design, build, and maintain the systems that collect, process, and analyse data – including pipelines, data warehouses, and data lakes – ensuring data is accurate, accessible, and secure.
Key Responsibilities:
- Develop, maintain, and optimise scalable data pipelines and data storage solutions.
- Design and implement data warehousing and data lake architectures.
- Ensure robust data governance, security, and compliance across all systems.
- Collaborate with cross-functional teams to gather requirements and deliver solutions that support business objectives.
- Apply machine learning and data mining techniques where appropriate to add value.
- Support change and transformation initiatives by providing reliable data systems.
- Identify risks, implement controls, and follow secure coding practices.
- Provide strategic input on data architecture, technology adoption, and best practices.
Skills & Competencies:
- Strong knowledge of database structures, data modelling, and data engineering practices.
- Proficiency in requirements analysis, problem-solving, and strategic thinking.
- Ability to work collaboratively with technical and non-technical stakeholders.
- Business acumen with a focus on delivering data-driven value.
- Strong understanding of risk, controls, and compliance in data management.
Technical Skills:
- Hands-on experience with Python, PySpark, and SQL.
- Experience with AWS (preferred).
- Knowledge of data warehousing (DW) concepts and ETL processes.
- Familiarity with DevOps principles and secure coding practices.
Experience:
- Proven track record in data engineering, data governance, and large-scale data systems.
- Experience working on change and transformation projects.
- A background in applying machine learning or data mining is an advantage.