- You will write code for ETL needs using Python or PySpark.
- You will build and deploy data pipelines in the cloud.
- You will ensure that the data the organization works with is protected.
- You will build automated data ingestion systems that retrieve data from the data lake.
- You will automate deployments.
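As a rough illustration of the ETL work described above, here is a minimal sketch in plain Python. The record fields, the filtering rule, and the in-memory "load" target are hypothetical examples, not part of the actual role; a real pipeline would read from the data lake and write to a warehouse.

```python
# Minimal extract/transform/load sketch. All field names ("customer",
# "amount", "status") and the filter condition are illustrative only.
import csv
import io


def extract(raw_csv: str) -> list[dict]:
    """Parse raw CSV text into a list of records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(records: list[dict]) -> list[dict]:
    """Keep approved applications and normalize the amount field to float."""
    return [
        {"customer": r["customer"], "amount": float(r["amount"])}
        for r in records
        if r["status"] == "approved"
    ]


def load(records: list[dict]) -> dict[str, float]:
    """'Load' into an in-memory dict keyed by customer (a stand-in
    for a warehouse or database write)."""
    return {r["customer"]: r["amount"] for r in records}


raw = "customer,amount,status\nacme,1000,approved\nglobex,500,rejected\n"
store = load(transform(extract(raw)))
print(store)  # {'acme': 1000.0}
```

In PySpark the same shape would typically become a `spark.read` call, DataFrame transformations, and a `.write` to the target store.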
- You have at least 4 years of experience as an Azure Data Engineer.
- You have experience programming with Python, PySpark, and Spark SQL.
- You have experience with Azure Pipelines, YAML, Azure CLI, CI/CD, and PowerShell.
- You have experience with Azure Databricks.
- You have experience with SQL databases and data warehouses.
- You have knowledge of object-oriented principles and design patterns.
- You have experience with NoSQL solutions such as Cosmos DB or Neo4j (nice to have).
- You have good communication skills and are a team player.
- You have excellent oral and written communication skills in English.
- You are analytical, result-driven, and proactive.
- Holiday pay of 8% of gross annual salary;
- 25 vacation days, plus a 0.5 bonus day per quarter if you remain fit and healthy (read: do not report sick);
- Travel allowance of 19 cents per kilometer;
- An e-learning portal offering (almost) any IT training or course you can propose;
- 50 euros for medical expenses (paid gross on top of your salary!).
We are looking for a Data Engineer to deliver end-to-end solutions for different stakeholders within the organization. You will work in a team that uses data to determine the risks the organization faces when customers, both businesses and private individuals, apply for loans. Together with your team, you will be in charge of building the logic behind this risk assessment.