Data Engineer

Sydney, New South Wales, Australia | Engineering | Full-time


Founded in 2002, Quantium combines the best of human and artificial intelligence to power possibilities for individuals, organisations and society. Our solutions make sense of what has happened and what will, could or should be done to re-shape industries and societies around the needs of the people they serve. 

As one of the world’s fully diversified data science and AI leaders, we operate across every sector of the economy and we’re growing fast - with growth comes opportunity! We’re passionate about building out our team of smart, fun, diverse and motivated people. 

We combine a team of experts that spans data scientists, actuaries, statisticians, business analysts, strategy consultants, engineers, technologists, programmers, product developers, and futurists – all dedicated to harnessing the power of data to drive transformational outcomes for our clients. 

We actively foster a culture where our people can stretch themselves to reach their full potential. We also know that work has to work for you, and modern life is fast-paced and balance can be tricky. You want to work where you are respected and valued as an individual, not a number. Quantium embraces a flexible and supportive environment dedicated to powering possibilities for our team members, clients and partners. 

We’re looking for Data Engineers who know their way around the Hadoop ecosystem, with Spark at the top of their go-to frameworks. You will have a data engineering mindset and some experience in Scala, Java or Python.

Key activities:

  • Develop big data solutions using Hadoop, Spark, Scala and Java
  • Create and schedule complex workflows
  • Implement cloud-based data warehouse solutions using Snowflake
  • Develop ETL solutions using SSIS
  • Create and maintain database objects in SQL Server and Teradata environments
  • Re-write existing logic, wherever applicable, to improve query and report performance

We work in multi-discipline teams, so you’ll be working alongside Data Scientists, Analysts, Testers and DevOps engineers.

Skills & Requirements

Do you have a bullet point checklist for me to check off my suitability?

We know people are more than bullet points, but sure thing!

You have:

  • Experience building data pipelines (Hadoop, Spark, Scala, Java, Python)
  • Expertise in performance analysis, query and workload tuning, index optimization, etc.
  • Confidence working with very large, complex datasets
  • Solid understanding of database design and dimensional modelling principles (Kimball)
  • A passion for solving problems and writing efficient algorithms
  • An awareness of considerations around structuring data on distributed systems to support analytic use cases
  • A passion for delivering high-quality, peer-reviewed, well-tested code
  • A love for knowledge sharing: you know what works, but you’re also happy to learn new methods and technologies