Big Data Engineering

Sydney, New South Wales, Australia | Full-time

Before we begin, here's a quick explainer of why we are recruiting at all levels under such a hopelessly vague advert title:

A lot of companies talk about hiring "due to growth", and we all know this usually means the advert writer is either being economical with the truth or has copied and pasted an older advert from a time when it really was "due to growth".

The reality here is that we have just won three major projects, all requiring some serious back-end engineering and algorithmic coding, just as we have supported six of our engineers to move to the USA to help one of our clients build something seriously cool.

As a result, we are hiring across the experience spectrum and are keen to hear from engineers at all levels.

For over 15 years, Quantium has combined the best of human and artificial intelligence to power possibilities for individuals, organisations and society. Our solutions make sense of what has happened and what will, could or should be done to re-shape industries and societies around the needs of the people they serve.

Times and technology have changed, but this remains our goal. We've moved on from wrangling single, SQL-based databases: our MapR Hadoop platform now runs across 200 nodes with multiple clusters, using the latest big data technology.

Working with Scala, Spark and the rest of the Hadoop ecosystem, you’ll be building applications to work with unique data sets (some of the largest and most complex in Australia) to make a real difference to our clients.
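To give a flavour of the day-to-day, here's a minimal, illustrative Spark-on-Scala sketch; the dataset path and column names are hypothetical stand-ins for this advert, not our actual data:

    // Minimal illustration only - the paths and columns below are made up.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{count, sum}

    object SpendSummary {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("spend-summary").getOrCreate()

        // Read a (hypothetical) Parquet dataset of transactions
        val transactions = spark.read.parquet("/data/transactions")

        // Aggregate spend per customer - the kind of building block
        // that larger analytic pipelines are composed from
        val spendPerCustomer = transactions
          .groupBy("customer_id")
          .agg(sum("amount").as("total_spend"), count("*").as("txn_count"))

        spendPerCustomer.write.mode("overwrite").parquet("/data/spend_per_customer")
        spark.stop()
      }
    }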

FAQ:

I see you're happy to relocate people? What does this mean?

We have some very simple rules on relocation. We will happily relocate anyone at Senior level or above who has been working with functional Scala (and ideally Spark as well) at a commercial level and who can pass our interview process.

Is your Scala fully FP?

Not always. We focus on building highly maintainable code, so there's often a decision to make about the best approach. We're not purists, but we love to use FP where it's the best solution.
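As a small, hypothetical illustration of that pragmatism (the types and names here are made up for this advert, not lifted from our codebase): immutable data and Option where they make the code clearer, without reaching for heavier abstractions than the problem needs.

    // Hypothetical example - just the style of trade-off described above.
    final case class Customer(id: Long, email: Option[String])

    object Notifications {
      // Option makes the "no email on file" case explicit, so callers can't forget it
      def emailDomain(customer: Customer): Option[String] =
        customer.email.flatMap(_.split('@').drop(1).headOption)

      // A simple fold over the Option rather than null checks or exceptions
      def describe(customer: Customer): String =
        emailDomain(customer).fold(s"Customer ${customer.id} has no email on file") {
          domain => s"Customer ${customer.id} uses $domain"
        }
    }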

How do your teams work?

We work in multi-discipline teams, so you'll be working alongside Data Scientists, Analysts, Testers and DevOps engineers.

What are you looking for in the ideal candidate?

We’re looking for Big Data Software Engineers, but first of all, you can be anybody, from any walk of life. Before we get to your skills, we want you to know that we actively try to foster an environment where all our employees feel safe, welcomed and celebrated. We look for the same in everyone we hire.

You’ve been working with Scala and love to play with it, and you know your way around the Hadoop ecosystem, with Spark at the top of your go-to frameworks.

You’re a pragmatist, a true engineer, and you love to solve complex problems.

You can also have a conversation about Scalaz without alienating anyone!

You haven't ticked "Do programmers have quiet working conditions?" in the Joel Test?

We have an open-plan office, which is normally pretty quiet, but if you want to head to a meeting room or use headphones, that's totally fine.

Skills & Requirements

Do you have a bullet point checklist for me to check off my suitability?

We know people are not bullet points, but sure thing!

You have:

  • Experience developing applications using Apache Spark or similar Hadoop-based big data technologies
  • Experience building Scala applications, preferably in distributed contexts (if you have a real love of Scala and are itching to move from Java, we’ll also consider it)
  • A solid foundation in functional and object-oriented programming with data structures
  • A passion for solving problems and writing efficient algorithms
  • An awareness of considerations around structuring data on distributed systems to support analytic use cases (see the sketch after this list)
  • A passion for delivering high-quality, peer-reviewed, well-tested code
  • A love of knowledge sharing: you know what works, but you’re also happy to learn new methods and technologies
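As a sketch of the "structuring data on distributed systems" point above (the dataset path and partition columns are hypothetical examples): laying out output on the columns analysts filter by lets downstream jobs prune partitions instead of scanning the whole dataset.

    // Hypothetical example - the input path and partition columns are made up.
    import org.apache.spark.sql.SparkSession

    object PartitionedWrite {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("partitioned-write").getOrCreate()

        val events = spark.read.parquet("/data/events")

        events
          .repartition(events("event_date"))        // co-locate rows for each date before writing
          .write
          .partitionBy("event_date", "store_id")    // directory layout analytic queries can prune on
          .mode("overwrite")
          .parquet("/data/events_partitioned")

        spark.stop()
      }
    }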