Data Engineer - Healthcare - Remote

IT · Full-time · Permanent

About Us:

Halo Labs is a future-focused, end-to-end data solutions firm - transforming tomorrow, today. Our intelligent, secure technology systems and data-driven solutions drive meaningful outcomes and unlock lasting value.

Why work at Halo Labs?

  • Leading Innovation: We don’t just solve problems; we drive a continuous stream of innovation.
  • Exceptional Perks: Enjoy outstanding perks like a dedicated learning budget, performance bonuses, and comprehensive wellbeing support.
  • Remote-First Organisation: Experience the perks of remote work while having the flexibility to travel to client locations throughout Australia.
  • Inclusive and Engaging: Celebrate diversity in a welcoming space that thrives on new ideas and open conversations, all in a respectful environment.
  • Inspiring Origins: With a compelling founder story, we are a customer-focused, culture-first organisation.

About the Role:

  • Framework Development: Design, develop, and enhance in-house PySpark frameworks to ensure standardised implementations and promote reusability across projects.
  • Mentorship: Provide technical guidance and mentorship to junior team members, fostering skill development within the analytics community.
  • Stakeholder Collaboration: Collaborate with internal and external stakeholders to deliver high-quality, data-driven solutions that meet business needs.
  • Data Analysis & Engineering: Conduct exploratory data analysis and design end-to-end data engineering solutions, including ETL/ELT pipelines using Azure Data Factory, Databricks, and other Azure services.
  • Pipeline Implementation: Build, test, and deploy scalable data pipelines using Python, PySpark, and Azure services, ensuring adherence to best practices for automated testing, documentation, and CI/CD via Azure DevOps (Git, Azure Pipelines).
  • Platform Monitoring: Monitor and support batch operations for the Data and Insights Platform, ensuring reliability and performance.

About You

  • Cloud Expertise: Hands-on experience with Azure Cloud Services, including Databricks, Azure Data Factory, Azure SQL Server, and Azure Storage.
  • Databricks Proficiency: Strong experience with Databricks for big data processing, including:
      • Leveraging Databricks notebooks for collaborative data engineering and analytics.
      • Utilising Databricks clusters for scalable and efficient data processing.
  • Data Engineering Tools: Proficiency in SQL, Python, and PySpark for data processing and pipeline development.
  • Data & Analytics Concepts: Strong understanding of data warehousing, data lakes, ETL/ELT processes, and visualisation/reporting tools (Power BI, Tableau) to drive business value.
  • Data Architecture: Deep knowledge of data architecture, data modelling, data marts, and metadata management best practices.
  • Modern Data Platforms: Minimum 3 years of experience working with modern data platforms (AWS, Azure) and data engineering, with a focus on Agile delivery methodologies.
  • DevOps Practices: Expertise in Azure DevOps, including Git for version control and Azure Pipelines for CI/CD.
  • Education: Degree in Computer Science, Data Science, IT, or a related field.

While Queensland-based candidates are preferred, outstanding applicants from other locations will also be considered.

If you’re interested in this exciting opportunity and feel you’re the right fit, please click “apply”.


Apply for this job