# Lead Data Engineer (019-878)

**Company:** [Hunt St](http://jobs.workable.com/companies/di1bo3LVzjhxpvRWdcoroy.md)
**Location:** Remote
**Workplace:** Remote
**Employment type:** Full-time
**Department:** Recruitment

[Apply for this job](http://jobs.workable.com/view/1c78258d-a3cc-42fb-88e8-150f5c08dbe0)

## Description

**Looking for Philippines-based candidates**

**Job Role:** Lead Data Engineer

**Compensation range:** $4,000–$5,000 AUD per month (depending on experience and client discretion)

**Engagement type:** Independent Contractor Agreement

**Work Schedule:** This role is expected to align with AU business hours (approx. 9 AM – 5 PM, Monday to Friday) for collaboration, but as a contractor, you’ll have flexibility in how you manage your time.

**Who We Are:** At Hunt St, we help Australian companies hire top remote talent in the Philippines. For this role, you will be engaged directly by the client as an independent contractor. We are not an outsourcing agency. All of our roles are 100% remote, so you'll be able to work from home.

**Who The Client Is:** Our client is an independent technology consulting firm with over 15 years of experience helping organizations improve their systems, data, and workflows across industries such as manufacturing, healthcare, research, and laboratory environments. Rather than selling software, they provide unbiased consulting focused on system implementations, data migration, workflow optimization, and regulatory compliance. Their collaborative approach helps businesses modernize legacy systems, improve operational efficiency, and maximize the value of their existing technology so they can scale with confidence.

**Role Overview:** This role is central to the successful design, build, and ongoing governance of a DataHub. The Lead Data Engineer acts as the primary interface between offshore and onshore teams, setting architectural direction and engineering standards while ensuring consistent code quality and adherence to agreed design principles. During the build phase, the role oversees architecture decisions, coordinates with IT teams on core data layer delivery, and approves work prior to UAT release.

As delivery transitions from build to run, the role shifts into operational ownership—managing DataHub SLAs, overseeing new dataset intake, and serving as the escalation point for production issues. The role also requires strong stakeholder engagement, including facilitating agile ceremonies and communicating effectively with both technical and non-technical audiences across a distributed, cross-timezone team.

**Key Responsibilities:** 

-   Own the dbt project architecture, including model structure, naming conventions, testing standards, CI/CD pipelines, and documentation frameworks to ensure consistency across all engineering work.
-   Manage the dataset delivery backlog with onshore stakeholders, including phase scoping, validation of source data availability, and sequencing of builds to maximise business value.
-   Govern code quality by reviewing all Silver and Gold models prior to UAT, ensuring no production deployment occurs without lead approval.
-   Own the end-to-end dataset handover process from Build to Run, including test sign-off, data catalogue documentation, observability configuration, and operational runbooks.
-   Coordinate and prioritise work across Build and Run workstreams operating in parallel, ensuring alignment and delivery efficiency.
-   Participate in standups (async where required) and lead sprint demonstrations on a fortnightly basis.
-   In Run steady-state operations, own DataHub SLA performance, oversee new dataset onboarding, and act as the final escalation point for production support issues.

**Run & Steady-State Responsibilities:**

-   Own DataHub SLA performance, ensuring pipeline uptime and data freshness commitments are met across all production Gold datasets.
-   Manage new dataset intake from onshore teams, scoping requests and determining whether they require a new build phase or can be absorbed into the Run team.
-   Act as the primary escalation point for onshore stakeholders and the final offshore escalation for critical incidents that cannot be resolved by the data operations team.

## Requirements

-   7+ years data engineering experience (including 2+ years in a team lead or principal engineer role)
-   Snowflake (advanced): warehouse management, RBAC, dynamic data masking, Snowflake Tasks, performance tuning, and query optimisation
-   dbt Cloud (advanced): incremental models, snapshots, macros, packages, Semantic Layer / MetricFlow, CI/CD using GitHub Actions
-   Python (intermediate): Snowpark, pipeline scripting, and data quality automation
-   Data modelling (expert): dimensional modelling, medallion architecture (Bronze / Silver / Gold), and retail data models (orders, inventory, customers)
-   Git (advanced): branching strategies, PR workflows, and managing shared dbt projects across multiple engineers
-   Azure Data Factory (intermediate): pipeline monitoring and troubleshooting; ability to engage IT credibly on pipeline issues (build owned by IT)
-   Retail domain knowledge (preferred): familiarity with order management, inventory planning, and DTC data structures
-   AI & Automation Tools: Hands-on experience with tools such as GitHub Copilot, Cursor, Claude (or similar LLMs), Snowflake Cortex `AI_COMPLETE`, Elementary, and data observability platforms for AI-assisted development, SQL debugging, anomaly detection, alert classification, incident reporting, and workflow automation.

**What Success Looks Like:**

-   Excited to build and scale a modern data platform from the ground up
-   Proven experience leading data engineering teams and maintaining engineering quality standards
-   Comfortable working with separate upstream data delivery teams
-   Has built dbt medallion architecture projects (Bronze / Silver / Gold) from scratch
-   Strong quality ownership; confident holding back datasets not ready for UAT
-   Uses AI tooling naturally as part of day-to-day engineering workflows
-   Owns AI triage pipeline accuracy and incident log quality
-   Ensures alerts are acknowledged and resolved within agreed SLAs
-   Maintains accurate documentation and quality certifications across Gold datasets
-   Experienced in operating and tuning data pipeline monitoring and alerting processes
-   Able to communicate incidents clearly to business stakeholders in concise, non-technical language
-   Thrives in stable, operational environments where reliability is critical
-   Understands escalation boundaries and collaborates effectively with senior engineers

**Work Arrangement & Expectations:**

This is a remote role that will be set up as an independent contractor engagement.

To ensure alignment and transparency, successful candidates will be expected to:

-   Disclose any existing ongoing roles or client work
-   Reflect this engagement on their LinkedIn profile (clearly marked as “Independent Contractor”)
