Senior Data Engineer
About:
CATCHES builds physics-backed AI for garment simulation and virtual try-on, used by luxury fashion brands. Launched at Nvidia GTC 2026 after several years in stealth at the cutting edge of physics-informed research and development, we are now handling enterprise-scale, consumer-facing virtual try-ons for brand partners globally.
We are backed by investors from both sides of the industry, including Antoine Arnault, Natalia Vodianova Arnault, Roy Chung (founder, Apollo.io), Dillon Erb (founder, Paperspace), Gary Sheinbaum (former CEO, Tommy Hilfiger), and Sarah Willersdorf (former Head of Luxury, BCG).
Role:
You'll be working across our platform and AI teams, building and maintaining the data infrastructure that underpins our physics-backed simulation and virtual try-on products. With significant new product development underway, you'll be shaping the data layer from an early stage.
Our data ecosystem spans simulation outputs, real-time consumer interactions, and model training pipelines. We're looking for engineers who are comfortable working across a diverse stack and who bring both care and pragmatism to building reliable, observable systems.
You'll have experience building robust data pipelines and analytics platforms, and you're confident making the right tooling decisions early, when they matter most.
You:
4+ years building and maintaining production data pipelines and infrastructure.
Strong Python experience across data engineering and scripting contexts.
Experience with orchestration tooling (Airflow or equivalent) for workflow scheduling and dependency management.
Hands-on experience with an analytics platform such as dbt, Databricks, Snowflake, or BigQuery.
Cloud deployment experience (GCP preferred, AWS/Azure welcome).
Experience with event-driven architectures and message queues in a data context, for example Kafka or Pub/Sub.
Some exposure to ML or AI workflows: feature stores, training data pipelines, or model monitoring.
A track record of shipping production-grade systems with reliability, performance, and observability in mind.
Nice to have:
Comfort working across both streaming and batch processing contexts.
Familiarity with Infrastructure as Code and modern data observability tooling.
Exposure to 3D, simulation, or computer vision data at scale.
Apply for the job
If this sounds like your kind of problem, we'd love to hear from you. We welcome applications from all backgrounds.
