How the Netlify Data Team Migrated From Databricks to Snowflake
Summary
Netlify’s data team shares their migration playbook from Databricks to Snowflake. The single most important rule: don’t refactor or optimize during migration. The temptation to clean up code while porting it is real, but mixing migration with refactoring compounds risk and makes rollback impossible to reason about.
The mental model is simple: migration is a transport problem, not a quality problem. Move first, improve later. Two distinct phases with distinct success criteria.
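The transport-vs-quality split can be made concrete in tooling. Here is a minimal Python sketch, illustrative only: the article contains no code, and every name below is hypothetical. The idea is a port step that applies only an allowlist of mechanical dialect substitutions (e.g., Spark SQL's backtick-quoted identifiers becoming Snowflake's double quotes), so any logic change is structurally forced into the later optimization phase.

```python
# Illustrative "transport-only" port step; hypothetical, not from the article.
# Allowlist of purely mechanical Databricks/Spark SQL -> Snowflake fixes.
# Anything not on this list is, by rule, deferred to the optimization phase.
MECHANICAL_FIXES = [
    ("`", '"'),  # Spark backtick-quoted identifiers -> Snowflake double quotes
]

def transport_only_port(spark_sql: str) -> str:
    """Port a query by applying allowlisted substitutions and nothing else.

    Deliberately has no hook for rewriting joins, pruning columns, or any
    other "while we're in here" cleanup: that belongs to phase two.
    """
    ported = spark_sql
    for old, new in MECHANICAL_FIXES:
        ported = ported.replace(old, new)
    return ported

# Move first: the ported query is logically identical to the source.
print(transport_only_port("SELECT `user_id` FROM events"))
# -> SELECT "user_id" FROM events
```

The design choice mirrors the rule itself: because the allowlist is the only place changes can enter, a reviewer can verify "no refactoring happened" by inspecting one short list instead of diffing every ported query.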
Relevance
Highly relevant to 01-projects/phdata/index — platform migration is core consulting work, and this “no refactoring during migration” rule should be a standard engagement principle. It’s also a 06-reference/concepts/skills-as-building-blocks candidate: “clean migration discipline” is a transferable skill across any system swap.
Pairs with 06-reference/2026-04-03-snowflake-rapid-growth-doordash — DoorDash optimizes after the fact; Netlify explicitly separates migration from optimization. Same underlying principle from two directions.
Useful for 01-projects/phdata/career-transition interview prep — migration war stories are common Snowflake consulting interview territory.
Open Questions
- What’s the right “soak period” between migration and optimization? How do you know when migration is truly done?
- Does this principle apply to AI/ML pipeline migrations too, or does the model retraining requirement break the clean separation?
- How would you structure a migration engagement at phData — fixed-scope transport phase, then separate optimization phase?