dbt fundamentally does one thing: it lets analytics folks easily create new warehouse tables, reducing repetitive work during consumption.
Before dbt: data was often redundantly pre-processed at the start of every new project. The friction of setting up reusable, version-controlled downstream models (e.g. a SQL operator in Airflow, or constructing opaque views) was too high. (Effort to build a new data model) − (effort to repeat the work) ≫ (productivity gain), so people just repeated the work instead.
After dbt: dbt did a simple thing: it lowered the friction of building reusable models and of finding downstream models during consumption. More analysts started writing data models, making tables easier to find in a more consumable form. Analysts were suddenly empowered to clean data once and re-use it later, removing that redundant step from related workflows.
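Concretely, a dbt model is just a version-controlled SELECT statement that dbt materializes in the warehouse; downstream models pick it up by name instead of re-cleaning raw data. A minimal sketch (file path, source, and column names are hypothetical):

```sql
-- models/staging/stg_orders.sql (hypothetical path and names)
-- dbt materializes this SELECT as a table; the cleaning logic
-- lives here once instead of being repeated per project.
{{ config(materialized='table') }}

select
    order_id,
    lower(trim(customer_email)) as customer_email,  -- cleaned once, reused everywhere
    cast(order_date as date) as order_date
from {{ source('shop', 'raw_orders') }}
where order_id is not null
```

A downstream model then selects `from {{ ref('stg_orders') }}`, which both resolves the table name and records the dependency, so the "find the clean version of this data" step becomes a lookup rather than rework.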