This dbt package transforms data from Fivetran's Iterable connector into analytics-ready tables.
- Number of materialized models¹: 45
- Connector documentation
- dbt package documentation
This package enables you to understand the efficacy of your growth marketing and customer engagement campaigns across email, SMS, push notification, and in-app platforms. It creates enriched models with metrics focused on event interactions, campaign performance, and user engagement.
Final output tables are generated in the following target schema:
`<your_database>.<connector/schema_name>_iterable`
By default, this package materializes the following final tables:
| Table | Description |
|---|---|
| iterable__events | Tracks all user events with campaign attribution, user details, and channel information to analyze user behavior, conversion paths, and campaign effectiveness at the event level. See tracked events details. |
| iterable__user_campaign | Aggregates user-level engagement with specific campaigns and experiment variations, including event counts by type, to measure individual user responses to campaign messaging. |
| iterable__campaigns | Tracks campaign performance with user interaction metrics, event counts, experiment variations, and template details to measure campaign effectiveness and optimize email strategy. |
| iterable__users | Provides a comprehensive view of each user with campaign engagement history, list memberships, unsubscription status, and interaction metrics to understand user preferences and lifetime engagement. |
| iterable__list_user_history | Chronicles user-list membership history to track when users join or leave lists, manage audience segmentation, and analyze list growth without excessive Monthly Active Rows (MAR) usage. |
| iterable__user_unsubscriptions | Tracks all user unsubscriptions by message type and channel to manage communication preferences, protect sender reputation, and identify unsubscribe patterns. |
¹ Each Quickstart transformation job run materializes these models if all components of this data model are enabled. This count includes all staging, intermediate, and final models materialized as view, table, or incremental.
To use this dbt package, you must have the following:
- At least one Fivetran Iterable connection syncing data into your destination.
- A BigQuery, Snowflake, Redshift, PostgreSQL, or Databricks destination.
For connections created after August 2023, the `user_unsubscribed_channel_history` and `user_unsubscribed_message_type_history` Iterable objects are no longer history tables, following schema changes from Iterable's API updates. The fields have also changed. No action is required on your part: the package includes checks that automatically persist the appropriate fields depending on what exists in your schema (these objects will still be treated as history tables if you are using the old schema).
Just be sure you are syncing both objects as either history or non-history tables.
You can either add this dbt package in the Fivetran dashboard or import it into your dbt project:
- To add the package in the Fivetran dashboard, follow our Quickstart guide.
- To add the package to your dbt project, follow the setup instructions in the dbt package's README file to use this package.
Include the following Iterable package version in your packages.yml file.
TIP: Check dbt Hub for the latest installation instructions or read the dbt docs for more information on installing packages.
```yml
packages:
  - package: fivetran/iterable
    version: [">=1.4.0", "<1.5.0"]
```

Many of the models in this package are materialized incrementally, so we have configured our models to work with the different strategies available to each supported warehouse.
For BigQuery and Databricks All Purpose Cluster runtime destinations, we have chosen insert_overwrite as the default strategy, which benefits from the partitioning capability.
For Snowflake, Redshift, and Postgres databases, we have chosen delete+insert as the default strategy.
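For reference, a dbt incremental model pairs one of these strategies with a key or partition in its config block. The sketch below is illustrative only: the column names (`unique_event_id`, `created_on`) and the upstream model are hypothetical, and the package sets these configurations internally.

```sql
{{
    config(
        materialized='incremental',
        unique_key='unique_event_id',  -- hypothetical key column
        -- insert_overwrite on BigQuery/Databricks; delete+insert on Snowflake/Redshift/Postgres
        incremental_strategy='insert_overwrite'
    )
}}

select *
from {{ ref('stg_iterable__event') }}  -- hypothetical upstream model
{% if is_incremental() %}
-- only process records newer than what is already in the table
where created_on >= (select max(created_on) from {{ this }})
{% endif %}
```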
Regardless of strategy, we recommend that users periodically run a `--full-refresh` to ensure a high level of data quality.
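For example, a periodic full refresh of just this package's models could be run as follows (the `iterable` selector assumes the default package name):

```sh
dbt run --full-refresh --select iterable
```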
- Databricks Runtime 12.2 or later is required to run all models in this package.
- We also recommend using the `dbt-databricks` adapter over `dbt-spark`, because each adapter handles incremental models differently. If you must use the `dbt-spark` adapter and run into issues, refer to this section in dbt's documentation of Spark configurations.
By default, this package runs using your destination and the iterable schema. If this is not where your Iterable data is (for example, if your Iterable schema is named iterable_fivetran), add the following configuration to your root dbt_project.yml file:
```yml
vars:
  iterable:
    iterable_database: your_database_name
    iterable_schema: your_schema_name
```

If you have multiple Iterable connections in Fivetran and would like to use this package on all of them simultaneously, we have provided functionality to do so. For each source table, the package will union all of the data together and pass the unioned table into the transformations. The `source_relation` column in each model indicates the origin of each record.
To use this functionality, you will need to set the iterable_sources variable in your root dbt_project.yml file:
```yml
# dbt_project.yml
vars:
  iterable:
    iterable_sources:
      - database: connection_1_destination_name # Required
        schema: connection_1_schema_name # Required
        name: connection_1_source_name # Required only if following the step in the following subsection
      - database: connection_2_destination_name
        schema: connection_2_schema_name
        name: connection_2_source_name
```

If you are running the package through Fivetran Transformations for dbt Core™, the step below is necessary to synchronize model runs with your Iterable connections. Alternatively, you may choose to run the package through Fivetran Quickstart, which creates separate sets of models for each Iterable source rather than one set of unioned models.
By default, this package defines one single-connection source, called iterable, which will be disabled if you are unioning multiple connections. This means that your DAG will not include your Iterable sources, though the package will run successfully.
To properly incorporate all of your Iterable connections into your project's DAG:
- Define each of your sources in a `.yml` file in your project. Use the following template for the source-level configurations, and, most importantly, copy and paste the table- and column-level definitions from the package's `src_iterable.yml` file.
```yml
# a .yml file in your root project
version: 2

sources:
  - name: <name> # ex: Should match name in iterable_sources
    schema: <schema_name>
    database: <database_name>
    loader: fivetran
    config:
      loaded_at_field: _fivetran_synced
      freshness: # feel free to adjust to your liking
        warn_after: {count: 72, period: hour}
        error_after: {count: 168, period: hour}

    tables: # copy and paste from iterable/models/staging/src_iterable.yml - see https://support.atlassian.com/bitbucket-cloud/docs/yaml-anchors/ for how to use anchors to only do so once
```

Note: If there are source tables you do not have (see Enabling/Disabling Models), you may still include them, as long as you have set the relevant variables to `False`.
- Set the `has_defined_sources` variable (scoped to the `iterable` package) to `True`, like so:
```yml
# dbt_project.yml
vars:
  iterable:
    has_defined_sources: true
```

Your Iterable connection might not sync every table that this package expects. If your syncs exclude certain tables, it is either because you do not use that functionality in Iterable or have actively excluded some tables from your syncs. To enable or disable the relevant tables in the package, add the following variable(s) to your `dbt_project.yml` file.
By default, all variables are assumed to be true.
```yml
vars:
  iterable__using_campaign_label_history: false # default is true
  iterable__using_user_unsubscribed_message_type_history: false # default is true
  iterable__using_campaign_suppression_list_history: false # default is true
  iterable__using_event_extension: false # default is true
```

This package includes fields we judged were standard across Iterable users. However, the Fivetran connector allows additional columns to be brought through in the `event_extension` and `user_history` objects. If you wish to bring them through, leverage our passthrough column variables. For `event_extension` columns, ensure that `iterable__using_event_extension` is set to `True`, which is the default.
You will see these additional columns populate in the final `iterable__list_user_history`, `iterable__events`, and `iterable__users` models.
Notice: A `dbt run --full-refresh` is required each time these variables are edited.
These variables allow passthrough fields to be aliased (`alias`) and cast (`transform_sql`) if desired, though neither is required. Datatype casting is configured via a SQL snippet within the `transform_sql` key. Add the desired SQL while omitting the `as field_name` at the end, and your custom passthrough fields will be cast accordingly. Use the following format for declaring the respective passthrough variables:
```yml
# dbt_project.yml
vars:
  iterable_event_extension_pass_through_columns:
    - name: "event_extension_field"
      alias: "renamed_field"
      transform_sql: "cast(renamed_field as string)"
  iterable_user_history_pass_through_columns:
    - name: "user_attribute"
      alias: "renamed_user_attribute"
    - name: "user_attribute_2"
```

By default, this package will build the following Iterable models within the schemas below in your target database:
- Final models within a schema titled (`<target_schema>` + `_iterable`)
- Intermediate models within a schema titled (`<target_schema>` + `_int_iterable`)
- Staging models within a schema titled (`<target_schema>` + `_stg_iterable`)
If this is not where you would like your modeled Iterable data to be written to, add the following configuration to your dbt_project.yml file:
```yml
models:
  iterable:
    +schema: my_new_schema_name # Leave +schema: blank to use the default target_schema.
    staging:
      +schema: my_new_schema_name # Leave +schema: blank to use the default target_schema.
```

Note: If your profile does not have permissions to create schemas in your destination, you can set each `+schema` to blank. The package will then write all tables to your pre-existing target schema.
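For example, leaving each `+schema` value empty (a valid YAML null, which dbt resolves to the default target schema) writes everything to your existing target schema:

```yml
# dbt_project.yml — all models land in <target_schema>
models:
  iterable:
    +schema:
    staging:
      +schema:
```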
If an individual source table has a different name than what the package expects, add the table name as it appears in your destination to the respective variable:
IMPORTANT: See this project's `dbt_project.yml` variable declarations to see the expected names.
```yml
vars:
  iterable_<default_source_table_name>_identifier: "your_table_name"
```

In the `iterable__user_campaign` model, metrics are calculated based on Iterable events. All of the below metrics are enabled by default. If not all metrics apply to your use case, you can specify which event metrics to include by adjusting the `iterable__event_metrics` variable in your own `dbt_project.yml`.
```yml
vars:
  iterable__event_metrics:
    - "emailClick"
    - "emailUnSubscribe"
    - "emailComplaint"
    - "customEvent"
    - "emailSubscribe"
    - "emailOpen"
    - "pushSend"
    - "smsBounce"
    - "pushBounce"
    - "inAppSendSkip"
    - "smsSend"
    - "inAppSend"
    - "pushOpen"
    - "emailSend"
    - "pushSendSkip"
    - "inAppOpen"
    - "emailSendSkip"
    - "emailBounce"
    - "inAppClick"
    - "pushUninstall"
```

Records from the source can sometimes arrive late. Since several of the models in this package are incremental, by default we look back 7 days to ensure late arrivals are captured while avoiding the need for frequent full refreshes. While the lookback reduces how often a full refresh is needed, we still recommend running `dbt run --full-refresh` periodically to maintain the data quality of the models.
To change the default lookback window, add the following variable to your dbt_project.yml file:
```yml
vars:
  iterable:
    iterable_lookback_window: number_of_days # default is 7
```

The Iterable connector schema originally misspelled the CAMPAIGN_SUPPRESSION_LIST_HISTORY table as CAMPAIGN_SUPRESSION_LIST_HISTORY (note the single "P"). As of August 2021, Fivetran has deprecated the misspelled table and will only continue syncing the correctly named CAMPAIGN_SUPPRESSION_LIST_HISTORY table.
By default, this package refers to the new table (CAMPAIGN_SUPPRESSION_LIST_HISTORY). To change this so that the package works with the old misspelled source table (we do not recommend this, however), add the following configuration to your dbt_project.yml file:
```yml
vars:
  iterable_campaign_suppression_list_history_identifier: "campaign_supression_list_history"
```

Fivetran offers the ability for you to orchestrate your dbt project through Fivetran Transformations for dbt Core™. Learn how to set up your project for orchestration through Fivetran in our Transformations for dbt Core setup guides.
This dbt package is dependent on the following dbt packages. These dependencies are installed by default within this package. For more information on the following packages, refer to the dbt hub site.
IMPORTANT: If you have any of these dependent packages in your own `packages.yml` file, we highly recommend that you remove them from your root `packages.yml` to avoid package version conflicts.
```yml
packages:
  - package: fivetran/fivetran_utils
    version: [">=0.4.0", "<0.5.0"]
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]
```
The Fivetran team maintaining this package only maintains the latest version of the package. We highly recommend you stay consistent with the latest version of the package and refer to the CHANGELOG and release notes for more information on changes across versions.
A small team of analytics engineers at Fivetran develops these dbt packages. However, the packages are made better by community contributions.
We highly encourage and welcome contributions to this package. Learn how to contribute to a package in dbt's Contributing to an external dbt package article.
- If you have questions or want to reach out for help, see the GitHub Issue section to find the right avenue of support for you.
- If you would like to provide feedback to the dbt package team at Fivetran or would like to request a new dbt package, fill out our Feedback Form.