Fix!: Don't normalize aliases in merge and when matched #5014

themisvaltinos merged 9 commits into main
Conversation
Does this require a migration?
If I'm not missing anything, I'm not sure it is needed, because the MERGE INTO would fail unless the engine is one with uppercase normalization like Snowflake, and in that case nothing would change in state regarding the aliases. My thinking is: if a when_matched was in state and the dialect was case-insensitive but not uppercase (like Postgres), it simply wouldn't work, as the alias would be lowercase, causing
```diff
 if isinstance(expression, exp.Column) and (first_part := expression.parts[0]):
     if first_part.this.lower() in ("target", "dbt_internal_dest", "__merge_target__"):
-        first_part.replace(normalized_merge_target_alias)
+        first_part.replace(exp.to_identifier(MERGE_TARGET_ALIAS, quoted=True))
```
It took me some time to grok this, but I see why it works. `quoted=True` is the key part that prevents it from being undone later.
This essentially ensures that everything is uppercase regardless of the engine's normalization strategy, which lines up with what `EngineAdapter._merge` uses.
`EngineAdapter._merge` produces an unquoted uppercase value, which gets quoted by default during `EngineAdapter.execute`, which is why it needs to either be uppercase here or normalized in `EngineAdapter._merge`.
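To illustrate why `quoted=True` matters, here is a toy model of dialect identifier normalization. This is not sqlglot's actual implementation; the `Identifier` class and `normalize` function below are hypothetical stand-ins that only capture the rule at issue: quoted identifiers are preserved verbatim, while unquoted ones get case-folded per dialect.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Identifier:
    # Toy model of a SQL identifier; not the real sqlglot class.
    name: str
    quoted: bool = False


def normalize(ident: Identifier, strategy: str) -> Identifier:
    # Quoted identifiers are preserved verbatim; unquoted ones follow the
    # dialect's strategy ("lowercase" for Postgres, "uppercase" for Snowflake).
    if ident.quoted:
        return ident
    case = str.lower if strategy == "lowercase" else str.upper
    return Identifier(case(ident.name), quoted=False)


# An unquoted alias gets folded by a lowercase dialect and no longer matches
# the uppercase alias that the merge statement uses:
print(normalize(Identifier("__MERGE_TARGET__"), "lowercase").name)  # __merge_target__
# quoted=True shields the alias from normalization on every dialect:
print(normalize(Identifier("__MERGE_TARGET__", quoted=True), "lowercase").name)  # __MERGE_TARGET__
```

This mirrors the mismatch described above: on Postgres, the unquoted alias would fold to lowercase while the merge statement expects the uppercase form.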
Force-pushed 4569963 to 047e160
.circleci/continue_config.yml (outdated)

```yaml
branches:
  only:
    - main
# filters:
```
TODO: reinstate prior to merge
Force-pushed 453ecde to fbbaeb0
I've also added an integration test for this, if you want to have another look @erindru
erindru left a comment
I think that test failure will disappear if you rebase from main
```python
pytest.skip(f"{ctx.dialect} on {ctx.gateway} doesnt support merge")

# DuckDB and some other engines use logical_merge which doesn't support when_matched
if ctx.dialect not in ["bigquery", "databricks", "postgres", "snowflake", "spark"]:
```
nit: would a more robust test check whether `ctx.engine_adapter` is an instance of `LogicalMergeMixin` instead?
Yes, good point, and much cleaner this way; revised it.
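The suggestion above can be sketched as follows. Only `LogicalMergeMixin` is named in the source; the adapter classes and the `should_skip` helper below are hypothetical stand-ins used to show why a capability check beats a hard-coded dialect list.

```python
class EngineAdapter:
    """Toy base class standing in for the real engine adapter hierarchy."""


class LogicalMergeMixin:
    # Marker mixin for adapters that emulate MERGE logically (e.g. via
    # DELETE + INSERT) and therefore cannot express a WHEN MATCHED clause.
    pass


class DuckDBAdapter(LogicalMergeMixin, EngineAdapter):
    pass


class PostgresAdapter(EngineAdapter):
    pass


def should_skip(adapter: EngineAdapter) -> bool:
    # Checking the capability directly stays correct when new adapters are
    # added, unlike maintaining a list of dialect names in the test.
    return isinstance(adapter, LogicalMergeMixin)


print(should_skip(DuckDBAdapter()))    # True
print(should_skip(PostgresAdapter()))  # False
```

With this shape, the test skips itself for any adapter that opts into logical merge, with no list to keep in sync.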
```diff
-    return validate_expression(v, dialect=dialect)
+    v = validate_expression(v, dialect=dialect)
+    return t.cast(exp.Whens, v.transform(d.replace_merge_table_aliases, dialect=dialect))
```
Previously, we only ran the `replace_merge_table_aliases` transform when loading from disk; when reading back from state, we assumed it had already been applied, so we didn't apply it again.
However, applying it regardless, like you do here, will transparently "fix" what was in state, right? So when people upgrade SQLMesh, everything should still match.
Yes, that was my thinking behind it: to foolproof it.
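The property that makes re-applying the transform safe is idempotence: running it a second time on already-rewritten state is a no-op. A minimal sketch, assuming the canonical alias is the quoted uppercase `"__MERGE_TARGET__"`; `replace_target_alias` is a hypothetical string-level stand-in for the real `replace_merge_table_aliases`, which operates on expression trees.

```python
CANONICAL = '"__MERGE_TARGET__"'
# Alias spellings recognised by the transform (from the diff above).
KNOWN_ALIASES = {"target", "dbt_internal_dest", "__merge_target__"}


def replace_target_alias(name: str) -> str:
    # Map any recognised spelling of the target alias to the canonical
    # quoted form; anything else passes through untouched.
    if name.strip('"').lower() in KNOWN_ALIASES:
        return CANONICAL
    return name


once = replace_target_alias("DBT_INTERNAL_DEST")
twice = replace_target_alias(once)
assert once == twice == CANONICAL  # idempotent: safe on both disk and state loads
assert replace_target_alias("src") == "src"  # unrelated aliases untouched
```

Because the canonical output is itself a recognised spelling, applying the transform to expressions that were already rewritten (i.e. loaded from state) changes nothing, which is what makes the unconditional call in the validator safe.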
This reverts commit 453ecde.
Force-pushed fbbaeb0 to 46073d0
PR #4847, along with the columns, added normalisation for the MERGE_SOURCE_ALIAS and MERGE_TARGET_ALIAS constants. Since these are capitalised in the merge statement, they become mismatched and lead to an error when executing the merge for dialects like Postgres, so we'd get:
instead of:
This PR keeps the normalisation logic but reverts to the previous behaviour for the aliases.