Nested columns on pyspark fail in expectations that use ColumnValueCounts #9926

Open
gerileka opened this issue May 14, 2024 · 0 comments

gerileka commented May 14, 2024

Describe the bug
We would like to use expectations on nested columns. However, we found that expectations relying on ColumnValueCounts fail because of this line; more specifically, it is the groupBy count that fails. It should be handled the same way the line was fixed in this PR.

Or even with something simpler, like:

df.withColumn(column, F.col(column)).where(F.col(column).isNotNull()).groupBy(column).count()
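
A minimal sketch of what the Spark path could do instead (my own suggestion, not the current GX implementation): alias the possibly nested column to a flat temporary name before filtering and grouping, so the dotted path is never re-resolved against the already-projected DataFrame. The helper name and the __gx_value alias below are hypothetical.

from pyspark.sql import DataFrame, functions as F

def nested_safe_value_counts(df: DataFrame, column: str) -> DataFrame:
    # Hypothetical temporary alias; it only needs to avoid colliding with
    # existing column names.
    flat = "__gx_value"
    return (
        df.select(F.col(column).alias(flat))
        .where(F.col(flat).isNotNull())
        .groupBy(flat)
        .count()
    )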

To Reproduce

from pyspark.sql import DataFrame as SparkDataFrame
from pyspark.sql import Row, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType
import great_expectations as gx

spark_session = SparkSession.builder.getOrCreate()

schema = StructType(
    [
        StructField(
            "address_information",
            StructType([StructField("city", StringType(), True)]),
            True,
        )
    ]
)

data = [
    Row(
        address_information=Row(
            city="paris",
        ),
    ),
    Row(
        address_information=Row(
            city="london",
        ),
    ),
]

dataframe = spark_session.createDataFrame(data, schema=schema)

context = gx.get_context()
datasource = context.sources.add_spark(name="orders")
data_asset = datasource.add_dataframe_asset(name="orders")
batch_request = data_asset.build_batch_request(dataframe=dataframe)
# Create expectations
context.add_or_update_expectation_suite("spark_data_validation")
# Create a validator
validator = context.get_validator(
    batch_request=batch_request, expectation_suite_name="spark_data_validation"
)

validator.expect_column_distinct_values_to_be_in_set(
    column="address_information.city", value_set=["london", "paris"]
)
---------------------------------------------------------------------------
AnalysisException                         Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/great_expectations/execution_engine/execution_engine.py:548, in ExecutionEngine._process_direct_and_bundled_metric_computation_configurations(self, metric_fn_direct_configurations, metric_fn_bundle_configurations)
    545 try:
    546     resolved_metrics[
    547         metric_computation_configuration.metric_configuration.id
--> 548     ] = metric_computation_configuration.metric_fn(  # type: ignore[misc] # F not callable
    549         **metric_computation_configuration.metric_provider_kwargs
    550     )
    551 except Exception as e:

File ~/.local/lib/python3.10/site-packages/great_expectations/expectations/metrics/metric_provider.py:50, in metric_value.<locals>.wrapper.<locals>.inner_func(*args, **kwargs)
     48 @wraps(metric_fn)
     49 def inner_func(*args, **kwargs):
---> 50     return metric_fn(*args, **kwargs)

File ~/.local/lib/python3.10/site-packages/great_expectations/expectations/metrics/column_aggregate_metrics/column_value_counts.py:163, in ColumnValueCounts._spark(cls, execution_engine, metric_domain_kwargs, metric_value_kwargs, **kwargs)
    160 column: str = accessor_domain_kwargs["column"]
    162 value_counts_df: pyspark.DataFrame = (
--> 163     df.select(column).where(F.col(column).isNotNull()).groupBy(column).count()
    164 )
    166 if sort == "value":

File /pyenv/versions/3.10.11/lib/python3.10/site-packages/pyspark/sql/group.py:38, in dfapi.<locals>._api(self)
     37 name = f.__name__
---> 38 jdf = getattr(self._jgd, name)()
     39 return DataFrame(jdf, self.session)

File /pyenv/versions/3.10.11/lib/python3.10/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:

File /pyenv/versions/3.10.11/lib/python3.10/site-packages/pyspark/errors/exceptions/captured.py:185, in capture_sql_exception.<locals>.deco(*a, **kw)
    182 if not isinstance(converted, UnknownException):
    183     # Hide where the exception came from that shows a non-Pythonic
    184     # JVM exception message.
--> 185     raise converted from None
    186 else:

AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `address_information`.`city` cannot be resolved. Did you mean one of the following? [`city`].;
'Aggregate ['address_information.city], ['address_information.city, count(1) AS count#45L]
+- Project [city#40]
   +- Filter isnotnull(address_information#2.city)
      +- Project [address_information#2.city AS city#40, address_information#2]
         +- LogicalRDD [address_information#2], false


The above exception was the direct cause of the following exception:

MetricResolutionError                     Traceback (most recent call last)
Cell In[3], line 38
     35 # Create a validator
     36 validator = context.get_validator(batch_request=batch_request, expectation_suite_name="spark_data_validation")
---> 38 validator.expect_column_distinct_values_to_be_in_set(column="address_information.city", value_set=["london","paris"])

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validator.py:594, in Validator.validate_expectation.<locals>.inst_expectation(*args, **kwargs)
    588         validation_result = ExpectationValidationResult(
    589             success=False,
    590             exception_info=exception_info,
    591             expectation_config=configuration,
    592         )
    593     else:
--> 594         raise err
    596 if self._include_rendered_content:
    597     validation_result.render()

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validator.py:557, in Validator.validate_expectation.<locals>.inst_expectation(*args, **kwargs)
    553     validation_result = ExpectationValidationResult(
    554         expectation_config=copy.deepcopy(expectation.configuration)
    555     )
    556 else:
--> 557     validation_result = expectation.validate(
    558         validator=self,
    559         evaluation_parameters=self._expectation_suite.evaluation_parameters,
    560         data_context=self._data_context,
    561         runtime_configuration=basic_runtime_configuration,
    562     )
    564 # If validate has set active_validation to true, then we do not save the config to avoid
    565 # saving updating expectation configs to the same suite during validation runs
    566 if self._active_validation is True:

File ~/.local/lib/python3.10/site-packages/great_expectations/expectations/expectation.py:1276, in Expectation.validate(self, validator, configuration, evaluation_parameters, interactive_evaluation, data_context, runtime_configuration)
   1267 self._warn_if_result_format_config_in_expectation_configuration(
   1268     configuration=configuration
   1269 )
   1271 configuration.process_evaluation_parameters(
   1272     evaluation_parameters, interactive_evaluation, data_context
   1273 )
   1274 expectation_validation_result_list: list[
   1275     ExpectationValidationResult
-> 1276 ] = validator.graph_validate(
   1277     configurations=[configuration],
   1278     runtime_configuration=runtime_configuration,
   1279 )
   1280 return expectation_validation_result_list[0]

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validator.py:1069, in Validator.graph_validate(self, configurations, runtime_configuration)
   1067         return evrs
   1068     else:
-> 1069         raise err
   1071 configuration: ExpectationConfiguration
   1072 result: ExpectationValidationResult

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validator.py:1048, in Validator.graph_validate(self, configurations, runtime_configuration)
   1041 resolved_metrics: _MetricsDict
   1043 try:
   1044     (
   1045         resolved_metrics,
   1046         evrs,
   1047         processed_configurations,
-> 1048     ) = self._resolve_suite_level_graph_and_process_metric_evaluation_errors(
   1049         graph=graph,
   1050         runtime_configuration=runtime_configuration,
   1051         expectation_validation_graphs=expectation_validation_graphs,
   1052         evrs=evrs,
   1053         processed_configurations=processed_configurations,
   1054         show_progress_bars=self._determine_progress_bars(),
   1055     )
   1056 except Exception as err:
   1057     # If a general Exception occurs during the execution of "ValidationGraph.resolve()", then
   1058     # all expectations in the suite are impacted, because it is impossible to attribute the failure to a metric.
   1059     if catch_exceptions:

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validator.py:1207, in Validator._resolve_suite_level_graph_and_process_metric_evaluation_errors(self, graph, runtime_configuration, expectation_validation_graphs, evrs, processed_configurations, show_progress_bars)
   1199 resolved_metrics: _MetricsDict
   1200 aborted_metrics_info: Dict[
   1201     _MetricKey,
   1202     Dict[str, Union[MetricConfiguration, Set[ExceptionInfo], int]],
   1203 ]
   1204 (
   1205     resolved_metrics,
   1206     aborted_metrics_info,
-> 1207 ) = self._metrics_calculator.resolve_validation_graph(
   1208     graph=graph,
   1209     runtime_configuration=runtime_configuration,
   1210     min_graph_edges_pbar_enable=0,
   1211 )
   1213 # Trace MetricResolutionError occurrences to expectations relying on corresponding malfunctioning metrics.
   1214 rejected_configurations: List[ExpectationConfiguration] = []

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/metrics_calculator.py:287, in MetricsCalculator.resolve_validation_graph(self, graph, runtime_configuration, min_graph_edges_pbar_enable)
    282 resolved_metrics: _MetricsDict
    283 aborted_metrics_info: Dict[
    284     _MetricKey,
    285     Dict[str, Union[MetricConfiguration, Set[ExceptionInfo], int]],
    286 ]
--> 287 resolved_metrics, aborted_metrics_info = graph.resolve(
    288     runtime_configuration=runtime_configuration,
    289     min_graph_edges_pbar_enable=min_graph_edges_pbar_enable,
    290     show_progress_bars=self._show_progress_bars,
    291 )
    292 return resolved_metrics, aborted_metrics_info

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validation_graph.py:209, in ValidationGraph.resolve(self, runtime_configuration, min_graph_edges_pbar_enable, show_progress_bars)
    203 resolved_metrics: Dict[_MetricKey, MetricValue] = {}
    205 # updates graph with aborted metrics
    206 aborted_metrics_info: Dict[
    207     _MetricKey,
    208     Dict[str, Union[MetricConfiguration, Set[ExceptionInfo], int]],
--> 209 ] = self._resolve(
    210     metrics=resolved_metrics,
    211     runtime_configuration=runtime_configuration,
    212     min_graph_edges_pbar_enable=min_graph_edges_pbar_enable,
    213     show_progress_bars=show_progress_bars,
    214 )
    216 return resolved_metrics, aborted_metrics_info

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validation_graph.py:315, in ValidationGraph._resolve(self, metrics, runtime_configuration, min_graph_edges_pbar_enable, show_progress_bars)
    311                 failed_metric_info[failed_metric.id]["exception_info"] = {
    312                     exception_info
    313                 }
    314     else:
--> 315         raise err
    316 except Exception as e:
    317     if catch_exceptions:

File ~/.local/lib/python3.10/site-packages/great_expectations/validator/validation_graph.py:285, in ValidationGraph._resolve(self, metrics, runtime_configuration, min_graph_edges_pbar_enable, show_progress_bars)
    280         computable_metrics.add(metric)
    282 try:
    283     # Access "ExecutionEngine.resolve_metrics()" method, to resolve missing "MetricConfiguration" objects.
    284     metrics.update(
--> 285         self._execution_engine.resolve_metrics(
    286             metrics_to_resolve=computable_metrics,  # type: ignore[arg-type]  # Metric typing needs further refinement.
    287             metrics=metrics,  # type: ignore[arg-type]  # Metric typing needs further refinement.
    288             runtime_configuration=runtime_configuration,
    289         )
    290     )
    291     progress_bar.update(len(computable_metrics))
    292     progress_bar.refresh()

File ~/.local/lib/python3.10/site-packages/great_expectations/execution_engine/execution_engine.py:283, in ExecutionEngine.resolve_metrics(self, metrics_to_resolve, metrics, runtime_configuration)
    274 metric_fn_bundle_configurations: List[MetricComputationConfiguration]
    275 (
    276     metric_fn_direct_configurations,
    277     metric_fn_bundle_configurations,
   (...)
    281     runtime_configuration=runtime_configuration,
    282 )
--> 283 return self._process_direct_and_bundled_metric_computation_configurations(
    284     metric_fn_direct_configurations=metric_fn_direct_configurations,
    285     metric_fn_bundle_configurations=metric_fn_bundle_configurations,
    286 )

File ~/.local/lib/python3.10/site-packages/great_expectations/execution_engine/execution_engine.py:552, in ExecutionEngine._process_direct_and_bundled_metric_computation_configurations(self, metric_fn_direct_configurations, metric_fn_bundle_configurations)
    546         resolved_metrics[
    547             metric_computation_configuration.metric_configuration.id
    548         ] = metric_computation_configuration.metric_fn(  # type: ignore[misc] # F not callable
    549             **metric_computation_configuration.metric_provider_kwargs
    550         )
    551     except Exception as e:
--> 552         raise gx_exceptions.MetricResolutionError(
    553             message=str(e),
    554             failed_metrics=(
    555                 metric_computation_configuration.metric_configuration,
    556             ),
    557         ) from e
    559 try:
    560     # an engine-specific way of computing metrics together
    561     resolved_metric_bundle: Dict[
    562         Tuple[str, str, str], MetricValue
    563     ] = self.resolve_metric_bundle(
    564         metric_fn_bundle=metric_fn_bundle_configurations
    565     )

MetricResolutionError: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `address_information`.`city` cannot be resolved. Did you mean one of the following? [`city`].;
'Aggregate ['address_information.city], ['address_information.city, count(1) AS count#45L]
+- Project [city#40]
   +- Filter isnotnull(address_information#2.city)
      +- Project [address_information#2.city AS city#40, address_information#2]
         +- LogicalRDD [address_information#2], false

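For what it is worth, the root cause is reproducible with plain PySpark, independent of GX (illustration only, reusing the dataframe built in the snippet above): after select(), the projected column is named by the leaf field, so a later groupBy on the dotted path can no longer be resolved.

# Fails with the same UNRESOLVED_COLUMN error as the GX metric,
# because the only column after select() is named `city`.
dataframe.select("address_information.city").groupBy("address_information.city").count()

# Grouping by the projected name works.
dataframe.select("address_information.city").groupBy("city").count().show()
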
Expected behavior
Expectations that use ColumnValueCounts should be able to handle nested columns.

Environment (please complete the following information):

  • Operating System: Linux
  • Great Expectations Version: 0.17.19
  • Data Source: PySpark

Additional context
expect_column_values_to_be_between works just fine with nested columns.
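
As a possible interim workaround on the user side (my own suggestion, not an official fix; the flattened column name below is arbitrary), the nested field can be copied into a top-level column before building the batch request, and the expectation pointed at the flattened name:

flat_df = dataframe.withColumn(
    "address_information_city", F.col("address_information.city")
)
batch_request = data_asset.build_batch_request(dataframe=flat_df)
validator = context.get_validator(
    batch_request=batch_request, expectation_suite_name="spark_data_validation"
)
validator.expect_column_distinct_values_to_be_in_set(
    column="address_information_city", value_set=["london", "paris"]
)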
