Still getting up to speed on the literature around CATE, but I noticed that the docs on validation don't cover anything problem-specific, and I wondered if there are case-specific validations we could perform, at least on the fit of the base-learners (mu and treatment propensity). It seems like it would be fairly straightforward to implement a .score_baselearners(X, Y, T) method that returns metrics for detecting whether the base-learners are over- or under-fitting.
These models are currently accessible via the .models_mu_(t|c) attributes, but I think this could have value as a dedicated method for model evaluation. Happy to take a stab at implementing it if folks think this would be useful/worth adding as a native feature.
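To make the idea concrete, here is a minimal sketch of what such a scoring helper might look like. This is purely illustrative, not library API: it assumes the fitted outcome models for the control and treated arms are available (e.g. via the .models_mu_(t|c) attributes mentioned above) and simply reports per-arm R-squared on whatever data you pass in. Running it on training data versus a held-out split is one simple way to spot over- or under-fitting of the base-learners. The function name and signature are hypothetical.

```python
# Hypothetical sketch of a score_baselearners-style helper.
# mu_c / mu_t are assumed to be already-fitted regressors with a
# .predict(X) method (e.g. the per-arm outcome models of a meta-learner).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def score_baselearners(mu_c, mu_t, X, y, treatment):
    """Return per-arm R^2 of the outcome base-learners on (X, y).

    Comparing these scores between training and held-out data is a
    simple check for over-/under-fitting of the base-learners.
    """
    treated = treatment == 1
    return {
        "mu_c_r2": r2_score(y[~treated], mu_c.predict(X[~treated])),
        "mu_t_r2": r2_score(y[treated], mu_t.predict(X[treated])),
    }

# Toy example with linear models standing in for the fitted mu models.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
treatment = rng.integers(0, 2, size=200)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * treatment \
    + rng.normal(scale=0.1, size=200)

mu_c = LinearRegression().fit(X[treatment == 0], y[treatment == 0])
mu_t = LinearRegression().fit(X[treatment == 1], y[treatment == 1])
scores = score_baselearners(mu_c, mu_t, X, y, treatment)
print(scores)
```

The same pattern extends naturally to the propensity model (e.g. reporting AUC or log loss per fold), and a real implementation would probably take a scoring function or metric name as a parameter rather than hard-coding R-squared.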