class DoubleMLEstimator extends Estimator[DoubleMLModel] with ComplexParamsWritable with DoubleMLParams with SynapseMLLogging with Wrappable
Double ML estimators. The estimator follows a two-stage process: in the first stage a set of nuisance functions is estimated in a cross-fitting manner, and in the final stage the average treatment effect (ATE) model is estimated. Our goal is to estimate the constant marginal ATE :math:`\Theta(X)`.
In this estimator, the ATE is estimated by using the following estimating equation:
.. math:: Y - \E[Y \mid X, W] = \Theta(X) \cdot (T - \E[T \mid X, W]) + \epsilon
Thus, if we estimate the nuisance functions :math:`q(X, W) = \E[Y \mid X, W]` and :math:`f(X, W) = \E[T \mid X, W]` in the first stage, we can estimate the final-stage ATE for each treatment t by running a regression that minimizes the residual-on-residual square loss. Estimating :math:`\Theta(X)` is then a final regression problem, regressing :math:`\tilde{Y}` on X and :math:`\tilde{T}`:
.. math:: \hat{\Theta} = \arg\min_{\Theta} \E_n\left[ (\tilde{Y} - \Theta(X) \cdot \tilde{T})^2 \right]
where :math:`\tilde{Y} = Y - \E[Y \mid X, W]` and :math:`\tilde{T} = T - \E[T \mid X, W]` denote the residual outcome and residual treatment.
Estimating the nuisance function :math:`q` is a simple machine learning problem; the user can call setOutcomeModel to set an arbitrary SparkML model that is used internally to solve it. Estimating the nuisance function :math:`f` is likewise a machine learning problem; the user can call setTreatmentModel to set an arbitrary SparkML model that is used internally to solve it.
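The final-stage regression above has a simple closed form when :math:`\Theta` is a constant (no dependence on X): the no-intercept least-squares fit of the residual outcome on the residual treatment. The following plain-Scala sketch (not part of this API; the names are illustrative) shows that arithmetic:

```scala
object ResidualOnResidual {
  // Final-stage estimate for a constant Theta: regressing the residual
  // outcome tildeY on the residual treatment tildeT (no intercept) has the
  // closed-form least-squares solution sum(tY * tT) / sum(tT * tT).
  def thetaHat(tildeY: Array[Double], tildeT: Array[Double]): Double = {
    require(tildeY.length == tildeT.length, "residual vectors must align")
    val num = tildeY.zip(tildeT).map { case (y, t) => y * t }.sum
    val den = tildeT.map(t => t * t).sum
    num / den
  }

  def main(args: Array[String]): Unit = {
    // Synthetic residuals generated with true theta = 2.0 and no noise.
    val tildeT = Array(1.0, -0.5, 2.0, 0.25)
    val tildeY = tildeT.map(_ * 2.0)
    println(thetaHat(tildeY, tildeT)) // prints 2.0
  }
}
```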
Linear Supertypes
- DoubleMLEstimator
- Wrappable
- DotnetWrappable
- RWrappable
- PythonWrappable
- BaseWrappable
- SynapseMLLogging
- DoubleMLParams
- HasParallelismInjected
- HasParallelism
- HasWeightCol
- HasMaxIter
- HasFeaturesCol
- HasOutcomeCol
- HasTreatmentCol
- ComplexParamsWritable
- MLWritable
- Estimator
- PipelineStage
- Logging
- Params
- Serializable
- Serializable
- Identifiable
- AnyRef
- Any
Value Members
-
final
def
clear(param: Param[_]): DoubleMLEstimator.this.type
- Definition Classes
- Params
-
val
confidenceLevel: DoubleParam
- Definition Classes
- DoubleMLParams
-
def
copy(extra: ParamMap): Estimator[DoubleMLModel]
- Definition Classes
- DoubleMLEstimator → Estimator → PipelineStage → Params
-
def
dotnetAdditionalMethods: String
- Definition Classes
- DotnetWrappable
-
def
explainParam(param: Param[_]): String
- Definition Classes
- Params
-
def
explainParams(): String
- Definition Classes
- Params
-
final
def
extractParamMap(): ParamMap
- Definition Classes
- Params
-
final
def
extractParamMap(extra: ParamMap): ParamMap
- Definition Classes
- Params
-
val
featuresCol: Param[String]
The name of the features column
- Definition Classes
- HasFeaturesCol
-
def
fit(dataset: Dataset[_]): DoubleMLModel
Fits the DoubleML model.
- dataset
The input dataset to train.
- returns
The trained DoubleML model, from which you can get the ATE and confidence interval (CI) values.
- Definition Classes
- DoubleMLEstimator → Estimator
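A typical call sequence might look like the sketch below. It assumes a prepared DataFrame `df` with "treatment", "outcome", and "features" columns; `LinearRegression` and `LogisticRegression` are standard Spark ML estimators, and the setters are the ones documented on this page. This is an illustrative fragment, not a verified end-to-end program.

```scala
import com.microsoft.azure.synapse.ml.causal.DoubleMLEstimator
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.classification.LogisticRegression

// Sketch: assumes a DataFrame `df` with treatment/outcome/features columns.
val dml = new DoubleMLEstimator()
  .setTreatmentCol("treatment")
  .setTreatmentModel(new LogisticRegression()) // e.g. a binary treatment
  .setOutcomeCol("outcome")
  .setOutcomeModel(new LinearRegression())     // e.g. a continuous outcome
  .setMaxIter(20)            // > 1 so confidence intervals are bootstrapped
  .setConfidenceLevel(0.975) // default: [2.5%, 97.5%] interval

val model = dml.fit(df) // returns a DoubleMLModel with ATE and CI values
```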
-
def
fit(dataset: Dataset[_], paramMaps: Seq[ParamMap]): Seq[DoubleMLModel]
- Definition Classes
- Estimator
- Annotations
- @Since( "2.0.0" )
-
def
fit(dataset: Dataset[_], paramMap: ParamMap): DoubleMLModel
- Definition Classes
- Estimator
- Annotations
- @Since( "2.0.0" )
-
def
fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): DoubleMLModel
- Definition Classes
- Estimator
- Annotations
- @Since( "2.0.0" ) @varargs()
-
final
def
get[T](param: Param[T]): Option[T]
- Definition Classes
- Params
-
def
getConfidenceLevel: Double
- Definition Classes
- DoubleMLParams
-
final
def
getDefault[T](param: Param[T]): Option[T]
- Definition Classes
- Params
-
def
getExecutionContextProxy: ExecutionContext
- Definition Classes
- HasParallelismInjected
-
def
getFeaturesCol: String
- Definition Classes
- HasFeaturesCol
-
final
def
getMaxIter: Int
- Definition Classes
- HasMaxIter
-
final
def
getOrDefault[T](param: Param[T]): T
- Definition Classes
- Params
-
def
getOutcomeCol: String
- Definition Classes
- HasOutcomeCol
-
def
getOutcomeModel: Estimator[_ <: Model[_]]
- Definition Classes
- DoubleMLParams
-
def
getParallelism: Int
- Definition Classes
- HasParallelism
-
def
getParam(paramName: String): Param[Any]
- Definition Classes
- Params
-
def
getParamInfo(p: Param[_]): ParamInfo[_]
- Definition Classes
- BaseWrappable
-
def
getSampleSplitRatio: Array[Double]
- Definition Classes
- DoubleMLParams
-
def
getTreatmentCol: String
- Definition Classes
- HasTreatmentCol
-
def
getTreatmentModel: Estimator[_ <: Model[_]]
- Definition Classes
- DoubleMLParams
-
def
getWeightCol: String
- Definition Classes
- HasWeightCol
-
final
def
hasDefault[T](param: Param[T]): Boolean
- Definition Classes
- Params
-
def
hasParam(paramName: String): Boolean
- Definition Classes
- Params
-
final
def
isDefined(param: Param[_]): Boolean
- Definition Classes
- Params
-
final
def
isSet(param: Param[_]): Boolean
- Definition Classes
- Params
-
def
logClass(featureName: String): Unit
- Definition Classes
- SynapseMLLogging
-
def
logFit[T](f: ⇒ T, columns: Int): T
- Definition Classes
- SynapseMLLogging
-
def
logTransform[T](f: ⇒ T, columns: Int): T
- Definition Classes
- SynapseMLLogging
-
def
logVerb[T](verb: String, f: ⇒ T, columns: Option[Int] = None): T
- Definition Classes
- SynapseMLLogging
-
def
makeDotnetFile(conf: CodegenConfig): Unit
- Definition Classes
- DotnetWrappable
-
def
makePyFile(conf: CodegenConfig): Unit
- Definition Classes
- PythonWrappable
-
def
makeRFile(conf: CodegenConfig): Unit
- Definition Classes
- RWrappable
-
final
val
maxIter: IntParam
- Definition Classes
- HasMaxIter
-
val
outcomeCol: Param[String]
- Definition Classes
- HasOutcomeCol
-
val
outcomeModel: EstimatorParam
- Definition Classes
- DoubleMLParams
-
val
parallelism: IntParam
- Definition Classes
- HasParallelism
-
lazy val
params: Array[Param[_]]
- Definition Classes
- Params
-
def
pyAdditionalMethods: String
- Definition Classes
- PythonWrappable
-
def
pyInitFunc(): String
- Definition Classes
- PythonWrappable
-
val
sampleSplitRatio: DoubleArrayParam
- Definition Classes
- DoubleMLParams
-
def
save(path: String): Unit
- Definition Classes
- MLWritable
- Annotations
- @Since( "1.6.0" ) @throws( ... )
-
final
def
set[T](param: Param[T], value: T): DoubleMLEstimator.this.type
- Definition Classes
- Params
-
def
setConfidenceLevel(value: Double): DoubleMLEstimator.this.type
Set the upper bound percentile of the ATE distribution. Default is 0.975. The lower bound percentile is automatically calculated as 100 * (1 - confidenceLevel). That means by default a 95% confidence interval is computed, i.e. the [2.5%, 97.5%] percentiles of the ATE distribution.
- Definition Classes
- DoubleMLParams
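The bound arithmetic described above can be sketched in plain Scala (an illustrative helper, not part of this API):

```scala
object ConfidenceBounds {
  // Given the upper-bound percentile (confidenceLevel), derive the lower
  // bound as 100 * (1 - confidenceLevel) percent, as described above.
  def bounds(confidenceLevel: Double): (Double, Double) = {
    require(confidenceLevel > 0.5 && confidenceLevel < 1.0,
      "confidenceLevel is the upper percentile, e.g. 0.975")
    (100.0 * (1.0 - confidenceLevel), 100.0 * confidenceLevel)
  }

  def main(args: Array[String]): Unit = {
    val (lo, hi) = bounds(0.975)
    println(f"[$lo%.1f%%, $hi%.1f%%]") // prints [2.5%, 97.5%]
  }
}
```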
-
def
setFeaturesCol(value: String): DoubleMLEstimator.this.type
- Definition Classes
- HasFeaturesCol
-
def
setMaxIter(value: Int): DoubleMLEstimator.this.type
Set the maximum number of confidence interval bootstrapping iterations. Default is 1, which means no confidence interval is calculated. To get CI values, set this to a meaningful value.
- Definition Classes
- DoubleMLParams
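The idea behind bootstrapped confidence intervals is to collect one ATE estimate per iteration and read off the percentiles of that empirical distribution. The sketch below illustrates a simple percentile bootstrap under that assumption; the estimator's actual CI computation is not shown on this page.

```scala
object PercentileCI {
  // Nearest-rank percentile of a sorted sample: the k-th smallest value
  // where k = ceil(p * n), clamped to the valid index range.
  def percentile(sorted: Array[Double], p: Double): Double = {
    val idx = math.min(sorted.length - 1,
      math.max(0, math.ceil(p * sorted.length).toInt - 1))
    sorted(idx)
  }

  // CI from a collection of bootstrapped ATE estimates: the
  // [1 - confidenceLevel, confidenceLevel] percentiles of the distribution.
  def ci(ates: Array[Double], confidenceLevel: Double = 0.975): (Double, Double) = {
    val sorted = ates.sorted
    (percentile(sorted, 1.0 - confidenceLevel), percentile(sorted, confidenceLevel))
  }

  def main(args: Array[String]): Unit = {
    val ates = (1 to 100).map(_.toDouble).toArray // 100 bootstrapped ATEs
    println(ci(ates)) // prints (3.0,98.0)
  }
}
```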
-
def
setOutcomeCol(value: String): DoubleMLEstimator.this.type
Set the name of the column which will be used as the outcome.
- Definition Classes
- HasOutcomeCol
-
def
setOutcomeModel(value: Estimator[_ <: Model[_]]): DoubleMLEstimator.this.type
Set the outcome model. It can be any model derived from 'org.apache.spark.ml.regression.Regressor' or 'org.apache.spark.ml.classification.ProbabilisticClassifier'.
- Definition Classes
- DoubleMLParams
-
def
setParallelism(value: Int): DoubleMLEstimator.this.type
- Definition Classes
- DoubleMLParams
-
def
setSampleSplitRatio(value: Array[Double]): DoubleMLEstimator.this.type
Set the sample split ratio. Default is Array(0.5, 0.5).
- Definition Classes
- DoubleMLParams
-
def
setTreatmentCol(value: String): DoubleMLEstimator.this.type
Set the name of the column which will be used as the treatment.
- Definition Classes
- HasTreatmentCol
-
def
setTreatmentModel(value: Estimator[_ <: Model[_]]): DoubleMLEstimator.this.type
Set the treatment model. It can be any model derived from 'org.apache.spark.ml.regression.Regressor' or 'org.apache.spark.ml.classification.ProbabilisticClassifier'.
- Definition Classes
- DoubleMLParams
-
def
setWeightCol(value: String): DoubleMLEstimator.this.type
- Definition Classes
- HasWeightCol
-
def
toString(): String
- Definition Classes
- Identifiable → AnyRef → Any
-
def
transformSchema(schema: StructType): StructType
- Definition Classes
- DoubleMLEstimator → PipelineStage
- Annotations
- @DeveloperApi()
-
val
treatmentCol: Param[String]
- Definition Classes
- HasTreatmentCol
-
val
treatmentModel: EstimatorParam
- Definition Classes
- DoubleMLParams
-
val
uid: String
- Definition Classes
- DoubleMLEstimator → SynapseMLLogging → Identifiable
-
val
weightCol: Param[String]
The name of the weight column
- Definition Classes
- HasWeightCol
-
def
write: MLWriter
- Definition Classes
- ComplexParamsWritable → MLWritable