synapse.ml.automl package

Submodules

synapse.ml.automl.BestModel module

class synapse.ml.automl.BestModel.BestModel(java_obj=None, allModelMetrics=None, bestModel=None, bestModelMetrics=None, rocCurve=None, scoredDataset=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

getAllModelMetrics()[source]

Returns a table of metrics from all models compared in the evaluation.

getBestModel()[source]

Returns the best model.

getBestModelMetrics()[source]

Returns all metrics for the best model, as computed by the evaluator.

getEvaluationResults()[source]

Returns the ROC curve as TPR and FPR values.

getScoredDataset()[source]

Returns the scored dataset for the best model.

synapse.ml.automl.FindBestModel module

class synapse.ml.automl.FindBestModel.FindBestModel(java_obj=None, evaluationMetric='accuracy', models=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Parameters
  • evaluationMetric (str) – Metric to evaluate models with

  • models (object) – List of models to be evaluated

evaluationMetric = Param(parent='undefined', name='evaluationMetric', doc='Metric to evaluate models with')
getEvaluationMetric()[source]
Returns

Metric to evaluate models with

Return type

evaluationMetric

static getJavaPackage()[source]

Returns the package name as a string.

getModels()[source]
Returns

List of models to be evaluated

Return type

models

models = Param(parent='undefined', name='models', doc='List of models to be evaluated')
classmethod read()[source]

Returns an MLReader instance for this class.

setEvaluationMetric(value)[source]
Parameters

evaluationMetric – Metric to evaluate models with

setModels(value)[source]
Parameters

models – List of models to be evaluated

setParams(evaluationMetric='accuracy', models=None)[source]

Set the (keyword only) parameters
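
A minimal sketch of how FindBestModel might be used, assuming SynapseML is installed, a SparkSession is active, and `train` and `test` are labeled DataFrames with an assembled `features` column (all of these names are assumptions for illustration, not part of this API reference):

```python
# Hypothetical sketch: compare several fitted models and keep the best one.
# `train` and `test` are assumed DataFrames with `features` and `label` columns.
from pyspark.ml.classification import LogisticRegression
from synapse.ml.automl import FindBestModel

# Fit a few candidate models with different regularization strengths.
candidates = [LogisticRegression(regParam=r).fit(train) for r in (0.1, 0.01, 0.001)]

# FindBestModel evaluates the already-fitted models on the given dataset.
best = (FindBestModel()
        .setModels(candidates)
        .setEvaluationMetric("accuracy")
        .fit(test))

best.getAllModelMetrics().show()   # metric table across all candidates
bestModel = best.getBestModel()    # the winning model
scored = best.getScoredDataset()   # test set scored by the best model
```

The result of `fit` is a BestModel, so the accessors documented above (getAllModelMetrics, getBestModel, getScoredDataset) apply directly to it.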

synapse.ml.automl.HyperparamBuilder module

class synapse.ml.automl.HyperparamBuilder.DiscreteHyperParam(values, seed=0)[source]

Bases: object

Specifies a discrete list of values.

get()[source]
class synapse.ml.automl.HyperparamBuilder.GridSpace(paramValues)[source]

Bases: object

Specifies a predetermined grid of values to search through.

space()[source]
class synapse.ml.automl.HyperparamBuilder.HyperparamBuilder[source]

Bases: object

Specifies the search space for hyperparameters.

addHyperparam(est, param, hyperParam)[source]

Add a hyperparam to the builder

Parameters
  • est (Estimator) – The estimator whose param is being tuned

  • param (Param) – The param to tune

  • hyperParam (HyperParam) – The values or distribution to search over

build()[source]

Builds the search space of hyperparameters, returns the map of hyperparameters to search through.

class synapse.ml.automl.HyperparamBuilder.RandomSpace(paramDistributions)[source]

Bases: object

Specifies a random streaming range of values to search through.

space()[source]
class synapse.ml.automl.HyperparamBuilder.RangeHyperParam(min, max, seed=0)[source]

Bases: object

Specifies a range of values.

get()[source]
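
A minimal sketch of building a search space with these classes, assuming SynapseML is installed and a SparkContext is active; the estimator and the two tuned params are assumptions chosen for illustration:

```python
# Hypothetical sketch: declare a hyperparameter search space for one estimator.
from pyspark.ml.classification import LogisticRegression
from synapse.ml.automl import (
    HyperparamBuilder, DiscreteHyperParam, RangeHyperParam, RandomSpace)

lr = LogisticRegression()

# RangeHyperParam samples a continuous range; DiscreteHyperParam samples a list.
paramBuilder = (HyperparamBuilder()
    .addHyperparam(lr, lr.regParam, RangeHyperParam(0.001, 0.1))
    .addHyperparam(lr, lr.elasticNetParam, DiscreteHyperParam([0.0, 0.5, 1.0])))

searchSpace = paramBuilder.build()      # map of params to their hyperparam specs
randomSpace = RandomSpace(searchSpace)  # random streaming samples from the space
```

`randomSpace.space()` can then be handed to TuneHyperparameters as its paramSpace; GridSpace plays the same role when an exhaustive, predetermined grid is wanted instead.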

synapse.ml.automl.TuneHyperparameters module

class synapse.ml.automl.TuneHyperparameters.TuneHyperparameters(java_obj=None, evaluationMetric=None, models=None, numFolds=None, numRuns=None, parallelism=None, paramSpace=None, seed=0)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Parameters
  • evaluationMetric (str) – Metric to evaluate models with

  • models (object) – Estimators to run

  • numFolds (int) – Number of folds

  • numRuns (int) – Termination criteria for randomized search

  • parallelism (int) – The number of models to run in parallel

  • paramSpace (object) – Parameter space for generating hyperparameters

  • seed (long) – Random number generator seed

evaluationMetric = Param(parent='undefined', name='evaluationMetric', doc='Metric to evaluate models with')
getEvaluationMetric()[source]
Returns

Metric to evaluate models with

Return type

evaluationMetric

static getJavaPackage()[source]

Returns the package name as a string.

getModels()[source]
Returns

Estimators to run

Return type

models

getNumFolds()[source]
Returns

Number of folds

Return type

numFolds

getNumRuns()[source]
Returns

Termination criteria for randomized search

Return type

numRuns

getParallelism()[source]
Returns

The number of models to run in parallel

Return type

parallelism

getParamSpace()[source]
Returns

Parameter space for generating hyperparameters

Return type

paramSpace

getSeed()[source]
Returns

Random number generator seed

Return type

seed

models = Param(parent='undefined', name='models', doc='Estimators to run')
numFolds = Param(parent='undefined', name='numFolds', doc='Number of folds')
numRuns = Param(parent='undefined', name='numRuns', doc='Termination criteria for randomized search')
parallelism = Param(parent='undefined', name='parallelism', doc='The number of models to run in parallel')
paramSpace = Param(parent='undefined', name='paramSpace', doc='Parameter space for generating hyperparameters')
classmethod read()[source]

Returns an MLReader instance for this class.

seed = Param(parent='undefined', name='seed', doc='Random number generator seed')
setEvaluationMetric(value)[source]
Parameters

evaluationMetric – Metric to evaluate models with

setModels(value)[source]
Parameters

models – Estimators to run

setNumFolds(value)[source]
Parameters

numFolds – Number of folds

setNumRuns(value)[source]
Parameters

numRuns – Termination criteria for randomized search

setParallelism(value)[source]
Parameters

parallelism – The number of models to run in parallel

setParamSpace(value)[source]
Parameters

paramSpace – Parameter space for generating hyperparameters

setParams(evaluationMetric=None, models=None, numFolds=None, numRuns=None, parallelism=None, paramSpace=None, seed=0)[source]

Set the (keyword only) parameters

setSeed(value)[source]
Parameters

seed – Random number generator seed
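
A minimal sketch tying the parameters above together, assuming SynapseML, an active SparkSession, a labeled DataFrame `train`, and a `randomSpace` built with HyperparamBuilder (all assumed names, shown only to illustrate the flow):

```python
# Hypothetical sketch: randomized, cross-validated hyperparameter search.
# `lr` is an assumed estimator and `randomSpace` an assumed RandomSpace.
from synapse.ml.automl import TuneHyperparameters

tuned = (TuneHyperparameters(
            evaluationMetric="accuracy",
            models=[lr],                     # estimators to run
            numFolds=3,                      # cross-validation folds
            numRuns=6,                       # termination criterion for random search
            parallelism=2,                   # models evaluated concurrently
            paramSpace=randomSpace.space(),
            seed=0)
         .fit(train))

print(tuned.getBestModelInfo())  # parameter settings of the winning model
bestModel = tuned.getBestModel() # the fitted best model
```

`fit` returns a TuneHyperparametersModel, documented below, whose getBestModel and getBestModelInfo accessors expose the search result.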

synapse.ml.automl.TuneHyperparametersModel module

class synapse.ml.automl.TuneHyperparametersModel.TuneHyperparametersModel(java_obj=None, bestMetric=None, bestModel=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

getBestModel()[source]

Returns the best model.

getBestModelInfo()[source]

Returns the best model parameter info.

Module contents

SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.

SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services backed by your Spark cluster.

SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.