synapse.ml.vw package

Submodules

synapse.ml.vw.VectorZipper module

class synapse.ml.vw.VectorZipper.VectorZipper(java_obj=None, inputCols=None, outputCol=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Parameters
  • inputCols (list) – The names of the input columns

  • outputCol (str) – The name of the output column

getInputCols()[source]
Returns

The names of the input columns

Return type

inputCols

static getJavaPackage()[source]

Returns the package name string.

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
classmethod read()[source]

Returns an MLReader instance for this class.

setInputCols(value)[source]
Parameters

inputCols – The names of the input columns

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(inputCols=None, outputCol=None)[source]

Set the (keyword-only) parameters.
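Conceptually, VectorZipper combines the values of several input columns into a single sequence-valued output column, row by row. A rough pure-Python sketch of that row-wise zipping (illustrative only; the actual transformer operates on Spark DataFrames via the JVM):

```python
# Illustrative sketch of VectorZipper's row-wise behavior (not the real Spark code):
# the values of several input columns are collected into one output column.

def zip_columns(rows, input_cols, output_col):
    """For each row (a dict of column -> value), gather the values of
    input_cols into a tuple stored under output_col."""
    out = []
    for row in rows:
        new_row = dict(row)
        new_row[output_col] = tuple(row[c] for c in input_cols)
        out.append(new_row)
    return out

rows = [{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]
zipped = zip_columns(rows, input_cols=["a", "b"], output_col="zipped")
# zipped[0]["zipped"] == (1, "x")
```

The real transformer is configured the same way: a list of input column names and one output column name.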

synapse.ml.vw.VowpalWabbitCSETransformer module

class synapse.ml.vw.VowpalWabbitCSETransformer.VowpalWabbitCSETransformer(java_obj=None, maxImportanceWeight=100.0, metricsStratificationCols=[], minImportanceWeight=0.0)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Parameters
  • maxImportanceWeight (float) – Clip importance weight at this upper bound. Defaults to 100.

  • metricsStratificationCols (list) – Optional list of column names to stratify rewards by.

  • minImportanceWeight (float) – Clip importance weight at this lower bound. Defaults to 0.

static getJavaPackage()[source]

Returns the package name string.

getMaxImportanceWeight()[source]
Returns

Clip importance weight at this upper bound. Defaults to 100.

Return type

maxImportanceWeight

getMetricsStratificationCols()[source]
Returns

Optional list of column names to stratify rewards by.

Return type

metricsStratificationCols

getMinImportanceWeight()[source]
Returns

Clip importance weight at this lower bound. Defaults to 0.

Return type

minImportanceWeight

maxImportanceWeight = Param(parent='undefined', name='maxImportanceWeight', doc='Clip importance weight at this upper bound. Defaults to 100.')
metricsStratificationCols = Param(parent='undefined', name='metricsStratificationCols', doc='Optional list of column names to stratify rewards by.')
minImportanceWeight = Param(parent='undefined', name='minImportanceWeight', doc='Clip importance weight at this lower bound. Defaults to 0.')
classmethod read()[source]

Returns an MLReader instance for this class.

setMaxImportanceWeight(value)[source]
Parameters

maxImportanceWeight – Clip importance weight at this upper bound. Defaults to 100.

setMetricsStratificationCols(value)[source]
Parameters

metricsStratificationCols – Optional list of column names to stratify rewards by.

setMinImportanceWeight(value)[source]
Parameters

minImportanceWeight – Clip importance weight at this lower bound. Defaults to 0.

setParams(maxImportanceWeight=100.0, metricsStratificationCols=[], minImportanceWeight=0.0)[source]

Set the (keyword-only) parameters.
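The minImportanceWeight and maxImportanceWeight parameters above bound the importance weights used in off-policy (counterfactual) reward estimation. A hedged sketch of clipped inverse-propensity scoring, assuming the standard IPS form (illustrative only, not the transformer's actual implementation):

```python
def clipped_ips(rewards, target_probs, logged_probs,
                min_w=0.0, max_w=100.0):
    """Estimate the target policy's average reward from logged bandit data,
    clipping each importance weight w = p_target / p_logged into
    [min_w, max_w] to control variance."""
    total = 0.0
    for r, p_target, p_logged in zip(rewards, target_probs, logged_probs):
        w = min(max(p_target / p_logged, min_w), max_w)
        total += w * r
    return total / len(rewards)

# When target and logged policies agree, every weight is 1 and the
# estimate is just the mean logged reward:
est = clipped_ips([1.0, 0.0], [0.5, 0.5], [0.5, 0.5])
# est == 0.5
```

Clipping at the upper bound (default 100) caps the contribution of rare logged actions; the lower bound (default 0) is a no-op unless raised.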

synapse.ml.vw.VowpalWabbitClassificationModel module

class synapse.ml.vw.VowpalWabbitClassificationModel.VowpalWabbitClassificationModel(java_obj=None, additionalFeatures=None, featuresCol='features', labelCol='label', model=None, numClassesModel=None, oneStepAheadPredictions=None, performanceStatistics=None, predictionCol='prediction', probabilityCol='probability', rawPredictionCol='rawPrediction', testArgs='', thresholds=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

synapse.ml.vw.VowpalWabbitClassifier module

class synapse.ml.vw.VowpalWabbitClassifier.VowpalWabbitClassifier(java_obj=None, additionalFeatures=[], featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', labelConversion=False, learningRate=None, numBits=18, numClasses=2, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, predictionCol='prediction', predictionIdCol=None, probabilityCol='probability', rawPredictionCol='rawPrediction', splitCol=None, splitColValues=None, thresholds=None, useBarrierExecutionMode=True, weightCol=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

synapse.ml.vw.VowpalWabbitContextualBandit module

class synapse.ml.vw.VowpalWabbitContextualBandit.VowpalWabbitContextualBandit(java_obj=None, additionalFeatures=[], additionalSharedFeatures=[], chosenActionCol='chosenAction', epsilon=0.05, featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, parallelism=1, passThroughArgs='', powerT=None, predictionCol='prediction', predictionIdCol=None, probabilityCol='probability', sharedCol='shared', splitCol=None, splitColValues=None, useBarrierExecutionMode=True, weightCol=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
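The epsilon parameter in the signature above (default 0.05) controls exploration. A minimal sketch of epsilon-greedy action-probability assignment over scored actions (illustrative; VW's exploration algorithms are more sophisticated than this):

```python
def epsilon_greedy_probs(scores, epsilon=0.05):
    """Assign a selection probability to each action: epsilon of the mass
    is spread uniformly over all actions, and the remaining 1 - epsilon
    goes to the highest-scoring action."""
    n = len(scores)
    best = max(range(n), key=lambda i: scores[i])
    probs = [epsilon / n] * n
    probs[best] += 1.0 - epsilon
    return probs

probs = epsilon_greedy_probs([0.1, 0.9, 0.3], epsilon=0.05)
# the second action receives probability 1 - 0.05 + 0.05/3
```

Logging these probabilities alongside the chosen action is what later enables the counterfactual evaluation performed by VowpalWabbitCSETransformer.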

synapse.ml.vw.VowpalWabbitContextualBanditModel module

class synapse.ml.vw.VowpalWabbitContextualBanditModel.VowpalWabbitContextualBanditModel(java_obj=None, additionalFeatures=[], additionalSharedFeatures=[], featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, model=None, numBits=18, numPasses=1, numSyncsPerPass=0, oneStepAheadPredictions=None, passThroughArgs='', performanceStatistics=None, powerT=None, predictionCol='prediction', predictionIdCol=None, rawPredictionCol='rawPrediction', sharedCol='shared', splitCol=None, splitColValues=None, testArgs='', useBarrierExecutionMode=True, weightCol=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

synapse.ml.vw.VowpalWabbitDSJsonTransformer module

class synapse.ml.vw.VowpalWabbitDSJsonTransformer.VowpalWabbitDSJsonTransformer(java_obj=None, dsJsonColumn='value', rewards={'reward': '_label_cost'})[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Parameters
  • dsJsonColumn (str) – Column containing ds-json. Defaults to “value”.

  • rewards (dict) – Extract bandit reward(s) from DS json. Defaults to _label_cost.

dsJsonColumn = Param(parent='undefined', name='dsJsonColumn', doc='Column containing ds-json. defaults to "value".')
getDsJsonColumn()[source]
Returns

Column containing ds-json. Defaults to “value”.

Return type

dsJsonColumn

static getJavaPackage()[source]

Returns the package name string.

getRewards()[source]
Returns

Extract bandit reward(s) from DS json. Defaults to _label_cost.

Return type

rewards

classmethod read()[source]

Returns an MLReader instance for this class.

rewards = Param(parent='undefined', name='rewards', doc='Extract bandit reward(s) from DS json. Defaults to _label_cost.')
setDsJsonColumn(value)[source]
Parameters

dsJsonColumn – Column containing ds-json. Defaults to “value”.

setParams(dsJsonColumn='value', rewards={'reward': '_label_cost'})[source]

Set the (keyword-only) parameters.

setRewards(value)[source]
Parameters

rewards – Extract bandit reward(s) from DS json. Defaults to _label_cost.
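The rewards parameter maps output reward names to fields of each ds-json record. A rough pure-Python sketch of that per-row extraction (assuming each input row holds one JSON string; ds-json field names other than _label_cost are shown only as examples):

```python
import json

def extract_rewards(ds_json_line, rewards):
    """Parse one ds-json record and pull out the configured reward fields.
    rewards maps output names to JSON field names, mirroring the
    transformer's default {'reward': '_label_cost'}."""
    record = json.loads(ds_json_line)
    return {name: record.get(field) for name, field in rewards.items()}

line = '{"_label_cost": -1.0, "_label_probability": 0.8, "a": [1, 2]}'
extract_rewards(line, {"reward": "_label_cost"})
# {'reward': -1.0}
```

Note that ds-json logs costs, so downstream reward analysis conventionally negates _label_cost.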

synapse.ml.vw.VowpalWabbitFeaturizer module

class synapse.ml.vw.VowpalWabbitFeaturizer.VowpalWabbitFeaturizer(java_obj=None, inputCols=[], numBits=30, outputCol='features', prefixStringsWithColumnName=True, preserveOrderNumBits=0, seed=0, stringSplitInputCols=[], sumCollisions=True)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Parameters
  • inputCols (list) – The names of the input columns

  • numBits (int) – Number of bits used to mask

  • outputCol (str) – The name of the output column

  • prefixStringsWithColumnName (bool) – Prefix string features with column name

  • preserveOrderNumBits (int) – Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to fit the maximum number of words

  • seed (int) – Hash seed

  • stringSplitInputCols (list) – Input cols that should be split at word boundaries

  • sumCollisions (bool) – Sums collisions if true, otherwise removes them

getInputCols()[source]
Returns

The names of the input columns

Return type

inputCols

static getJavaPackage()[source]

Returns the package name string.

getNumBits()[source]
Returns

Number of bits used to mask

Return type

numBits

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

getPrefixStringsWithColumnName()[source]
Returns

Prefix string features with column name

Return type

prefixStringsWithColumnName

getPreserveOrderNumBits()[source]
Returns

Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to fit the maximum number of words

Return type

preserveOrderNumBits

getSeed()[source]
Returns

Hash seed

Return type

seed

getStringSplitInputCols()[source]
Returns

Input cols that should be split at word boundaries

Return type

stringSplitInputCols

getSumCollisions()[source]
Returns

Sums collisions if true, otherwise removes them

Return type

sumCollisions

inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
numBits = Param(parent='undefined', name='numBits', doc='Number of bits used to mask')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
prefixStringsWithColumnName = Param(parent='undefined', name='prefixStringsWithColumnName', doc='Prefix string features with column name')
preserveOrderNumBits = Param(parent='undefined', name='preserveOrderNumBits', doc='Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to fit count the maximum number of words')
classmethod read()[source]

Returns an MLReader instance for this class.

seed = Param(parent='undefined', name='seed', doc='Hash seed')
setInputCols(value)[source]
Parameters

inputCols – The names of the input columns

setNumBits(value)[source]
Parameters

numBits – Number of bits used to mask

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(inputCols=[], numBits=30, outputCol='features', prefixStringsWithColumnName=True, preserveOrderNumBits=0, seed=0, stringSplitInputCols=[], sumCollisions=True)[source]

Set the (keyword-only) parameters.

setPrefixStringsWithColumnName(value)[source]
Parameters

prefixStringsWithColumnName – Prefix string features with column name

setPreserveOrderNumBits(value)[source]
Parameters

preserveOrderNumBits – Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to fit the maximum number of words

setSeed(value)[source]
Parameters

seed – Hash seed

setStringSplitInputCols(value)[source]
Parameters

stringSplitInputCols – Input cols that should be split at word boundaries

setSumCollisions(value)[source]
Parameters

sumCollisions – Sums collisions if true, otherwise removes them

stringSplitInputCols = Param(parent='undefined', name='stringSplitInputCols', doc='Input cols that should be split at word boundaries')
sumCollisions = Param(parent='undefined', name='sumCollisions', doc='Sums collisions if true, otherwise removes them')
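The parameters above describe VW-style feature hashing: each feature name is hashed, the hash is masked down to numBits bits to form the index, and colliding features are either summed or dropped. A simplified sketch of that scheme (using Python's zlib.crc32 as a stand-in hash; the real featurizer's hash function and namespace handling differ):

```python
import zlib

def featurize(features, num_bits=30, sum_collisions=True):
    """Hash a {name: value} feature map into a sparse {index: value} vector.
    Indices are masked to num_bits bits; on a collision the values are
    summed or the colliding entry is removed, mirroring sumCollisions."""
    mask = (1 << num_bits) - 1
    out = {}
    for name, value in features.items():
        idx = zlib.crc32(name.encode()) & mask
        if idx in out:
            if sum_collisions:
                out[idx] += value
            else:
                del out[idx]
        else:
            out[idx] = value
    return out

vec = featurize({"age": 32.0, "income": 55000.0}, num_bits=18)
# every index fits in 18 bits
```

Lowering numBits shrinks the feature space but raises the collision rate, which is why the collision-handling policy is configurable.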

synapse.ml.vw.VowpalWabbitGeneric module

class synapse.ml.vw.VowpalWabbitGeneric.VowpalWabbitGeneric(java_obj=None, hashSeed=0, ignoreNamespaces=None, initialModel=None, inputCol='value', interactions=None, l1=None, l2=None, learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, predictionIdCol=None, splitCol=None, splitColValues=None, useBarrierExecutionMode=True)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
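Unlike the classifier and regressor, VowpalWabbitGeneric consumes raw VW text-format examples from inputCol rather than Spark vector columns. A small helper sketch that assembles such a line (label, optional importance weight, and one namespace of name:value features, following the VW input format):

```python
def vw_line(label, features, namespace="f", weight=None):
    """Build one VW text-format example, e.g. '1 |f a:1 b:0.5'."""
    head = str(label)
    if weight is not None:
        head += f" {weight}"
    feats = " ".join(f"{k}:{v}" for k, v in features.items())
    return f"{head} |{namespace} {feats}"

vw_line(1, {"a": 1, "b": 0.5})
# '1 |f a:1 b:0.5'
```

Rows of such strings in the configured inputCol (default 'value') are passed through to VW, with training behavior driven by passThroughArgs.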

synapse.ml.vw.VowpalWabbitGenericModel module

class synapse.ml.vw.VowpalWabbitGenericModel.VowpalWabbitGenericModel(java_obj=None, inputCol=None, model=None, oneStepAheadPredictions=None, performanceStatistics=None, testArgs='')[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

synapse.ml.vw.VowpalWabbitGenericProgressive module

class synapse.ml.vw.VowpalWabbitGenericProgressive.VowpalWabbitGenericProgressive(java_obj=None, hashSeed=0, ignoreNamespaces=None, initialModel=None, inputCol='input', interactions=None, l1=None, l2=None, learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, useBarrierExecutionMode=True)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

synapse.ml.vw.VowpalWabbitInteractions module

class synapse.ml.vw.VowpalWabbitInteractions.VowpalWabbitInteractions(java_obj=None, inputCols=None, numBits=30, outputCol=None, sumCollisions=True)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Parameters
  • inputCols (list) – The names of the input columns

  • numBits (int) – Number of bits used to mask

  • outputCol (str) – The name of the output column

  • sumCollisions (bool) – Sums collisions if true, otherwise removes them

getInputCols()[source]
Returns

The names of the input columns

Return type

inputCols

static getJavaPackage()[source]

Returns the package name string.

getNumBits()[source]
Returns

Number of bits used to mask

Return type

numBits

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

getSumCollisions()[source]
Returns

Sums collisions if true, otherwise removes them

Return type

sumCollisions

inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
numBits = Param(parent='undefined', name='numBits', doc='Number of bits used to mask')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
classmethod read()[source]

Returns an MLReader instance for this class.

setInputCols(value)[source]
Parameters

inputCols – The names of the input columns

setNumBits(value)[source]
Parameters

numBits – Number of bits used to mask

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(inputCols=None, numBits=30, outputCol=None, sumCollisions=True)[source]

Set the (keyword-only) parameters.

setSumCollisions(value)[source]
Parameters

sumCollisions – Sums collisions if true, otherwise removes them

sumCollisions = Param(parent='undefined', name='sumCollisions', doc='Sums collisions if true, otherwise removes them')

synapse.ml.vw.VowpalWabbitPythonBase module

class synapse.ml.vw.VowpalWabbitPythonBase.VowpalWabbitPythonBase[source]

Bases: object

parallelFit(dataset, param_maps)[source]
setInitialModel(model)[source]

Initialize the estimator with a previously trained model.

class synapse.ml.vw.VowpalWabbitPythonBase.VowpalWabbitPythonBaseModel[source]

Bases: object

getNativeModel()[source]

Get the binary native VW model.

getPerformanceStatistics()[source]
getReadableModel()[source]
saveNativeModel(filename)[source]

Save the native model to a local or WASB remote location.

synapse.ml.vw.VowpalWabbitPythonBase.to_java_params(sc, model, pyParamMap)[source]

synapse.ml.vw.VowpalWabbitRegressionModel module

class synapse.ml.vw.VowpalWabbitRegressionModel.VowpalWabbitRegressionModel(java_obj=None, additionalFeatures=None, featuresCol='features', labelCol='label', model=None, oneStepAheadPredictions=None, performanceStatistics=None, predictionCol='prediction', rawPredictionCol='rawPrediction', testArgs='')[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

synapse.ml.vw.VowpalWabbitRegressor module

class synapse.ml.vw.VowpalWabbitRegressor.VowpalWabbitRegressor(java_obj=None, additionalFeatures=[], featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, predictionCol='prediction', predictionIdCol=None, splitCol=None, splitColValues=None, useBarrierExecutionMode=True, weightCol=None)[source]

Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]

Module contents

SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.

SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services backed by your Spark cluster.

SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.