synapse.ml.vw package

Submodules

synapse.ml.vw.VectorZipper module

class synapse.ml.vw.VectorZipper.VectorZipper(java_obj=None, inputCols=None, outputCol=None)[source]

Bases: synapse.ml.core.schema.Utils.ComplexParamsMixin, pyspark.ml.util.JavaMLReadable, pyspark.ml.util.JavaMLWritable, pyspark.ml.wrapper.JavaTransformer

Parameters
  • inputCols (list) – The names of the input columns

  • outputCol (object) – The name of the output column

getInputCols()[source]
Returns

The names of the input columns

Return type

inputCols

static getJavaPackage()[source]

Returns the package name string.

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
classmethod read()[source]

Returns an MLReader instance for this class.

setInputCols(value)[source]
Parameters

inputCols – The names of the input columns

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(inputCols=None, outputCol=None)[source]

Set the (keyword only) parameters
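
Example: a minimal usage sketch for VectorZipper, assuming a Spark session with SynapseML available on the cluster. The DataFrame, its column names, and the vector-typed inputs are hypothetical; the exact column types the transformer accepts are not spelled out in this reference.

    from pyspark.ml.linalg import Vectors
    from pyspark.sql import SparkSession
    from synapse.ml.vw.VectorZipper import VectorZipper

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical data: two vector columns per row.
    df = spark.createDataFrame(
        [(Vectors.dense([1.0, 2.0]), Vectors.dense([3.0])),
         (Vectors.dense([0.5, 0.1]), Vectors.dense([4.0]))],
        ["featuresA", "featuresB"],
    )

    # Combine the input columns into a single output column.
    zipper = VectorZipper(inputCols=["featuresA", "featuresB"], outputCol="zipped")
    zipper.transform(df).show(truncate=False)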

synapse.ml.vw.VowpalWabbitClassificationModel module

class synapse.ml.vw.VowpalWabbitClassificationModel.VowpalWabbitClassificationModel(java_obj=None, additionalFeatures=None, featuresCol='features', labelCol='label', model=None, performanceStatistics=None, predictionCol='prediction', probabilityCol='probability', rawPredictionCol='rawPrediction', testArgs='', thresholds=None)[source]

Bases: synapse.ml.vw._VowpalWabbitClassificationModel._VowpalWabbitClassificationModel

getNativeModel()[source]

Get the binary native VW model.

getPerformanceStatistics()[source]
Returns

Performance statistics collected during training

Return type

performanceStatistics

getReadableModel()[source]
saveNativeModel(filename)[source]

Save the native model to a local or WASB remote location.
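
Example: a sketch of inspecting and exporting a fitted model, assuming model is a VowpalWabbitClassificationModel returned by VowpalWabbitClassifier.fit. The output path is hypothetical, and the performance statistics are assumed to come back as a Spark DataFrame.

    # `model` is assumed to be a fitted VowpalWabbitClassificationModel.

    # Human-readable view of the learned model.
    print(model.getReadableModel())

    # Raw binary VW model, e.g. for use with the vw command-line tool.
    native_bytes = model.getNativeModel()

    # Persist the native model; the path is hypothetical and may also be a
    # WASB remote location.
    model.saveNativeModel("/tmp/vw_classifier.model")

    # Performance statistics collected during training (assumed DataFrame).
    model.getPerformanceStatistics().show()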

synapse.ml.vw.VowpalWabbitClassifier module

class synapse.ml.vw.VowpalWabbitClassifier.VowpalWabbitClassifier(java_obj=None, additionalFeatures=[], args='', featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', labelConversion=True, learningRate=None, numBits=18, numPasses=1, powerT=None, predictionCol='prediction', probabilityCol='probability', rawPredictionCol='rawPrediction', thresholds=None, useBarrierExecutionMode=True, weightCol=None)[source]

Bases: synapse.ml.vw._VowpalWabbitClassifier._VowpalWabbitClassifier

setInitialModel(model)[source]

Initialize the estimator with a previously trained model.
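
Example: a minimal training sketch, assuming train_df is a DataFrame with "label" and "features" columns (for instance, the output of VowpalWabbitFeaturizer). The VW arguments shown are illustrative.

    from synapse.ml.vw.VowpalWabbitClassifier import VowpalWabbitClassifier

    classifier = VowpalWabbitClassifier(
        labelCol="label",
        featuresCol="features",
        numPasses=3,
        args="--loss_function=logistic",  # passed through to Vowpal Wabbit
    )
    model = classifier.fit(train_df)         # VowpalWabbitClassificationModel
    predictions = model.transform(train_df)  # adds prediction/probability columns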

synapse.ml.vw.VowpalWabbitContextualBandit module

class synapse.ml.vw.VowpalWabbitContextualBandit.VowpalWabbitContextualBandit(java_obj=None, additionalFeatures=[], additionalSharedFeatures=[], args='', chosenActionCol='chosenAction', epsilon=0.05, featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, numBits=18, numPasses=1, parallelism=1, powerT=None, predictionCol='prediction', probabilityCol='probability', sharedCol='shared', useBarrierExecutionMode=True, weightCol=None)[source]

Bases: synapse.ml.vw._VowpalWabbitContextualBandit._VowpalWabbitContextualBandit

parallelFit(dataset, param_maps)[source]
setInitialModel(model)[source]

Initialize the estimator with a previously trained model.
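
Example: a configuration sketch, assuming cb_df holds shared context features, per-action features, the index of the logged action, the observed label for that action, and the logging policy's probability. The DataFrame and its column names are hypothetical, and the VW arguments are illustrative.

    from synapse.ml.vw.VowpalWabbitContextualBandit import VowpalWabbitContextualBandit

    cb = VowpalWabbitContextualBandit(
        sharedCol="shared",              # shared (context) features
        featuresCol="features",          # per-action features
        chosenActionCol="chosenAction",  # index of the logged action
        labelCol="label",                # observed label for the chosen action
        probabilityCol="probability",    # logging probability of that action
        epsilon=0.05,
        args="--cb_explore_adf",         # illustrative Vowpal Wabbit arguments
    )
    cb_model = cb.fit(cb_df)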

synapse.ml.vw.VowpalWabbitContextualBandit.to_java_params(sc, model, pyParamMap)[source]

synapse.ml.vw.VowpalWabbitContextualBanditModel module

class synapse.ml.vw.VowpalWabbitContextualBanditModel.VowpalWabbitContextualBanditModel(java_obj=None, additionalFeatures=[], additionalSharedFeatures=[], args='', featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, model=None, numBits=18, numPasses=1, performanceStatistics=None, powerT=None, predictionCol='prediction', rawPredictionCol='rawPrediction', sharedCol='shared', testArgs='', useBarrierExecutionMode=True, weightCol=None)[source]

Bases: synapse.ml.vw._VowpalWabbitContextualBanditModel._VowpalWabbitContextualBanditModel

getNativeModel()[source]

Get the binary native VW model.

getPerformanceStatistics()[source]
Returns

Performance statistics collected during training

Return type

performanceStatistics

getReadableModel()[source]
saveNativeModel(filename)[source]

Save the native model to a local or WASB remote location.
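
Example: a brief sketch of exporting a fitted contextual-bandit model, assuming cb_model was produced by VowpalWabbitContextualBandit.fit. The path is hypothetical and the statistics are assumed to come back as a Spark DataFrame.

    # `cb_model` is assumed to be a fitted VowpalWabbitContextualBanditModel.
    cb_model.saveNativeModel("/tmp/vw_cb.model")   # hypothetical local path
    print(cb_model.getReadableModel())             # human-readable model
    cb_model.getPerformanceStatistics().show()     # training statistics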

synapse.ml.vw.VowpalWabbitFeaturizer module

class synapse.ml.vw.VowpalWabbitFeaturizer.VowpalWabbitFeaturizer(java_obj=None, inputCols=[], numBits=30, outputCol='features', prefixStringsWithColumnName=True, preserveOrderNumBits=0, seed=0, stringSplitInputCols=[], sumCollisions=True)[source]

Bases: synapse.ml.core.schema.Utils.ComplexParamsMixin, pyspark.ml.util.JavaMLReadable, pyspark.ml.util.JavaMLWritable, pyspark.ml.wrapper.JavaTransformer

Parameters
  • inputCols (list) – The names of the input columns

  • numBits (int) – Number of bits used to mask

  • outputCol (object) – The name of the output column

  • prefixStringsWithColumnName (bool) – Prefix string features with column name

  • preserveOrderNumBits (int) – Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to hold the maximum number of words

  • seed (int) – Hash seed

  • stringSplitInputCols (list) – Input cols that should be split at word boundaries

  • sumCollisions (bool) – Sums collisions if true, otherwise removes them

getInputCols()[source]
Returns

The names of the input columns

Return type

inputCols

static getJavaPackage()[source]

Returns the package name string.

getNumBits()[source]
Returns

Number of bits used to mask

Return type

numBits

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

getPrefixStringsWithColumnName()[source]
Returns

Prefix string features with column name

Return type

prefixStringsWithColumnName

getPreserveOrderNumBits()[source]
Returns

Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to hold the maximum number of words

Return type

preserveOrderNumBits

getSeed()[source]
Returns

Hash seed

Return type

seed

getStringSplitInputCols()[source]
Returns

Input cols that should be split at word boundaries

Return type

stringSplitInputCols

getSumCollisions()[source]
Returns

Sums collisions if true, otherwise removes them

Return type

sumCollisions

inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
numBits = Param(parent='undefined', name='numBits', doc='Number of bits used to mask')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
prefixStringsWithColumnName = Param(parent='undefined', name='prefixStringsWithColumnName', doc='Prefix string features with column name')
preserveOrderNumBits = Param(parent='undefined', name='preserveOrderNumBits', doc='Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to hold the maximum number of words')
classmethod read()[source]

Returns an MLReader instance for this class.

seed = Param(parent='undefined', name='seed', doc='Hash seed')
setInputCols(value)[source]
Parameters

inputCols – The names of the input columns

setNumBits(value)[source]
Parameters

numBits – Number of bits used to mask

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(inputCols=[], numBits=30, outputCol='features', prefixStringsWithColumnName=True, preserveOrderNumBits=0, seed=0, stringSplitInputCols=[], sumCollisions=True)[source]

Set the (keyword only) parameters

setPrefixStringsWithColumnName(value)[source]
Parameters

prefixStringsWithColumnName – Prefix string features with column name

setPreserveOrderNumBits(value)[source]
Parameters

preserveOrderNumBits – Number of bits used to preserve the feature order. This will reduce the hash size. Needs to be large enough to hold the maximum number of words

setSeed(value)[source]
Parameters

seed – Hash seed

setStringSplitInputCols(value)[source]
Parameters

stringSplitInputCols – Input cols that should be split at word boundaries

setSumCollisions(value)[source]
Parameters

sumCollisions – Sums collisions if true, otherwise removes them

stringSplitInputCols = Param(parent='undefined', name='stringSplitInputCols', doc='Input cols that should be split at word boundaries')
sumCollisions = Param(parent='undefined', name='sumCollisions', doc='Sums collisions if true, otherwise removes them')
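
Example: a minimal usage sketch, assuming a Spark session with SynapseML available. The toy DataFrame and column names are hypothetical, and the split of columns between inputCols and stringSplitInputCols is illustrative.

    from pyspark.sql import SparkSession
    from synapse.ml.vw.VowpalWabbitFeaturizer import VowpalWabbitFeaturizer

    spark = SparkSession.builder.getOrCreate()

    df = spark.createDataFrame(
        [(1.0, "sports", 0.5, "the quick brown fox"),
         (0.0, "news",   1.5, "lorem ipsum dolor")],
        ["label", "category", "score", "text"],
    )

    featurizer = VowpalWabbitFeaturizer(
        inputCols=["category", "score"],
        stringSplitInputCols=["text"],  # split this column at word boundaries
        numBits=18,                     # hash features into a 2^18 space
        outputCol="features",
    )
    featurizer.transform(df).select("label", "features").show(truncate=False)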

synapse.ml.vw.VowpalWabbitInteractions module

class synapse.ml.vw.VowpalWabbitInteractions.VowpalWabbitInteractions(java_obj=None, inputCols=None, numBits=30, outputCol=None, sumCollisions=True)[source]

Bases: synapse.ml.core.schema.Utils.ComplexParamsMixin, pyspark.ml.util.JavaMLReadable, pyspark.ml.util.JavaMLWritable, pyspark.ml.wrapper.JavaTransformer

Parameters
  • inputCols (list) – The names of the input columns

  • numBits (int) – Number of bits used to mask

  • outputCol (object) – The name of the output column

  • sumCollisions (bool) – Sums collisions if true, otherwise removes them

getInputCols()[source]
Returns

The names of the input columns

Return type

inputCols

static getJavaPackage()[source]

Returns the package name string.

getNumBits()[source]
Returns

Number of bits used to mask

Return type

numBits

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

getSumCollisions()[source]
Returns

Sums collisions if true, otherwise removes them

Return type

sumCollisions

inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
numBits = Param(parent='undefined', name='numBits', doc='Number of bits used to mask')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
classmethod read()[source]

Returns an MLReader instance for this class.

setInputCols(value)[source]
Parameters

inputCols – The names of the input columns

setNumBits(value)[source]
Parameters

numBits – Number of bits used to mask

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(inputCols=None, numBits=30, outputCol=None, sumCollisions=True)[source]

Set the (keyword only) parameters

setSumCollisions(value)[source]
Parameters

sumCollisions – Sums collisions if true, otherwise removes them

sumCollisions = Param(parent='undefined', name='sumCollisions', doc='Sums collisions if true, otherwise removes them')
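
Example: a sketch of hashing feature interactions, assuming featurized_df already contains two feature vector columns (for instance, produced by separate VowpalWabbitFeaturizer instances). The column names are hypothetical.

    from synapse.ml.vw.VowpalWabbitInteractions import VowpalWabbitInteractions

    interactions = VowpalWabbitInteractions(
        inputCols=["userFeatures", "itemFeatures"],  # hypothetical vector columns
        outputCol="interactionFeatures",
        numBits=30,          # hash space for the interacted features
        sumCollisions=True,  # sum hash collisions rather than dropping them
    )
    crossed = interactions.transform(featurized_df)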

synapse.ml.vw.VowpalWabbitRegressionModel module

class synapse.ml.vw.VowpalWabbitRegressionModel.VowpalWabbitRegressionModel(java_obj=None, additionalFeatures=None, featuresCol='features', labelCol='label', model=None, performanceStatistics=None, predictionCol='prediction', rawPredictionCol='rawPrediction', testArgs='')[source]

Bases: synapse.ml.vw._VowpalWabbitRegressionModel._VowpalWabbitRegressionModel

getNativeModel()[source]

Get the binary native VW model.

getPerformanceStatistics()[source]
Returns

Performance statistics collected during training

Return type

performanceStatistics

getReadableModel()[source]
saveNativeModel(filename)[source]

Save the native model to a local or WASB remote location.
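
Example: a sketch of exporting a fitted regression model, assuming reg_model was produced by VowpalWabbitRegressor.fit. The WASB path is hypothetical and the statistics are assumed to come back as a Spark DataFrame.

    # `reg_model` is assumed to be a fitted VowpalWabbitRegressionModel.
    reg_model.saveNativeModel("wasbs://models@myaccount.blob.core.windows.net/vw.model")
    print(reg_model.getReadableModel())          # human-readable model
    reg_model.getPerformanceStatistics().show()  # training statistics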

synapse.ml.vw.VowpalWabbitRegressor module

class synapse.ml.vw.VowpalWabbitRegressor.VowpalWabbitRegressor(java_obj=None, additionalFeatures=[], args='', featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, numBits=18, numPasses=1, powerT=None, predictionCol='prediction', useBarrierExecutionMode=True, weightCol=None)[source]

Bases: synapse.ml.vw._VowpalWabbitRegressor._VowpalWabbitRegressor

setInitialModel(model)[source]

Initialize the estimator with a previously trained model.
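
Example: a minimal training sketch, assuming train_df has "label" and "features" columns. The VW arguments are illustrative, and the commented warm start assumes prev_model is an earlier fitted model, consistent with the setInitialModel docstring above.

    from synapse.ml.vw.VowpalWabbitRegressor import VowpalWabbitRegressor

    regressor = VowpalWabbitRegressor(
        labelCol="label",
        featuresCol="features",
        numPasses=5,
        args="--holdout_off",  # illustrative Vowpal Wabbit arguments
    )

    # Optional warm start from a previously trained model:
    # regressor.setInitialModel(prev_model)

    reg_model = regressor.fit(train_df)
    predictions = reg_model.transform(train_df)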

Module contents

SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.

SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services, backed by your Spark cluster.

SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.