synapse.ml.vw package
Submodules
synapse.ml.vw.VectorZipper module
- class synapse.ml.vw.VectorZipper.VectorZipper(java_obj=None, inputCols=None, outputCol=None)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
inputCols (list) – The names of the input columns
outputCol (str) – The name of the output column
- inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
- outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
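A minimal usage sketch (the column names and df below are hypothetical; the package-level import and the standard Transformer.transform contract are assumptions):

from synapse.ml.vw import VectorZipper

# Combine two hypothetical vector columns into a single output column.
zipper = VectorZipper(inputCols=["features_a", "features_b"], outputCol="zipped")
zipped_df = zipper.transform(df)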
synapse.ml.vw.VowpalWabbitCSETransformer module
- class synapse.ml.vw.VowpalWabbitCSETransformer.VowpalWabbitCSETransformer(java_obj=None, maxImportanceWeight=100.0, metricsStratificationCols=[], minImportanceWeight=0.0)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
maxImportanceWeight (float) – Clip importance weight at this upper bound. Defaults to 100.
metricsStratificationCols (list) – Optional list of column names to stratify rewards by.
minImportanceWeight (float) – Clip importance weight at this lower bound. Defaults to 0.
- getMaxImportanceWeight()[source]
- Returns
Clip importance weight at this upper bound. Defaults to 100.
- Return type
maxImportanceWeight
- getMetricsStratificationCols()[source]
- Returns
Optional list of column names to stratify rewards by.
- Return type
metricsStratificationCols
- getMinImportanceWeight()[source]
- Returns
Clip importance weight at this lower bound. Defaults to 0.
- Return type
minImportanceWeight
- maxImportanceWeight = Param(parent='undefined', name='maxImportanceWeight', doc='Clip importance weight at this upper bound. Defaults to 100.')
- metricsStratificationCols = Param(parent='undefined', name='metricsStratificationCols', doc='Optional list of column names to stratify rewards by.')
- minImportanceWeight = Param(parent='undefined', name='minImportanceWeight', doc='Clip importance weight at this lower bound. Defaults to 0.')
- setMaxImportanceWeight(value)[source]
- Parameters
maxImportanceWeight – Clip importance weight at this upper bound. Defaults to 100.
- setMetricsStratificationCols(value)[source]
- Parameters
metricsStratificationCols – Optional list of column names to stratify rewards by.
- setMinImportanceWeight(value)[source]
- Parameters
minImportanceWeight – Clip importance weight at this lower bound. Defaults to 0.
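A hedged configuration sketch (decisions_df and the stratification column country are hypothetical; the keyword arguments are those in the documented constructor):

from synapse.ml.vw import VowpalWabbitCSETransformer

# Tighten the upper importance-weight clip from the default of 100
# and stratify the reported metrics by a hypothetical country column.
cse = VowpalWabbitCSETransformer(
    minImportanceWeight=0.0,
    maxImportanceWeight=50.0,
    metricsStratificationCols=["country"],
)
metrics_df = cse.transform(decisions_df)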
synapse.ml.vw.VowpalWabbitClassificationModel module
- class synapse.ml.vw.VowpalWabbitClassificationModel.VowpalWabbitClassificationModel(java_obj=None, additionalFeatures=None, featuresCol='features', labelCol='label', model=None, numClassesModel=None, oneStepAheadPredictions=None, performanceStatistics=None, predictionCol='prediction', probabilityCol='probability', rawPredictionCol='rawPrediction', testArgs='', thresholds=None)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
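Because the class is MLReadable, a previously saved model can be restored with the standard load entry point (the path, test_df, and the package-level import are hypothetical):

from synapse.ml.vw import VowpalWabbitClassificationModel

model = VowpalWabbitClassificationModel.load("/models/vw_classifier")  # hypothetical path
predictions = model.transform(test_df)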
synapse.ml.vw.VowpalWabbitClassifier module
- class synapse.ml.vw.VowpalWabbitClassifier.VowpalWabbitClassifier(java_obj=None, additionalFeatures=[], featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', labelConversion=False, learningRate=None, numBits=18, numClasses=2, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, predictionCol='prediction', predictionIdCol=None, probabilityCol='probability', rawPredictionCol='rawPrediction', splitCol=None, splitColValues=None, thresholds=None, useBarrierExecutionMode=True, weightCol=None)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
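A minimal train-and-score sketch (raw_train and raw_test are hypothetical DataFrames with a text and a label column; the keyword arguments come from the documented constructor, and --loss_function logistic is a standard Vowpal Wabbit argument):

from synapse.ml.vw import VowpalWabbitFeaturizer, VowpalWabbitClassifier

featurizer = VowpalWabbitFeaturizer(stringSplitInputCols=["text"], outputCol="features")
train = featurizer.transform(raw_train)

classifier = VowpalWabbitClassifier(
    featuresCol="features",
    labelCol="label",
    labelConversion=True,  # assumption: maps 0/1 labels to VW's -1/1 convention
    numPasses=3,
    passThroughArgs="--loss_function logistic",
)
model = classifier.fit(train)
predictions = model.transform(featurizer.transform(raw_test))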
synapse.ml.vw.VowpalWabbitContextualBandit module
- class synapse.ml.vw.VowpalWabbitContextualBandit.VowpalWabbitContextualBandit(java_obj=None, additionalFeatures=[], additionalSharedFeatures=[], chosenActionCol='chosenAction', epsilon=0.05, featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, parallelism=1, passThroughArgs='', powerT=None, predictionCol='prediction', predictionIdCol=None, probabilityCol='probability', sharedCol='shared', splitCol=None, splitColValues=None, useBarrierExecutionMode=True, weightCol=None)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
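A hedged sketch of fitting on logged bandit interactions (bandit_df and its column contents are hypothetical; whether the contextual-bandit mode flag must be supplied explicitly via passThroughArgs is an assumption):

from synapse.ml.vw import VowpalWabbitContextualBandit

cb = VowpalWabbitContextualBandit(
    sharedCol="shared",            # shared context features
    featuresCol="features",        # per-action features
    chosenActionCol="chosenAction",
    probabilityCol="probability",  # logged probability of the chosen action
    labelCol="label",              # observed cost
    epsilon=0.1,
    passThroughArgs="--cb_explore_adf",  # standard VW contextual-bandit mode (assumption)
)
cb_model = cb.fit(bandit_df)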
synapse.ml.vw.VowpalWabbitContextualBanditModel module
- class synapse.ml.vw.VowpalWabbitContextualBanditModel.VowpalWabbitContextualBanditModel(java_obj=None, additionalFeatures=[], additionalSharedFeatures=[], featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, model=None, numBits=18, numPasses=1, numSyncsPerPass=0, oneStepAheadPredictions=None, passThroughArgs='', performanceStatistics=None, powerT=None, predictionCol='prediction', predictionIdCol=None, rawPredictionCol='rawPrediction', sharedCol='shared', splitCol=None, splitColValues=None, testArgs='', useBarrierExecutionMode=True, weightCol=None)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
synapse.ml.vw.VowpalWabbitDSJsonTransformer module
- class synapse.ml.vw.VowpalWabbitDSJsonTransformer.VowpalWabbitDSJsonTransformer(java_obj=None, dsJsonColumn='value', rewards={'reward': '_label_cost'})[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
dsJsonColumn (str) – Column containing ds-json. Defaults to "value".
rewards (dict) – Extract bandit reward(s) from DS json. Defaults to _label_cost.
- dsJsonColumn = Param(parent='undefined', name='dsJsonColumn', doc='Column containing ds-json. Defaults to "value".')
- getDsJsonColumn()[source]
- Returns
Column containing ds-json. Defaults to "value".
- Return type
dsJsonColumn
- getRewards()[source]
- Returns
Extract bandit reward(s) from DS json. Defaults to _label_cost.
- Return type
rewards
- rewards = Param(parent='undefined', name='rewards', doc='Extract bandit reward(s) from DS json. Defaults to _label_cost.')
- setDsJsonColumn(value)[source]
- Parameters
dsJsonColumn – Column containing ds-json. Defaults to "value".
- setRewards(value)[source]
- Parameters
rewards – Extract bandit reward(s) from DS json. Defaults to _label_cost.
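A minimal parsing sketch (the file path is hypothetical; spark.read.text yields a "value" column, which matches the documented default of dsJsonColumn):

from pyspark.sql import SparkSession
from synapse.ml.vw import VowpalWabbitDSJsonTransformer

spark = SparkSession.builder.getOrCreate()
dsjson_df = spark.read.text("decisions.json")  # hypothetical path; one ds-json document per line

parser = VowpalWabbitDSJsonTransformer(
    dsJsonColumn="value",
    rewards={"reward": "_label_cost"},  # the documented default mapping
)
parsed_df = parser.transform(dsjson_df)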
synapse.ml.vw.VowpalWabbitFeaturizer module
- class synapse.ml.vw.VowpalWabbitFeaturizer.VowpalWabbitFeaturizer(java_obj=None, inputCols=[], numBits=30, outputCol='features', prefixStringsWithColumnName=True, preserveOrderNumBits=0, seed=0, stringSplitInputCols=[], sumCollisions=True)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
prefixStringsWithColumnName (bool) – Prefix string features with column name
preserveOrderNumBits (int) – Number of bits used to preserve the feature order. This reduces the hash size and needs to be large enough to count the maximum number of words.
stringSplitInputCols (list) – Input columns that should be split at word boundaries
sumCollisions (bool) – Sums collisions if true, otherwise removes them
- getPrefixStringsWithColumnName()[source]
- Returns
Prefix string features with column name
- Return type
prefixStringsWithColumnName
- getPreserveOrderNumBits()[source]
- Returns
Number of bits used to preserve the feature order. This reduces the hash size and needs to be large enough to count the maximum number of words.
- Return type
preserveOrderNumBits
- getStringSplitInputCols()[source]
- Returns
Input cols that should be split at word boundaries
- Return type
stringSplitInputCols
- getSumCollisions()[source]
- Returns
Sums collisions if true, otherwise removes them
- Return type
sumCollisions
- inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
- numBits = Param(parent='undefined', name='numBits', doc='Number of bits used to mask')
- outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
- prefixStringsWithColumnName = Param(parent='undefined', name='prefixStringsWithColumnName', doc='Prefix string features with column name')
- preserveOrderNumBits = Param(parent='undefined', name='preserveOrderNumBits', doc='Number of bits used to preserve the feature order. This reduces the hash size and needs to be large enough to count the maximum number of words.')
- seed = Param(parent='undefined', name='seed', doc='Hash seed')
- setParams(inputCols=[], numBits=30, outputCol='features', prefixStringsWithColumnName=True, preserveOrderNumBits=0, seed=0, stringSplitInputCols=[], sumCollisions=True)[source]
Set the keyword-only parameters.
- setPrefixStringsWithColumnName(value)[source]
- Parameters
prefixStringsWithColumnName – Prefix string features with column name
- setPreserveOrderNumBits(value)[source]
- Parameters
preserveOrderNumBits – Number of bits used to preserve the feature order. This reduces the hash size and needs to be large enough to count the maximum number of words.
- setStringSplitInputCols(value)[source]
- Parameters
stringSplitInputCols – Input columns that should be split at word boundaries
- setSumCollisions(value)[source]
- Parameters
sumCollisions – Sums collisions if true, otherwise removes them
- stringSplitInputCols = Param(parent='undefined', name='stringSplitInputCols', doc='Input cols that should be split at word boundaries')
- sumCollisions = Param(parent='undefined', name='sumCollisions', doc='Sums collisions if true, otherwise removes them')
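A hedged featurization sketch (items_df is hypothetical; the assumption is that free-text columns go in stringSplitInputCols so they are split at word boundaries, while other columns go in inputCols):

from synapse.ml.vw import VowpalWabbitFeaturizer

featurizer = VowpalWabbitFeaturizer(
    inputCols=["price"],
    stringSplitInputCols=["title"],
    prefixStringsWithColumnName=True,  # reduce collisions across columns
    numBits=18,                        # 2^18 hash buckets instead of the default 2^30
    outputCol="features",
)
featurized = featurizer.transform(items_df)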
synapse.ml.vw.VowpalWabbitGeneric module
- class synapse.ml.vw.VowpalWabbitGeneric.VowpalWabbitGeneric(java_obj=None, hashSeed=0, ignoreNamespaces=None, initialModel=None, inputCol='value', interactions=None, l1=None, l2=None, learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, predictionIdCol=None, splitCol=None, splitColValues=None, useBarrierExecutionMode=True)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
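A sketch that trains directly on rows in native Vowpal Wabbit text format (the example data is hypothetical; inputCol defaults to "value"):

from pyspark.sql import SparkSession
from synapse.ml.vw import VowpalWabbitGeneric

spark = SparkSession.builder.getOrCreate()
# Each row holds one example in VW text format: "<label> | <name>:<value> ...".
vw_df = spark.createDataFrame(
    [("1 | price:0.23 sqft:0.25",), ("0 | price:0.18 sqft:0.15",)], ["value"]
)

vw = VowpalWabbitGeneric(inputCol="value", numPasses=2)
generic_model = vw.fit(vw_df)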
synapse.ml.vw.VowpalWabbitGenericModel module
synapse.ml.vw.VowpalWabbitGenericProgressive module
- class synapse.ml.vw.VowpalWabbitGenericProgressive.VowpalWabbitGenericProgressive(java_obj=None, hashSeed=0, ignoreNamespaces=None, initialModel=None, inputCol='input', interactions=None, l1=None, l2=None, learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, useBarrierExecutionMode=True)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
synapse.ml.vw.VowpalWabbitInteractions module
- class synapse.ml.vw.VowpalWabbitInteractions.VowpalWabbitInteractions(java_obj=None, inputCols=None, numBits=30, outputCol=None, sumCollisions=True)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
inputCols (list) – The names of the input columns
numBits (int) – Number of bits used to mask
outputCol (str) – The name of the output column
sumCollisions (bool) – Sums collisions if true, otherwise removes them
- getSumCollisions()[source]
- Returns
Sums collisions if true, otherwise removes them
- Return type
sumCollisions
- inputCols = Param(parent='undefined', name='inputCols', doc='The names of the input columns')
- numBits = Param(parent='undefined', name='numBits', doc='Number of bits used to mask')
- outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
- setParams(inputCols=None, numBits=30, outputCol=None, sumCollisions=True)[source]
Set the keyword-only parameters.
- setSumCollisions(value)[source]
- Parameters
sumCollisions – Sums collisions if true, otherwise removes them
- sumCollisions = Param(parent='undefined', name='sumCollisions', doc='Sums collisions if true, otherwise removes them')
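A hedged sketch that crosses two previously hashed vector columns (featurized_df is hypothetical; the assumption, by analogy with VW's -q flag, is that the interaction of the input columns is hashed into outputCol):

from synapse.ml.vw import VowpalWabbitInteractions

interactions = VowpalWabbitInteractions(
    inputCols=["user_features", "item_features"],
    outputCol="userItemInteractions",
    numBits=24,
)
crossed_df = interactions.transform(featurized_df)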
synapse.ml.vw.VowpalWabbitPythonBase module
synapse.ml.vw.VowpalWabbitRegressionModel module
- class synapse.ml.vw.VowpalWabbitRegressionModel.VowpalWabbitRegressionModel(java_obj=None, additionalFeatures=None, featuresCol='features', labelCol='label', model=None, oneStepAheadPredictions=None, performanceStatistics=None, predictionCol='prediction', rawPredictionCol='rawPrediction', testArgs='')[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
synapse.ml.vw.VowpalWabbitRegressor module
- class synapse.ml.vw.VowpalWabbitRegressor.VowpalWabbitRegressor(java_obj=None, additionalFeatures=[], featuresCol='features', hashSeed=0, ignoreNamespaces=None, initialModel=None, interactions=None, l1=None, l2=None, labelCol='label', learningRate=None, numBits=18, numPasses=1, numSyncsPerPass=0, passThroughArgs='', powerT=None, predictionCol='prediction', predictionIdCol=None, splitCol=None, splitColValues=None, useBarrierExecutionMode=True, weightCol=None)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
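A minimal regression sketch in the same style as the classifier example above (raw_train is hypothetical; --holdout_off is a standard VW flag that disables holdout evaluation across multiple passes):

from synapse.ml.vw import VowpalWabbitFeaturizer, VowpalWabbitRegressor

featurizer = VowpalWabbitFeaturizer(inputCols=["age", "income"], outputCol="features")
train = featurizer.transform(raw_train)

regressor = VowpalWabbitRegressor(
    featuresCol="features",
    labelCol="label",
    learningRate=0.5,
    numPasses=5,
    passThroughArgs="--holdout_off",
)
reg_model = regressor.fit(train)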
Module contents
SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.
SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services backed by your Spark cluster.
SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.