synapse.ml.featurize.text package

Submodules

synapse.ml.featurize.text.MultiNGram module

class synapse.ml.featurize.text.MultiNGram.MultiNGram(java_obj=None, inputCol=None, lengths=None, outputCol='MultiNGram_9617b7a20123_output')[source]

Bases: synapse.ml.core.schema.Utils.ComplexParamsMixin, pyspark.ml.util.JavaMLReadable, pyspark.ml.util.JavaMLWritable, pyspark.ml.wrapper.JavaTransformer

Parameters
  • inputCol (object) – The name of the input column

  • lengths (object) – the collection of lengths to use for ngram extraction

  • outputCol (object) – The name of the output column
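
Example

A minimal usage sketch, assuming a running SparkSession named spark with SynapseML installed; the sample data and the column names "text", "terms", and "ngrams" are illustrative, and passing a plain Python list to setLengths is assumed to be accepted by the param converter:

  from pyspark.ml.feature import Tokenizer
  from synapse.ml.featurize.text.MultiNGram import MultiNGram

  df = spark.createDataFrame(
      [("the quick brown fox jumps over the lazy dog",)], ["text"]
  )

  # MultiNGram expects an array-of-strings column, so tokenize first.
  tokens = Tokenizer(inputCol="text", outputCol="terms").transform(df)

  # Extract unigrams, bigrams, and trigrams in a single pass.
  ngrams = (
      MultiNGram()
      .setInputCol("terms")
      .setLengths([1, 2, 3])
      .setOutputCol("ngrams")
      .transform(tokens)
  )
  ngrams.select("ngrams").show(truncate=False)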

getInputCol()[source]
Returns

The name of the input column

Return type

inputCol

static getJavaPackage()[source]

Returns package name String.

getLengths()[source]
Returns

the collection of lengths to use for ngram extraction

Return type

lengths

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

inputCol = Param(parent='undefined', name='inputCol', doc='The name of the input column')
lengths = Param(parent='undefined', name='lengths', doc='the collection of lengths to use for ngram extraction')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
classmethod read()[source]

Returns an MLReader instance for this class.

setInputCol(value)[source]
Parameters

inputCol – The name of the input column

setLengths(value)[source]
Parameters

lengths – the collection of lengths to use for ngram extraction

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(inputCol=None, lengths=None, outputCol='MultiNGram_9617b7a20123_output')[source]

Set the (keyword-only) parameters

synapse.ml.featurize.text.PageSplitter module

class synapse.ml.featurize.text.PageSplitter.PageSplitter(java_obj=None, boundaryRegex='\\s', inputCol=None, maximumPageLength=5000, minimumPageLength=4500, outputCol='PageSplitter_f5af9447952f_output')[source]

Bases: synapse.ml.core.schema.Utils.ComplexParamsMixin, pyspark.ml.util.JavaMLReadable, pyspark.ml.util.JavaMLWritable, pyspark.ml.wrapper.JavaTransformer

Parameters
  • boundaryRegex (object) – how to split into words

  • inputCol (object) – The name of the input column

  • maximumPageLength (int) – the maximum number of characters to be in a page

  • minimumPageLength (int) – the minimum number of characters to have on a page in order to preserve word boundaries

  • outputCol (object) – The name of the output column
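
Example

A minimal usage sketch, assuming a running SparkSession named spark with SynapseML installed; the document text, the column names "text" and "pages", and the chosen page-length values are illustrative:

  from synapse.ml.featurize.text.PageSplitter import PageSplitter

  df = spark.createDataFrame(
      [("a long document whose text should be split into pages ...",)], ["text"]
  )

  # Split each document into pages of roughly 500-1000 characters,
  # breaking on whitespace so that words are not cut in half.
  pages = (
      PageSplitter()
      .setInputCol("text")
      .setMinimumPageLength(500)
      .setMaximumPageLength(1000)
      .setOutputCol("pages")
      .transform(df)
  )
  pages.select("pages").show(truncate=False)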

boundaryRegex = Param(parent='undefined', name='boundaryRegex', doc='how to split into words')
getBoundaryRegex()[source]
Returns

how to split into words

Return type

boundaryRegex

getInputCol()[source]
Returns

The name of the input column

Return type

inputCol

static getJavaPackage()[source]

Returns package name String.

getMaximumPageLength()[source]
Returns

the maximum number of characters to be in a page

Return type

maximumPageLength

getMinimumPageLength()[source]
Returns

the minimum number of characters to have on a page in order to preserve word boundaries

Return type

minimumPageLength

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

inputCol = Param(parent='undefined', name='inputCol', doc='The name of the input column')
maximumPageLength = Param(parent='undefined', name='maximumPageLength', doc='the maximum number of characters to be in a page')
minimumPageLength = Param(parent='undefined', name='minimumPageLength', doc='the minimum number of characters to have on a page in order to preserve word boundaries')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
classmethod read()[source]

Returns an MLReader instance for this class.

setBoundaryRegex(value)[source]
Parameters

boundaryRegex – how to split into words

setInputCol(value)[source]
Parameters

inputCol – The name of the input column

setMaximumPageLength(value)[source]
Parameters

maximumPageLength – the maximum number of characters to be in a page

setMinimumPageLength(value)[source]
Parameters

minimumPageLength – the minimum number of characters to have on a page in order to preserve word boundaries

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(boundaryRegex='\\s', inputCol=None, maximumPageLength=5000, minimumPageLength=4500, outputCol='PageSplitter_f5af9447952f_output')[source]

Set the (keyword-only) parameters

synapse.ml.featurize.text.TextFeaturizer module

class synapse.ml.featurize.text.TextFeaturizer.TextFeaturizer(java_obj=None, binary=False, caseSensitiveStopWords=False, defaultStopWordLanguage='english', inputCol=None, minDocFreq=1, minTokenLength=0, nGramLength=2, numFeatures=262144, outputCol='TextFeaturizer_312f3a2bf3bf_output', stopWords=None, toLowercase=True, tokenizerGaps=True, tokenizerPattern='\\s+', useIDF=True, useNGram=False, useStopWordsRemover=False, useTokenizer=True)[source]

Bases: synapse.ml.core.schema.Utils.ComplexParamsMixin, pyspark.ml.util.JavaMLReadable, pyspark.ml.util.JavaMLWritable, pyspark.ml.wrapper.JavaEstimator

Parameters
  • binary (bool) – If true, all nonzero word counts are set to 1

  • caseSensitiveStopWords (bool) – Whether to do a case sensitive comparison over the stop words

  • defaultStopWordLanguage (object) – Which language to use for the stop word remover, set this to custom to use the stopWords input

  • inputCol (object) – The name of the input column

  • minDocFreq (int) – The minimum number of documents in which a term should appear.

  • minTokenLength (int) – Minimum token length, >= 0.

  • nGramLength (int) – The size of the Ngrams

  • numFeatures (int) – Set the number of features to hash each document to

  • outputCol (object) – The name of the output column

  • stopWords (object) – The words to be filtered out.

  • toLowercase (bool) – Indicates whether to convert all characters to lowercase before tokenizing.

  • tokenizerGaps (bool) – Indicates whether regex splits on gaps (true) or matches tokens (false).

  • tokenizerPattern (object) – Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.

  • useIDF (bool) – Whether to scale the Term Frequencies by IDF

  • useNGram (bool) – Whether to enumerate N grams

  • useStopWordsRemover (bool) – Whether to remove stop words from tokenized data

  • useTokenizer (bool) – Whether to tokenize the input
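
Example

A minimal usage sketch, assuming a running SparkSession named spark with SynapseML installed; the sample data and the column names "text" and "features" are illustrative. Because TextFeaturizer is an estimator, it is fit first and the resulting model is then used to transform:

  from synapse.ml.featurize.text.TextFeaturizer import TextFeaturizer

  df = spark.createDataFrame(
      [("hello SynapseML",), ("text featurization on Spark",)], ["text"]
  )

  # With the defaults, the input is tokenized, hashed to term frequencies,
  # and rescaled by IDF, producing one numeric feature vector per document.
  featurizer = (
      TextFeaturizer()
      .setInputCol("text")
      .setOutputCol("features")
      .setUseStopWordsRemover(True)
  )

  model = featurizer.fit(df)
  model.transform(df).select("features").show(truncate=False)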

binary = Param(parent='undefined', name='binary', doc='If true, all nonzero word counts are set to 1')
caseSensitiveStopWords = Param(parent='undefined', name='caseSensitiveStopWords', doc='Whether to do a case sensitive comparison over the stop words')
defaultStopWordLanguage = Param(parent='undefined', name='defaultStopWordLanguage', doc='Which language to use for the stop word remover, set this to custom to use the stopWords input')
getBinary()[source]
Returns

If true, all nonzero word counts are set to 1

Return type

binary

getCaseSensitiveStopWords()[source]
Returns

Whether to do a case sensitive comparison over the stop words

Return type

caseSensitiveStopWords

getDefaultStopWordLanguage()[source]
Returns

Which language to use for the stop word remover, set this to custom to use the stopWords input

Return type

defaultStopWordLanguage

getInputCol()[source]
Returns

The name of the input column

Return type

inputCol

static getJavaPackage()[source]

Returns package name String.

getMinDocFreq()[source]
Returns

The minimum number of documents in which a term should appear.

Return type

minDocFreq

getMinTokenLength()[source]
Returns

Minimum token length, >= 0.

Return type

minTokenLength

getNGramLength()[source]
Returns

The size of the Ngrams

Return type

nGramLength

getNumFeatures()[source]
Returns

Set the number of features to hash each document to

Return type

numFeatures

getOutputCol()[source]
Returns

The name of the output column

Return type

outputCol

getStopWords()[source]
Returns

The words to be filtered out.

Return type

stopWords

getToLowercase()[source]
Returns

Indicates whether to convert all characters to lowercase before tokenizing.

Return type

toLowercase

getTokenizerGaps()[source]
Returns

Indicates whether regex splits on gaps (true) or matches tokens (false).

Return type

tokenizerGaps

getTokenizerPattern()[source]
Returns

Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.

Return type

tokenizerPattern

getUseIDF()[source]
Returns

Whether to scale the Term Frequencies by IDF

Return type

useIDF

getUseNGram()[source]
Returns

Whether to enumerate N grams

Return type

useNGram

getUseStopWordsRemover()[source]
Returns

Whether to remove stop words from tokenized data

Return type

useStopWordsRemover

getUseTokenizer()[source]
Returns

Whether to tokenize the input

Return type

useTokenizer

inputCol = Param(parent='undefined', name='inputCol', doc='The name of the input column')
minDocFreq = Param(parent='undefined', name='minDocFreq', doc='The minimum number of documents in which a term should appear.')
minTokenLength = Param(parent='undefined', name='minTokenLength', doc='Minimum token length, >= 0.')
nGramLength = Param(parent='undefined', name='nGramLength', doc='The size of the Ngrams')
numFeatures = Param(parent='undefined', name='numFeatures', doc='Set the number of features to hash each document to')
outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
classmethod read()[source]

Returns an MLReader instance for this class.

setBinary(value)[source]
Parameters

binary – If true, all nonzero word counts are set to 1

setCaseSensitiveStopWords(value)[source]
Parameters

caseSensitiveStopWords – Whether to do a case sensitive comparison over the stop words

setDefaultStopWordLanguage(value)[source]
Parameters

defaultStopWordLanguage – Which language to use for the stop word remover, set this to custom to use the stopWords input

setInputCol(value)[source]
Parameters

inputCol – The name of the input column

setMinDocFreq(value)[source]
Parameters

minDocFreq – The minimum number of documents in which a term should appear.

setMinTokenLength(value)[source]
Parameters

minTokenLength – Minimum token length, >= 0.

setNGramLength(value)[source]
Parameters

nGramLength – The size of the Ngrams

setNumFeatures(value)[source]
Parameters

numFeatures – Set the number of features to hash each document to

setOutputCol(value)[source]
Parameters

outputCol – The name of the output column

setParams(binary=False, caseSensitiveStopWords=False, defaultStopWordLanguage='english', inputCol=None, minDocFreq=1, minTokenLength=0, nGramLength=2, numFeatures=262144, outputCol='TextFeaturizer_312f3a2bf3bf_output', stopWords=None, toLowercase=True, tokenizerGaps=True, tokenizerPattern='\\s+', useIDF=True, useNGram=False, useStopWordsRemover=False, useTokenizer=True)[source]

Set the (keyword-only) parameters

setStopWords(value)[source]
Parameters

stopWords – The words to be filtered out.

setToLowercase(value)[source]
Parameters

toLowercase – Indicates whether to convert all characters to lowercase before tokenizing.

setTokenizerGaps(value)[source]
Parameters

tokenizerGaps – Indicates whether regex splits on gaps (true) or matches tokens (false).

setTokenizerPattern(value)[source]
Parameters

tokenizerPattern – Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.

setUseIDF(value)[source]
Parameters

useIDF – Whether to scale the Term Frequencies by IDF

setUseNGram(value)[source]
Parameters

useNGram – Whether to enumerate N grams

setUseStopWordsRemover(value)[source]
Parameters

useStopWordsRemover – Whether to remove stop words from tokenized data

setUseTokenizer(value)[source]
Parameters

useTokenizer – Whether to tokenize the input

stopWords = Param(parent='undefined', name='stopWords', doc='The words to be filtered out.')
toLowercase = Param(parent='undefined', name='toLowercase', doc='Indicates whether to convert all characters to lowercase before tokenizing.')
tokenizerGaps = Param(parent='undefined', name='tokenizerGaps', doc='Indicates whether regex splits on gaps (true) or matches tokens (false).')
tokenizerPattern = Param(parent='undefined', name='tokenizerPattern', doc='Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.')
useIDF = Param(parent='undefined', name='useIDF', doc='Whether to scale the Term Frequencies by IDF')
useNGram = Param(parent='undefined', name='useNGram', doc='Whether to enumerate N grams')
useStopWordsRemover = Param(parent='undefined', name='useStopWordsRemover', doc='Whether to remove stop words from tokenized data')
useTokenizer = Param(parent='undefined', name='useTokenizer', doc='Whether to tokenize the input')

Module contents

SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.

SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services backed by your Spark cluster.

SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.