synapse.ml.featurize.text package
Submodules
synapse.ml.featurize.text.MultiNGram module
- class synapse.ml.featurize.text.MultiNGram.MultiNGram(java_obj=None, inputCol=None, lengths=None, outputCol='MultiNGram_e8542b60ab40_output')[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
- getLengths()[source]
- Returns
the collection of lengths to use for ngram extraction
- Return type
lengths
- inputCol = Param(parent='undefined', name='inputCol', doc='The name of the input column')
- lengths = Param(parent='undefined', name='lengths', doc='the collection of lengths to use for ngram extraction')
- outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
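Example: a minimal usage sketch, assuming SynapseML is installed on the cluster; the data and the column names ("text", "tokens", "ngrams") are hypothetical. MultiNGram expects an array-of-strings input column, such as the output of a Tokenizer, and emits the n-grams of every requested length into a single output column:
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import Tokenizer
    from synapse.ml.featurize.text import MultiNGram

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [("the quick brown fox jumps over the lazy dog",)], ["text"]
    )

    # MultiNGram consumes an array-of-strings column, so tokenize first.
    tokens = Tokenizer(inputCol="text", outputCol="tokens").transform(df)

    # Extract unigrams, bigrams, and trigrams in a single pass.
    ngrams = MultiNGram(inputCol="tokens", outputCol="ngrams", lengths=[1, 2, 3])
    ngrams.transform(tokens).select("ngrams").show(truncate=False)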
synapse.ml.featurize.text.PageSplitter module
- class synapse.ml.featurize.text.PageSplitter.PageSplitter(java_obj=None, boundaryRegex='\\s', inputCol=None, maximumPageLength=5000, minimumPageLength=4500, outputCol='PageSplitter_8b77c68ea8c8_output')[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
- boundaryRegex = Param(parent='undefined', name='boundaryRegex', doc='how to split into words')
- getMaximumPageLength()[source]
- Returns
the maximum number of characters to be in a page
- Return type
maximumPageLength
- getMinimumPageLength()[source]
- Returns
the minimum number of characters to have on a page in order to preserve word boundaries
- Return type
minimumPageLength
- inputCol = Param(parent='undefined', name='inputCol', doc='The name of the input column')
- maximumPageLength = Param(parent='undefined', name='maximumPageLength', doc='the maximum number of characters to be in a page')
- minimumPageLength = Param(parent='undefined', name='minimumPageLength', doc='the minimum number of characters to have on a page in order to preserve word boundaries')
- outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
- setMaximumPageLength(value)[source]
- Parameters
maximumPageLength – the maximum number of characters to be in a page
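Example: a minimal sketch with hypothetical data, assuming the usual generated set<Param> setters (setMaximumPageLength is documented above; setMinimumPageLength is assumed to follow the same pattern). PageSplitter chunks a long string column into pages of bounded length, breaking on the boundary regex so that word boundaries are preserved:
    from pyspark.sql import SparkSession
    from synapse.ml.featurize.text import PageSplitter

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [("this is a long run of text that should be broken into small pages",)],
        ["text"],
    )

    # Pages hold at most 20 characters and, where possible, at least 10,
    # with breaks placed on whitespace (the default boundaryRegex, '\\s').
    splitter = (
        PageSplitter()
        .setInputCol("text")
        .setOutputCol("pages")
        .setMinimumPageLength(10)  # assumed setter, mirrors setMaximumPageLength
        .setMaximumPageLength(20)
    )
    splitter.transform(df).select("pages").show(truncate=False)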
synapse.ml.featurize.text.TextFeaturizer module
- class synapse.ml.featurize.text.TextFeaturizer.TextFeaturizer(java_obj=None, binary=False, caseSensitiveStopWords=False, defaultStopWordLanguage='english', inputCol=None, minDocFreq=1, minTokenLength=0, nGramLength=2, numFeatures=262144, outputCol='TextFeaturizer_78e5a42b98f0_output', stopWords=None, toLowercase=True, tokenizerGaps=True, tokenizerPattern='\\s+', useIDF=True, useNGram=False, useStopWordsRemover=False, useTokenizer=True)[source]
Bases: pyspark.ml.util.MLReadable[pyspark.ml.util.RL]
- Parameters
binary (bool) – If true, all nonnegative word counts are set to 1
caseSensitiveStopWords (bool) – Whether to do a case sensitive comparison over the stop words
defaultStopWordLanguage (str) – Which language to use for the stop word remover, set this to custom to use the stopWords input
minDocFreq (int) – The minimum number of documents in which a term should appear.
numFeatures (int) – Set the number of features to hash each document to
toLowercase (bool) – Indicates whether to convert all characters to lowercase before tokenizing.
tokenizerGaps (bool) – Indicates whether regex splits on gaps (true) or matches tokens (false).
tokenizerPattern (str) – Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.
useIDF (bool) – Whether to scale the Term Frequencies by IDF
useStopWordsRemover (bool) – Whether to remove stop words from tokenized data
- binary = Param(parent='undefined', name='binary', doc='If true, all nonnegative word counts are set to 1')
- caseSensitiveStopWords = Param(parent='undefined', name='caseSensitiveStopWords', doc='Whether to do a case sensitive comparison over the stop words')
- defaultStopWordLanguage = Param(parent='undefined', name='defaultStopWordLanguage', doc='Which language to use for the stop word remover, set this to custom to use the stopWords input')
- getCaseSensitiveStopWords()[source]
- Returns
Whether to do a case sensitive comparison over the stop words
- Return type
caseSensitiveStopWords
- getDefaultStopWordLanguage()[source]
- Returns
Which language to use for the stop word remover, set this to custom to use the stopWords input
- Return type
defaultStopWordLanguage
- getMinDocFreq()[source]
- Returns
The minimum number of documents in which a term should appear.
- Return type
minDocFreq
- getNumFeatures()[source]
- Returns
Set the number of features to hash each document to
- Return type
numFeatures
- getToLowercase()[source]
- Returns
Indicates whether to convert all characters to lowercase before tokenizing.
- Return type
toLowercase
- getTokenizerGaps()[source]
- Returns
Indicates whether regex splits on gaps (true) or matches tokens (false).
- Return type
tokenizerGaps
- getTokenizerPattern()[source]
- Returns
Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.
- Return type
tokenizerPattern
- getUseStopWordsRemover()[source]
- Returns
Whether to remove stop words from tokenized data
- Return type
useStopWordsRemover
- inputCol = Param(parent='undefined', name='inputCol', doc='The name of the input column')
- minDocFreq = Param(parent='undefined', name='minDocFreq', doc='The minimum number of documents in which a term should appear.')
- minTokenLength = Param(parent='undefined', name='minTokenLength', doc='Minimum token length, >= 0.')
- nGramLength = Param(parent='undefined', name='nGramLength', doc='The size of the Ngrams')
- numFeatures = Param(parent='undefined', name='numFeatures', doc='Set the number of features to hash each document to')
- outputCol = Param(parent='undefined', name='outputCol', doc='The name of the output column')
- setCaseSensitiveStopWords(value)[source]
- Parameters
caseSensitiveStopWords – Whether to do a case sensitive comparison over the stop words
- setDefaultStopWordLanguage(value)[source]
- Parameters
defaultStopWordLanguage – Which language to use for the stop word remover, set this to custom to use the stopWords input
- setMinDocFreq(value)[source]
- Parameters
minDocFreq – The minimum number of documents in which a term should appear.
- setNumFeatures(value)[source]
- Parameters
numFeatures – Set the number of features to hash each document to
- setParams(binary=False, caseSensitiveStopWords=False, defaultStopWordLanguage='english', inputCol=None, minDocFreq=1, minTokenLength=0, nGramLength=2, numFeatures=262144, outputCol='TextFeaturizer_78e5a42b98f0_output', stopWords=None, toLowercase=True, tokenizerGaps=True, tokenizerPattern='\\s+', useIDF=True, useNGram=False, useStopWordsRemover=False, useTokenizer=True)[source]
Set the (keyword-only) parameters
- setToLowercase(value)[source]
- Parameters
toLowercase – Indicates whether to convert all characters to lowercase before tokenizing.
- setTokenizerGaps(value)[source]
- Parameters
tokenizerGaps – Indicates whether regex splits on gaps (true) or matches tokens (false).
- setTokenizerPattern(value)[source]
- Parameters
tokenizerPattern – Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.
- setUseStopWordsRemover(value)[source]
- Parameters
useStopWordsRemover – Whether to remove stop words from tokenized data
- stopWords = Param(parent='undefined', name='stopWords', doc='The words to be filtered out.')
- toLowercase = Param(parent='undefined', name='toLowercase', doc='Indicates whether to convert all characters to lowercase before tokenizing.')
- tokenizerGaps = Param(parent='undefined', name='tokenizerGaps', doc='Indicates whether regex splits on gaps (true) or matches tokens (false).')
- tokenizerPattern = Param(parent='undefined', name='tokenizerPattern', doc='Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.')
- useIDF = Param(parent='undefined', name='useIDF', doc='Whether to scale the Term Frequencies by IDF')
- useNGram = Param(parent='undefined', name='useNGram', doc='Whether to enumerate N grams')
- useStopWordsRemover = Param(parent='undefined', name='useStopWordsRemover', doc='Whether to remove stop words from tokenized data')
- useTokenizer = Param(parent='undefined', name='useTokenizer', doc='Whether to tokenize the input')
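Example: a minimal sketch with hypothetical data, assuming TextFeaturizer is fit like a standard Spark ML estimator (the useIDF scaling implies a fitting pass over the corpus) and that setUseNGram follows the same generated-setter pattern as the setters documented above. The featurizer tokenizes, optionally removes stop words, hashes term frequencies into numFeatures buckets, and rescales by IDF:
    from pyspark.sql import SparkSession
    from synapse.ml.featurize.text import TextFeaturizer

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [
            ("SynapseML makes distributed text featurization simple",),
            ("Spark pipelines scale text processing to large corpora",),
        ],
        ["text"],
    )

    featurizer = (
        TextFeaturizer()
        .setInputCol("text")
        .setOutputCol("features")
        .setUseStopWordsRemover(True)
        .setUseNGram(True)        # assumed setter; enumerates nGramLength-grams (default 2)
        .setNumFeatures(1 << 18)  # hash to 262144 buckets (the default)
    )
    # useIDF=True (the default) requires fitting before transforming.
    model = featurizer.fit(df)
    model.transform(df).select("features").show(truncate=False)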
Module contents
SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.
SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services, backed by your Spark cluster.
SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.