synapse.ml.cyber.feature package
Submodules
synapse.ml.cyber.feature.indexers module
- class synapse.ml.cyber.feature.indexers.IdIndexer(input_col: str, partition_key: str, output_col: str, reset_per_partition: bool)[source]
Bases: pyspark.ml.base.Estimator, synapse.ml.cyber.utils.spark_utils.HasSetInputCol, synapse.ml.cyber.utils.spark_utils.HasSetOutputCol
- partitionKey = Param(parent='undefined', name='partitionKey', doc='The name of the column to partition by, so that indexing takes the partition into account; see reset_per_partition.')
- resetPerPartition = Param(parent='undefined', name='resetPerPartition', doc='When set to True, indexing is consecutive from [1..n] for each value of the partition column; when set to False, indexing is consecutive across all partition and column values.')
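The effect of reset_per_partition can be sketched in plain Python. This is an illustration of the indexing semantics described above, not the SynapseML implementation; the partition and value names are made up.

```python
# Illustrative sketch (not the SynapseML implementation) of how IdIndexer
# assigns consecutive integer ids, with and without reset_per_partition.

def index_ids(rows, reset_per_partition):
    """rows: list of (partition_key, value) pairs.
    Returns {(partition_key, value): id}, ids assigned in first-seen order."""
    mapping = {}
    counters = {}       # one counter per partition when reset_per_partition=True
    global_counter = 0  # a single shared counter otherwise
    for part, value in rows:
        key = (part, value)
        if key in mapping:
            continue
        if reset_per_partition:
            counters[part] = counters.get(part, 0) + 1
            mapping[key] = counters[part]
        else:
            global_counter += 1
            mapping[key] = global_counter
    return mapping

rows = [("tenantA", "alice"), ("tenantA", "bob"),
        ("tenantB", "alice"), ("tenantB", "carol")]

per_partition = index_ids(rows, reset_per_partition=True)   # tenantB restarts at 1
global_ids = index_ids(rows, reset_per_partition=False)     # ids run 1..4 overall
```

With reset_per_partition=True, "alice" receives id 1 in both tenantA and tenantB; with False, every (partition, value) pair gets a distinct id.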
- class synapse.ml.cyber.feature.indexers.IdIndexerModel(input_col: str, partition_key: str, output_col: str, vocab_df: pyspark.sql.dataframe.DataFrame)[source]
Bases: pyspark.ml.base.Transformer, synapse.ml.cyber.utils.spark_utils.HasSetInputCol, synapse.ml.cyber.utils.spark_utils.HasSetOutputCol
- partitionKey = Param(parent='undefined', name='partitionKey', doc='The name of the column to partition by, so that indexing takes the partition into account; see reset_per_partition.')
- class synapse.ml.cyber.feature.indexers.MultiIndexer(indexers: List[synapse.ml.cyber.feature.indexers.IdIndexer])[source]
Bases: pyspark.ml.base.Estimator
- class synapse.ml.cyber.feature.indexers.MultiIndexerModel(models: List[synapse.ml.cyber.feature.indexers.IdIndexerModel])[source]
Bases: pyspark.ml.base.Transformer
synapse.ml.cyber.feature.scalers module
- class synapse.ml.cyber.feature.scalers.LinearScalarScaler(input_col: str, partition_key: Optional[str], output_col: str, min_required_value: float = 0.0, max_required_value: float = 1.0, use_pandas: bool = True)[source]
Bases: synapse.ml.cyber.feature.scalers.PerPartitionScalarScalerEstimator
- maxRequiredValue = Param(parent='undefined', name='maxRequiredValue', doc='Scale the outputCol to have a value between [minRequiredValue, maxRequiredValue].')
- minRequiredValue = Param(parent='undefined', name='minRequiredValue', doc='Scale the outputCol to have a value between [minRequiredValue, maxRequiredValue].')
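The rescaling described by these two Params can be written out in plain Python: the observed [min, max] of the input column is mapped linearly onto [minRequiredValue, maxRequiredValue]. This is a sketch of the formula, not the SynapseML implementation, and the degenerate-partition behavior shown is an assumption.

```python
# Illustrative sketch (not the SynapseML implementation) of the linear
# rescaling LinearScalarScaler performs on a column of values.

def linear_scale(values, min_required=0.0, max_required=1.0):
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:  # assumed handling when all values are equal
        return [min_required for _ in values]
    return [(v - lo) / span * (max_required - min_required) + min_required
            for v in values]

scaled = linear_scale([10.0, 20.0, 30.0], min_required=0.0, max_required=1.0)
# observed min (10.0) maps to 0.0, observed max (30.0) maps to 1.0
```

When partition_key is set, the same mapping is computed within each partition rather than over the whole column.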
- class synapse.ml.cyber.feature.scalers.LinearScalarScalerConfig[source]
Bases: object
- max_actual_value_token = '__max_actual_value__'
- min_actual_value_token = '__min_actual_value__'
- class synapse.ml.cyber.feature.scalers.LinearScalarScalerModel(input_col: str, partition_key: Optional[str], output_col: str, per_group_stats: Union[pyspark.sql.dataframe.DataFrame, Dict[str, float]], min_required_value: float, max_required_value: float, use_pandas: bool = True)[source]
Bases: synapse.ml.cyber.feature.scalers.PerPartitionScalarScalerModel
- class synapse.ml.cyber.feature.scalers.PerPartitionScalarScalerEstimator(input_col: str, partition_key: Optional[str], output_col: str, use_pandas: bool = True)[source]
Bases: abc.ABC, pyspark.ml.base.Estimator, synapse.ml.cyber.utils.spark_utils.HasSetInputCol, synapse.ml.cyber.utils.spark_utils.HasSetOutputCol
- partitionKey = Param(parent='undefined', name='partitionKey', doc='The name of the column to partition by, i.e., scale the values of inputCol within each partition.')
- property use_pandas
- class synapse.ml.cyber.feature.scalers.PerPartitionScalarScalerModel(input_col: str, partition_key: Optional[str], output_col: str, per_group_stats: Union[pyspark.sql.dataframe.DataFrame, Dict[str, float]], use_pandas: bool = True)[source]
Bases: abc.ABC, pyspark.ml.base.Transformer, synapse.ml.cyber.utils.spark_utils.HasSetInputCol, synapse.ml.cyber.utils.spark_utils.HasSetOutputCol
- partitionKey = Param(parent='undefined', name='partitionKey', doc='The name of the column to partition by, i.e., scale the values of inputCol within each partition.')
- property per_group_stats
- property use_pandas
- class synapse.ml.cyber.feature.scalers.StandardScalarScaler(input_col: str, partition_key: Optional[str], output_col: str, coefficient_factor: float = 1.0, use_pandas: bool = True)[source]
Bases: synapse.ml.cyber.feature.scalers.PerPartitionScalarScalerEstimator
- coefficientFactor = Param(parent='undefined', name='coefficientFactor', doc='After scaling, the values of outputCol are multiplied by coefficientFactor (defaults to 1.0).')
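The per-partition standardization this estimator describes can be sketched in plain Python: z-score each value within its partition, then multiply by coefficient_factor. This illustration uses the population standard deviation for simplicity and is not the SynapseML implementation; the data is made up.

```python
import math

# Illustrative sketch (not the SynapseML implementation) of per-partition
# standard scaling: z-score within each partition, times coefficient_factor.

def standard_scale(rows, coefficient_factor=1.0):
    """rows: list of (partition_key, value); returns scaled values in order."""
    by_part = {}
    for part, v in rows:
        by_part.setdefault(part, []).append(v)
    # Per-partition mean and (population) standard deviation.
    stats = {}
    for part, vals in by_part.items():
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        stats[part] = (mean, std)
    out = []
    for part, v in rows:
        mean, std = stats[part]
        z = 0.0 if std == 0 else (v - mean) / std  # assumed zero-variance handling
        out.append(z * coefficient_factor)
    return out

scaled = standard_scale([("A", 1.0), ("A", 3.0), ("B", 5.0), ("B", 5.0)])
```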
- class synapse.ml.cyber.feature.scalers.StandardScalarScalerConfig[source]
Bases: object
The tokens used for the temporary representation of the mean and standard deviation.
- mean_token = '__mean__'
- std_token = '__std__'
- class synapse.ml.cyber.feature.scalers.StandardScalarScalerModel(input_col: str, partition_key: Optional[str], output_col: str, per_group_stats: Union[pyspark.sql.dataframe.DataFrame, Dict[str, float]], coefficient_factor: float = 1.0, use_pandas: bool = True)[source]
Bases: synapse.ml.cyber.feature.scalers.PerPartitionScalarScalerModel
- coefficientFactor = Param(parent='undefined', name='coefficientFactor', doc='After scaling, the values of outputCol are multiplied by coefficientFactor (defaults to 1.0).')
Module contents
SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.
SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services, backed by your Spark cluster.
SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.