synapse.ml.cyber.utils package

Submodules

synapse.ml.cyber.utils.spark_utils module

class synapse.ml.cyber.utils.spark_utils.DataFrameUtils

Bases: object

Extension methods for Spark DataFrames

static get_spark_session(df: pyspark.sql.dataframe.DataFrame) → pyspark.sql.session.SparkSession

Get the Spark session associated with the given dataframe.

Parameters

df (DataFrame) – the dataframe whose Spark session we want to get
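
A minimal usage sketch (the input data below is hypothetical):

    from pyspark.sql import SparkSession
    from synapse.ml.cyber.utils.spark_utils import DataFrameUtils

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

    # Recover the session from the dataframe alone, e.g. inside a helper
    # that receives only the dataframe as an argument.
    session = DataFrameUtils.get_spark_session(df)
    print(session.version)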

static make_empty(df: pyspark.sql.dataframe.DataFrame) → pyspark.sql.dataframe.DataFrame

Make an empty dataframe with the same schema as the given one.

Parameters

df (DataFrame) – the dataframe whose schema we wish to use

Returns

an empty dataframe with the same schema
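
A minimal usage sketch (hypothetical data):

    from pyspark.sql import SparkSession
    from synapse.ml.cyber.utils.spark_utils import DataFrameUtils

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

    empty = DataFrameUtils.make_empty(df)
    empty.printSchema()   # same schema as df
    print(empty.count())  # 0 rows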

static zip_with_index(df: pyspark.sql.dataframe.DataFrame, start_index: int = 0, col_name: str = 'rowId', partition_col: Union[List[str], str] = [], order_by_col: Union[List[str], str] = []) → pyspark.sql.dataframe.DataFrame

Add an index column to the given dataframe.

Parameters
  • df (DataFrame) – the dataframe to add the index to

  • start_index (int) – the value to start the count from

  • col_name (str) – the name of the index column, which is added as the last column of the output dataframe

  • partition_col (Union[List[str], str]) – optional column name or list of column names defining partitions within which indices are assigned independently, e.g., sequential indices assigned separately to each distinct tenant, as in the sketch after this list

  • order_by_col (Union[List[str], str]) – optional column name or list of column names used to sort the dataframe (or each partition) before indexing
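
For example, the following sketch assigns sequential row ids separately within each tenant, ordered by user name (the data and column names are hypothetical):

    from pyspark.sql import SparkSession
    from synapse.ml.cyber.utils.spark_utils import DataFrameUtils

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [("t1", "bob"), ("t1", "alice"), ("t2", "carol")],
        ["tenant", "user"],
    )

    # 'rowId' is appended as the last column; counting restarts at 0
    # for each distinct tenant, in order of user name.
    indexed = DataFrameUtils.zip_with_index(
        df,
        start_index=0,
        col_name="rowId",
        partition_col="tenant",
        order_by_col="user",
    )
    indexed.show()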

class synapse.ml.cyber.utils.spark_utils.ExplainBuilder

Bases: object

static build(explainable: Any, **kwargs)
static copy_params(from_explainable: Any, to_explainable: Any)
static get_method(the_explainable, the_method_name)
static get_methods(the_explainable)
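
These methods are listed without docstrings. The following is a hedged sketch of the apparent usage pattern, assuming build wires each keyword argument into the matching pyspark.ml Param declared on the explainable; MyStage and its parameter are hypothetical:

    from pyspark.ml.param.shared import HasInputCol
    from synapse.ml.cyber.utils.spark_utils import ExplainBuilder

    class MyStage(HasInputCol):
        # HasInputCol declares the 'inputCol' Param and its getInputCol getter.
        def __init__(self, input_col: str):
            super().__init__()
            # Assumption: build sets each keyword argument on the Param of
            # the same name, so getInputCol() returns the value passed here.
            ExplainBuilder.build(self, inputCol=input_col)

    stage = MyStage("score")
    print(stage.getInputCol())  # 'score'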

Module contents

SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.

SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services backed by your Spark cluster.

SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.