mmlspark.core.schema package

Submodules

mmlspark.core.schema.TypeConversionUtils module

mmlspark.core.schema.TypeConversionUtils.complexTypeConverter(name, value, cache)[source]

Type conversion for complex types

Parameters
  • name

  • value

  • cache

Returns

Return type

_java_obj

mmlspark.core.schema.TypeConversionUtils.generateTypeConverter(name, cache, typeConverter)[source]

Generate a type converter for the named parameter

Parameters
  • name (str) –

  • cache

  • typeConverter

Returns

Function to convert the type

Return type

lambda
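Since the entry above documents only that generateTypeConverter returns a lambda, here is a hypothetical, simplified sketch (not the actual mmlspark source) of the pattern the signature suggests: wrap a basic typeConverter in a function that converts a value and records the result in the cache under the parameter's name. The parameter name "maxDepth" below is purely illustrative.

```python
def generateTypeConverter(name, cache, typeConverter):
    """Simplified sketch: return a converter that also memoises its result."""
    def converter(value):
        converted = typeConverter(value)
        cache[name] = converted  # remember the converted value for this parameter
        return converted
    return converter

cache = {}
to_int = generateTypeConverter("maxDepth", cache, int)
to_int("5")  # converts "5" to 5 and stores it in cache["maxDepth"]
```

The real implementation may differ in how the cache is keyed and what it stores; this sketch only illustrates the closure-returning shape the documented return type ("lambda") implies.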

mmlspark.core.schema.Utils module

class mmlspark.core.schema.Utils.ComplexParamsMixin[source]

Bases: pyspark.ml.util.MLReadable

class mmlspark.core.schema.Utils.JavaMMLReadable[source]

Bases: pyspark.ml.util.MLReadable

(Private) Mixin for instances that provide JavaMLReader.

classmethod read()[source]

Returns an MLReader instance for this class.
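To illustrate the read()/load() pattern this mixin provides, here is a hedged, pure-Python mimic (the names Reader and MyModel are hypothetical, and the real JavaMMLReader deserialises from a path via the JVM rather than constructing a fresh instance):

```python
class Reader:
    """Stand-in for an MLReader bound to a specific class."""
    def __init__(self, clazz):
        self.clazz = clazz

    def load(self, path):
        # The real reader would deserialise a saved model from `path`;
        # this sketch just constructs an empty instance.
        return self.clazz()

class JavaMMLReadable:
    @classmethod
    def read(cls):
        # read() returns a reader instance bound to the calling class.
        return Reader(cls)

class MyModel(JavaMMLReadable):
    pass

model = MyModel.read().load("/tmp/model")  # a MyModel instance
```

In actual use with MMLSpark, the call shape is the same: SomeModel.read().load(path), matching the standard Spark ML persistence API.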

class mmlspark.core.schema.Utils.JavaMMLReader(clazz)[source]

Bases: pyspark.ml.util.JavaMLReader

(Private) Specialization of MLReader for JavaParams types

mmlspark.core.schema.Utils.from_java(java_stage, stage_name)[source]

Given a Java object, create and return a Python wrapper of it. Used for ML persistence. Meta-algorithms such as Pipeline should override this method as a classmethod.

Parameters
  • java_stage (JavaObject) –

  • stage_name (str) –

Returns

The Python wrapper

Return type

object
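As a rough sketch of what from_java conceptually does, the snippet below resolves a dotted Python class path and constructs the wrapper. This is a hypothetical simplification: the real implementation also transfers parameter values from the Java object, which is omitted here, and the use of collections.OrderedDict is only a stand-in for a wrapper class.

```python
import importlib

def from_java(java_stage, stage_name):
    """Simplified sketch: resolve the Python class named by `stage_name`
    and construct it as a wrapper for `java_stage`."""
    module_name, class_name = stage_name.rsplit(".", 1)
    py_type = getattr(importlib.import_module(module_name), class_name)
    # Real code would initialise the wrapper from java_stage's params.
    return py_type()

# Illustrative call with a stdlib class standing in for a wrapper:
obj = from_java(None, "collections.OrderedDict")
```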

Module contents

MMLSpark is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. MMLSpark adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful, highly scalable predictive and analytical models for a variety of data sources.

MMLSpark also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, MMLSpark provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services, backed by your Spark cluster.

MMLSpark requires Scala 2.11, Spark 2.4+, and Python 3.5+.