mmlspark.io.binary package

Submodules

mmlspark.io.binary.BinaryFileReader module

mmlspark.io.binary.BinaryFileReader.BinaryFileFields = ['path', 'bytes']

Names of the fields in the Binary File Schema.

mmlspark.io.binary.BinaryFileReader.BinaryFileSchema = StructType(List(StructField(path,StringType,true),StructField(bytes,BinaryType,true)))

Schema for Binary Files.

Schema records consist of the BinaryFileFields: path (StringType) and bytes (BinaryType), both nullable.
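As a plain-Python sketch of the schema above (the real object is a pyspark.sql.types.StructType; this model is an illustration, not library code), each field can be represented as a (name, type, nullable) tuple:

```python
# Plain-Python model of BinaryFileSchema; the real object is a
# pyspark.sql.types.StructType built from StructField entries.
BinaryFileFields = ["path", "bytes"]

# (field name, Spark type name, nullable) -- mirrors the StructType above
BinaryFileSchemaFields = [
    ("path", "StringType", True),   # file location as a string
    ("bytes", "BinaryType", True),  # raw file contents
]

# The field-name list and the schema stay in sync by construction.
assert [name for name, _, _ in BinaryFileSchemaFields] == BinaryFileFields
```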

mmlspark.io.binary.BinaryFileReader.isBinaryFile(df, column)[source]

Returns True if the column contains binary files

Parameters
  • df (DataFrame) – The DataFrame to be processed

  • column (str) – The name of the column being inspected

Returns

True if the column is a binary file column

Return type

bool
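A minimal sketch of the kind of check isBinaryFile performs (an illustration, not the library's implementation): a column holds binary file contents when its Spark type is BinaryType. Here the DataFrame schema is modeled as a plain dict from column name to type name, whereas the real function inspects a pyspark DataFrame:

```python
def is_binary_column(schema, column):
    """Sketch of an isBinaryFile-style check (not the library code):
    a column holds binary file contents when its type is BinaryType.

    `schema` is modeled as a dict mapping column name -> Spark type name;
    the real function inspects a pyspark DataFrame's schema instead.
    """
    return schema.get(column) == "BinaryType"

# Example: a schema matching BinaryFileSchema
schema = {"path": "StringType", "bytes": "BinaryType"}
print(is_binary_column(schema, "bytes"))  # True
print(is_binary_column(schema, "path"))   # False
```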

mmlspark.io.binary.BinaryFileReader.readBinaryFiles(self, path, recursive=False, sampleRatio=1.0, inspectZip=True, seed=0)[source]

Reads a directory of binary files from a local or remote (WASB) source. This function is attached to the SparkSession class.

Example

>>> spark.readBinaryFiles(path, recursive, sampleRatio = 1.0, inspectZip = True)
Parameters
  • path (str) – Path to the file directory

  • recursive (bool) – Whether to search subdirectories recursively

  • sampleRatio (double) – Fraction of the files loaded into the dataframe

  • inspectZip (bool) – Whether to inspect the contents of zip files

  • seed (int) – Seed used for sampling

Returns

DataFrame with a single column “value”; see BinaryFileSchema for details

Return type

DataFrame
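The sampleRatio and seed parameters suggest a seeded fractional sample of the input files. As an illustrative sketch only (not the library's implementation; the paths below are made up), seeded per-file sampling can look like this:

```python
import random

def sample_paths(paths, sample_ratio=1.0, seed=0):
    """Illustrative sketch (not the library code) of seeded fractional
    sampling, as suggested by readBinaryFiles' sampleRatio and seed
    parameters: each file is kept independently with probability
    sample_ratio, and the seed makes the selection reproducible."""
    rng = random.Random(seed)
    return [p for p in paths if rng.random() < sample_ratio]

# Hypothetical WASB paths, for illustration only
paths = [f"wasb://container/file{i}.bin" for i in range(10)]
print(sample_paths(paths, sample_ratio=1.0))       # ratio 1.0 keeps every file
subset = sample_paths(paths, sample_ratio=0.5, seed=7)
print(len(subset) <= len(paths))                   # True: a subset of the input
```

The same seed always yields the same subset, which is why reproducible sampling exposes a seed parameter at all.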

mmlspark.io.binary.BinaryFileReader.streamBinaryFiles(self, path, sampleRatio=1.0, inspectZip=True, seed=0)[source]

Streams a directory of binary files from a local or remote (WASB) source. This function is attached to the SparkSession class.

Example

>>> spark.streamBinaryFiles(path, sampleRatio = 1.0, inspectZip = True)
Parameters
  • path (str) – Path to the file directory

  • sampleRatio (double) – Fraction of the files loaded into the dataframe

  • inspectZip (bool) – Whether to inspect the contents of zip files

  • seed (int) – Seed used for sampling

Returns

DataFrame with a single column “value”; see BinaryFileSchema for details

Return type

DataFrame

Module contents

MMLSpark is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. MMLSpark adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.

MMLSpark also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, MMLSpark provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services, backed by your Spark cluster.

MMLSpark requires Scala 2.11, Spark 2.4+, and Python 3.5+.