synapse.ml.hf package

Submodules

synapse.ml.hf.HuggingFaceCausalLMTransform module

class synapse.ml.hf.HuggingFaceCausalLMTransform.HuggingFaceCausalLM(modelName=None, inputCol=None, outputCol=None, task='chat', cachePath=None, deviceMap=None, torchDtype=None)[source]

Bases: Transformer, HasInputCol, HasOutputCol, DefaultParamsReadable, DefaultParamsWritable

cachePath = Param(parent='undefined', name='cachePath', doc='Cache path for the model: a location shared between the workers, such as a lakehouse path.')
deviceMap = Param(parent='undefined', name='deviceMap', doc="Specifies the device map model parameter; it can also be set with modelParam. Commonly used values are 'auto', 'cuda', and 'cpu'. Check your model's documentation for supported device maps.")
getBCObject()[source]
getCachePath()[source]
getDeviceMap()[source]
getInputCol()[source]

Gets the value of inputCol or its default value.

getModelConfig()[source]
getModelName()[source]
getModelParam()[source]
getOutputCol()[source]

Gets the value of outputCol or its default value.

getTask()[source]
getTorchDtype()[source]
inputCol: Param[str] = Param(parent='undefined', name='inputCol', doc='input column')
modelConfig = Param(parent='undefined', name='modelConfig', doc='Model configuration, passed to AutoModelForCausalLM.from_pretrained(). For more details, check https://huggingface.co/docs/transformers/en/model_doc/auto#transformers.AutoModelForCausalLM')
modelName = Param(parent='undefined', name='modelName', doc='Hugging Face causal LM model name')
modelParam = Param(parent='undefined', name='modelParam', doc='Model Parameters, passed to .generate(). For more details, check https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig')
outputCol: Param[str] = Param(parent='undefined', name='outputCol', doc='output column')
setCachePath(value)[source]
setDeviceMap(value)[source]
setInputCol(value)[source]
setModelConfig(**kwargs)[source]
setModelName(value)[source]
setModelParam(**kwargs)[source]
setOutputCol(value)[source]
setParams()[source]
setTask(value)[source]
setTorchDtype(value)[source]
task = Param(parent='undefined', name='task', doc="Specifies the task; either 'chat' or 'completion'.")
torchDtype = Param(parent='undefined', name='torchDtype', doc="Specifies the torch dtype model parameter; it can also be set with modelParam. The most commonly used value is 'auto'. Check your model's documentation for supported torch dtypes.")
synapse.ml.hf.HuggingFaceCausalLMTransform.broadcast_model(cachePath, modelConfig)[source]
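
A minimal usage sketch for HuggingFaceCausalLM (the model name, column names, and parameter values below are illustrative assumptions, not defaults of the transformer):

    from synapse.ml.hf.HuggingFaceCausalLMTransform import HuggingFaceCausalLM

    # Assumed example model and column names; any Hugging Face causal LM id should work.
    causal_lm = (
        HuggingFaceCausalLM()
        .setModelName("microsoft/Phi-3-mini-4k-instruct")  # hypothetical choice of model
        .setInputCol("messages")                           # prompt column; task defaults to 'chat'
        .setOutputCol("result")
        .setModelParam(max_new_tokens=100)                 # forwarded to .generate()
        .setModelConfig(trust_remote_code=True)            # forwarded to from_pretrained()
    )

    completed_df = causal_lm.transform(prompts_df)         # prompts_df: an existing Spark DataFrame

On a multi-worker cluster, setCachePath can point at a location shared between the workers (for example, a lakehouse path) so the model is loaded from a common cache rather than downloaded by each worker.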

synapse.ml.hf.HuggingFaceSentenceEmbedder module

class synapse.ml.hf.HuggingFaceSentenceEmbedder.HuggingFaceSentenceEmbedder(inputCol=None, outputCol=None, runtime=None, batchSize=None, modelName=None)[source]

Bases: Transformer, HasInputCol, HasOutputCol

Custom transformer that extends PySpark's Transformer class to perform sentence embedding using a model, with optional TensorRT acceleration.

BATCH_SIZE_DEFAULT = 64
NUM_OPT_ROWS = 100
batchSize = Param(parent='undefined', name='batchSize', doc='Batch size for embeddings')
getBatchSize()[source]
getModelName()[source]
getRuntime()[source]
modelName = Param(parent='undefined', name='modelName', doc='Full model name parameter')
runtime = Param(parent='undefined', name='runtime', doc='Specifies the runtime environment: cpu, cuda, or tensorrt')
setBatchSize(value)[source]
setModelName(value)[source]
setRowCount(row_count)[source]
setRuntime(value)[source]

Sets the runtime environment for the model. Supported values: 'cpu', 'cuda', 'tensorrt'.

transform(dataset, spark=None)[source]

Public method to transform the dataset.
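
A minimal usage sketch for HuggingFaceSentenceEmbedder (the model and column names are illustrative assumptions; the keyword arguments mirror the constructor signature above):

    from synapse.ml.hf.HuggingFaceSentenceEmbedder import HuggingFaceSentenceEmbedder

    # Assumed example model and column names.
    embedder = HuggingFaceSentenceEmbedder(
        inputCol="text",
        outputCol="embeddings",
        runtime="cpu",      # or 'cuda' / 'tensorrt' where the hardware supports it
        batchSize=64,       # BATCH_SIZE_DEFAULT is 64
        modelName="sentence-transformers/all-MiniLM-L6-v2",  # hypothetical choice of model
    )

    embedded_df = embedder.transform(sentences_df)  # sentences_df: an existing Spark DataFrame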

Module contents

SynapseML is an ecosystem of tools aimed at expanding the distributed computing framework Apache Spark in several new directions. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with Microsoft Cognitive Toolkit (CNTK), LightGBM and OpenCV. These tools enable powerful and highly scalable predictive and analytical models for a variety of data sources.

SynapseML also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, SynapseML provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services, backed by your Spark cluster.

SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.