# Alibi-Detect runtime for MLServer

This package provides an MLServer runtime compatible with Alibi Detect models.


You can install the `mlserver-alibi-detect` runtime, alongside `mlserver`, as:

```bash
pip install mlserver mlserver-alibi-detect
```

For further information on how to use MLServer with Alibi Detect, you can check out this worked-out example.

## Content Types

If no content type is present on the request or metadata, the Alibi Detect runtime will try to decode the payload as a NumPy array. To avoid this, either send a different content type explicitly, or define the correct one as part of your model's metadata.
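As a sketch of the second option, a content type can be declared on an input tensor in `model-settings.json` metadata; the tensor name, shape, and datatype below are illustrative placeholders, not part of this runtime's requirements:

```json
{
  "name": "drift-detector",
  "implementation": "mlserver_alibi_detect.AlibiDetectRuntime",
  "inputs": [
    {
      "name": "input-0",
      "datatype": "FP32",
      "shape": [-1, 4],
      "parameters": {
        "content_type": "np"
      }
    }
  ]
}
```

With this metadata in place, requests hitting `input-0` are decoded as NumPy arrays without the client having to set a content type on every request.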


## Settings

The Alibi Detect runtime exposes a couple of settings flags which can be used to customise how the runtime behaves. These settings can be added under the `parameters.extra` section of your `model-settings.json` file, e.g.

```json
{
  "name": "drift-detector",
  "implementation": "mlserver_alibi_detect.AlibiDetectRuntime",
  "parameters": {
    "uri": "./alibi-detect-artifact/",
    "extra": {
      "batch_size": 5
    }
  }
}
```

You can find the full reference of the accepted extra settings for the Alibi Detect runtime below:

### `mlserver_alibi_detect.runtime.AlibiDetectSettings`

*pydantic settings.* Parameters that apply only to Alibi Detect models.

- Environment variable prefix: `MLSERVER_MODEL_ALIBI_DETECT_`

**field** `batch_size: int | None = None`

Run the detector after accumulating a batch of size `batch_size`. Note that this is different from MLServer's adaptive batching: rather than being held until inference runs for the whole batch, the remaining requests just return an empty response.

**field** `predict_parameters: dict = {}`

Keyword parameters to pass to the detector's `predict` method.

**field** `state_save_freq: int | None = 100`

Save the detector state after every `state_save_freq` predictions. Only applicable to detectors with a `save_state` method.

- Constraint: `exclusiveMinimum = 0`
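To illustrate how these settings combine, here is a sketch of a `model-settings.json` that uses all three extra settings together; the specific values, and the `return_p_val` keyword (a common Alibi Detect drift-detector `predict` argument), are illustrative assumptions rather than required configuration:

```json
{
  "name": "drift-detector",
  "implementation": "mlserver_alibi_detect.AlibiDetectRuntime",
  "parameters": {
    "uri": "./alibi-detect-artifact/",
    "extra": {
      "batch_size": 50,
      "predict_parameters": {
        "return_p_val": true
      },
      "state_save_freq": 500
    }
  }
}
```

Under this configuration, the detector would only run once 50 requests have accumulated, forward `return_p_val=True` to its `predict` method, and (if it exposes `save_state`) persist its state every 500 predictions.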