To see MLServer in action, check out the examples below. These are end-to-end notebooks that show how to serve models with MLServer.
If you are interested in how MLServer interacts with particular model frameworks, check out the following examples, which showcase the different inference runtimes that ship with MLServer out of the box. Note that for advanced use cases you can also write your own custom inference runtime (see the example below on custom models).
To see some of the advanced features included in MLServer (e.g. multi-model serving), check out the examples below.