Project post by Yufei Chen, Miao Hao, and Min Huang, Dragonfly project

This document walks through using Dragonfly with Triton Server. When models are downloaded, the files are large and many services download them at the same time, so the bandwidth of the storage reaches its limit and downloads become slow.

Diagram flow showing nodes in Triton Server in Cluster A and Cluster B to Model Registry

Dragonfly can be used to eliminate the bandwidth limit of the storage through P2P technology, thereby accelerating file downloading.

Diagram flow showing Cluster A and Cluster B Peer to Root Peer to Model Registry


By integrating the Dragonfly Repository Agent into Triton, download traffic is routed through Dragonfly to pull models stored in S3, OSS, GCS, and ABS, and the models are then registered in Triton. The Dragonfly Repository Agent lives in the dragonfly-repository-agent repository.


Prerequisites: Triton Server 23.08-py3

Notice: Kind is recommended if no Kubernetes cluster is available for testing.

Dragonfly Kubernetes Cluster Setup

For detailed installation documentation, please refer to quick-start-kubernetes.

Prepare Kubernetes Cluster

Create a kind multi-node cluster configuration file kind-config.yaml; the configuration content is as follows:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

Create a kind multi-node cluster using the configuration file:

kind create cluster --config kind-config.yaml

Switch the context of kubectl to kind cluster:

kubectl config use-context kind-kind

Kind loads dragonfly images

Pull dragonfly latest images:

docker pull dragonflyoss/scheduler:latest
docker pull dragonflyoss/manager:latest
docker pull dragonflyoss/dfdaemon:latest

Kind cluster loads dragonfly latest images:

kind load docker-image dragonflyoss/scheduler:latest
kind load docker-image dragonflyoss/manager:latest
kind load docker-image dragonflyoss/dfdaemon:latest

Create dragonfly cluster based on helm charts

Create the helm charts configuration file charts-config.yaml and set dfdaemon.config.proxy.proxies.regx to match the download path of the object storage. Example: add regx: .*models.* to match download requests from the object storage bucket models. The configuration content is as follows:

scheduler:
  image: dragonflyoss/scheduler
  tag: latest
  replicas: 1
  metrics:
    enable: true
  config:
    verbose: true
    pprofPort: 18066

seedPeer:
  image: dragonflyoss/dfdaemon
  tag: latest
  replicas: 1
  metrics:
    enable: true
  config:
    verbose: true
    pprofPort: 18066

dfdaemon:
  image: dragonflyoss/dfdaemon
  tag: latest
  metrics:
    enable: true
  config:
    verbose: true
    pprofPort: 18066
    proxy:
      defaultFilter: 'Expires&Signature&ns'
      security:
        insecure: true
        cacert: ''
        cert: ''
        key: ''
      tcpListen:
        namespace: ''
        port: 65001
      registryMirror:
        url: https://index.docker.io
        insecure: true
        certs: []
        direct: false
      proxies:
        - regx: blobs/sha256.*
        # Proxy all http download requests of model bucket path.
        - regx: .*models.*

manager:
  image: dragonflyoss/manager
  tag: latest
  replicas: 1
  metrics:
    enable: true
  config:
    verbose: true
    pprofPort: 18066

jaeger:
  enable: true
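The proxies rules are plain regular expressions, so a candidate model URL can be checked against regx: .*models.* locally before deploying. A minimal sketch using grep (the URL is a made-up example; grep's ERE syntax is close enough to the Go regexp syntax Dragonfly uses for a simple pattern like this):

```shell
# Hypothetical model URL stored in a bucket named "models".
url="http://minio.example.com/models/densenet_onnx/1/model.onnx"

# Test the URL against the same pattern as the proxy rule's regx.
if echo "$url" | grep -Eq '.*models.*'; then
  echo "matched: download request would be proxied by Dragonfly"
fi
```

Running this prints `matched: download request would be proxied by Dragonfly`, confirming the rule would capture the bucket path.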

Create a dragonfly cluster using the configuration file:

$ helm repo add dragonfly https://dragonflyoss.github.io/helm-charts/
$ helm install --wait --create-namespace --namespace dragonfly-system dragonfly dragonfly/dragonfly -f charts-config.yaml
LAST DEPLOYED: Wed Nov 29 21:23:48 2023
NAMESPACE: dragonfly-system
STATUS: deployed
1. Get the scheduler address by running these commands:
  export SCHEDULER_POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component=scheduler" -o jsonpath={.items[0].metadata.name})
  export SCHEDULER_CONTAINER_PORT=$(kubectl get pod --namespace dragonfly-system $SCHEDULER_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  kubectl --namespace dragonfly-system port-forward $SCHEDULER_POD_NAME 8002:$SCHEDULER_CONTAINER_PORT
  echo "Visit http://127.0.0.1:8002 to use your scheduler"

2. Get the dfdaemon port by running these commands:
  export DFDAEMON_POD_NAME=$(kubectl get pods --namespace dragonfly-system -l "app=dragonfly,release=dragonfly,component=dfdaemon" -o jsonpath={.items[0].metadata.name})
  export DFDAEMON_CONTAINER_PORT=$(kubectl get pod --namespace dragonfly-system $DFDAEMON_POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  You can use $DFDAEMON_CONTAINER_PORT as a proxy port in Node.

3. Configure runtime to use dragonfly:

4. Get Jaeger query URL by running these commands:
  export JAEGER_QUERY_PORT=$(kubectl --namespace dragonfly-system get services dragonfly-jaeger-query -o jsonpath="{.spec.ports[0].port}")
  kubectl --namespace dragonfly-system port-forward service/dragonfly-jaeger-query 16686:$JAEGER_QUERY_PORT
  echo "Visit http://127.0.0.1:16686 to query download events"

Check that dragonfly is deployed successfully:

$ kubectl get pods -n dragonfly-system
NAME                                 READY   STATUS    RESTARTS       AGE
dragonfly-dfdaemon-8qcpd             1/1     Running   4 (118s ago)   2m45s
dragonfly-dfdaemon-qhkn8             1/1     Running   4 (108s ago)   2m45s
dragonfly-jaeger-6c44dc44b9-dfjfv    1/1     Running   0              2m45s
dragonfly-manager-549cd546b9-ps5tf   1/1     Running   0              2m45s
dragonfly-mysql-0                    1/1     Running   0              2m45s
dragonfly-redis-master-0             1/1     Running   0              2m45s
dragonfly-redis-replicas-0           1/1     Running   0              2m45s
dragonfly-redis-replicas-1           1/1     Running   0              2m7s
dragonfly-redis-replicas-2           1/1     Running   0              101s
dragonfly-scheduler-0                1/1     Running   0              2m45s
dragonfly-seed-peer-0                1/1     Running   1 (52s ago)    2m45s

Expose the Proxy service port

Create the dfstore.yaml configuration file to expose the port on which the Dragonfly Peer's HTTP proxy listens. The default port is 65001; set targetPort to 65001.

kind: Service
apiVersion: v1
metadata:
  name: dfstore
spec:
  selector:
    app: dragonfly
    component: dfdaemon
    release: dragonfly
  ports:
    - protocol: TCP
      port: 65001
      targetPort: 65001
  type: NodePort

Create service:

kubectl --namespace dragonfly-system apply -f dfstore.yaml

Forward requests to the Dragonfly Peer's HTTP proxy:

kubectl --namespace dragonfly-system port-forward service/dfstore 65001:65001

Install Dragonfly Repository Agent

Set Dragonfly Repository Agent configuration

Create the dragonfly_config.json configuration file; the configuration is as follows:

{
  "proxy": "http://127.0.0.1:65001",
  "header": {},
  "filter": [
    "X-Amz-Algorithm",
    "X-Amz-Credential",
    "X-Amz-Date",
    "X-Amz-Expires",
    "X-Amz-SignedHeaders",
    "X-Amz-Signature"
  ]
}

In the filter of the configuration, set different values when using different object storage:

S3: ["X-Amz-Algorithm", "X-Amz-Credential", "X-Amz-Date", "X-Amz-Expires", "X-Amz-SignedHeaders", "X-Amz-Signature"]
OBS: ["X-Amz-Algorithm", "X-Amz-Credential", "X-Amz-Date", "X-Obs-Date", "X-Amz-Expires", "X-Amz-SignedHeaders", "X-Amz-Signature"]
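For example, when models are pulled from OBS, the dragonfly_config.json would carry the OBS filter list (the proxy address below assumes the port-forward to 65001 set up earlier):

```json
{
  "proxy": "http://127.0.0.1:65001",
  "header": {},
  "filter": [
    "X-Amz-Algorithm",
    "X-Amz-Credential",
    "X-Amz-Date",
    "X-Obs-Date",
    "X-Amz-Expires",
    "X-Amz-SignedHeaders",
    "X-Amz-Signature"
  ]
}
```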

Set Model Repository configuration

Create the cloud_credential.json cloud storage credential file; the configuration is as follows:

{
  "gs": {
    "gs://gcs-bucket-002": "PATH_TO_GOOGLE_APPLICATION_CREDENTIALS_2"
  },
  "s3": {
    "": {
      "secret_key": "AWS_SECRET_ACCESS_KEY",
      "key_id": "AWS_ACCESS_KEY_ID",
      "region": "AWS_DEFAULT_REGION",
      "session_token": "",
      "profile": ""
    },
    "s3://s3-bucket-002": {
      "secret_key": "AWS_SECRET_ACCESS_KEY_2",
      "key_id": "AWS_ACCESS_KEY_ID_2",
      "region": "AWS_DEFAULT_REGION_2",
      "session_token": "AWS_SESSION_TOKEN_2",
      "profile": "AWS_PROFILE_2"
    }
  },
  "as": {
    "": {
      "account_str": "AZURE_STORAGE_ACCOUNT",
      "account_key": "AZURE_STORAGE_KEY"
    },
    "as://Account-002/Container": {
      "account_str": "",
      "account_key": ""
    }
  }
}

In order to pull a model through Dragonfly, the following configuration needs to be added to the model's config.pbtxt file:

model_repository_agents {
  agents [
    {
      name: "dragonfly"
    }
  ]
}

The densenet_onnx example contains the modified configuration and model file. The modified config.pbtxt looks like:

name: "densenet_onnx"
platform: "onnxruntime_onnx"
max_batch_size : 0
input [
  {
    name: "data_0"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 224, 224 ]
    reshape { shape: [ 1, 3, 224, 224 ] }
  }
]
output [
  {
    name: "fc6_1"
    data_type: TYPE_FP32
    dims: [ 1000 ]
    reshape { shape: [ 1, 1000, 1, 1 ] }
    label_filename: "densenet_labels.txt"
  }
]
model_repository_agents {
  agents [
    {
      name: "dragonfly"
    }
  ]
}

Triton Server integrates Dragonfly Repository Agent plugin

Install Triton Server with Docker

Pull the dragonflyoss/dragonfly-repository-agent image, which integrates the Dragonfly Repository Agent plugin into Triton Server; refer to the Dockerfile.

docker pull dragonflyoss/dragonfly-repository-agent:latest

Run the container and mount the configuration directory:

docker run --network host --rm \
  -v ${path-to-config-dir}:/home/triton/ \
  dragonflyoss/dragonfly-repository-agent:latest tritonserver \
  --model-repository=${model-repository-path}

The correct output is as follows:

== Triton Inference Server ==
successfully loaded 'densenet_onnx'
I1130 09:43:22.595672 1]
| Repository Agent | Path                                                                   |
| dragonfly        | /opt/tritonserver/repoagents/dragonfly/ |

I1130 09:43:22.596011 1]
| Backend     | Path                                                            | Config                                                                                                                                                        |
| pytorch     | /opt/tritonserver/backends/pytorch/         | {}                                                                                                                                                            |
| onnxruntime | /opt/tritonserver/backends/onnxruntime/ | {"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}} |

I1130 09:43:22.596112 1]
| Model         | Version | Status |
| densenet_onnx | 1       | READY  |

I1130 09:43:22.598318 1] Collecting CPU metrics
I1130 09:43:22.599373 1]
| Option                           | Value                                                                                                                                                                                                           |
| server_id                        | triton                                                                                                                                                                                                          |
| server_version                   | 2.37.0                                                                                                                                                                                                          |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logging |
| model_repository_path[0]         | s3://                                                                                                                                                                                 |
| model_control_mode               | MODE_NONE                                                                                                                                                                                                       |
| strict_model_config              | 0                                                                                                                                                                                                               |
| rate_limit                       | OFF                                                                                                                                                                                                             |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                                                                                                       |
| min_supported_compute_capability | 6.0                                                                                                                                                                                                             |
| strict_readiness                 | 1                                                                                                                                                                                                               |
| exit_timeout                     | 30                                                                                                                                                                                                              |
| cache_enabled                    | 0                                                                                                                                                                                                               |

I1130 09:43:22.610334 1] Started GRPCInferenceService at 0.0.0.0:8001
I1130 09:43:22.612623 1] Started HTTPService at 0.0.0.0:8000
I1130 09:43:22.695843 1] Started Metrics Service at 0.0.0.0:8002

Execute the following command to check the Dragonfly logs:

kubectl exec -it -n dragonfly-system dragonfly-dfdaemon-<id> -- tail -f /var/log/dragonfly/daemon/core.log

Check that the model was downloaded successfully through Dragonfly:

{"level":"info","ts":"2024-02-02 05:28:02.631","msg":"peer task done, cost: 352ms",...}
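To spot these entries quickly, the JSON log lines can be filtered for the "peer task done" message. The sketch below embeds two made-up sample lines so it is self-contained; in practice the real core.log would be piped in via kubectl exec:

```shell
# Sample core.log lines (made up for illustration); in practice pipe in
# /var/log/dragonfly/daemon/core.log from the dfdaemon pod instead.
cat <<'EOF' | grep '"msg":"peer task done'
{"level":"info","ts":"2024-02-02 05:28:02.631","msg":"peer task done, cost: 352ms"}
{"level":"info","ts":"2024-02-02 05:28:03.000","msg":"peer task running"}
EOF
```

Only the completed peer-task line is printed, so an empty result means the download never finished through Dragonfly.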


Call inference API:

docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:23.08-py3-sdk /workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg

Check that the response is successful:

Request 0, batch size 1
Image '/workspace/images/mug.jpg':
    15.349563 (504) = COFFEE MUG
    13.227461 (968) = CUP
    10.424893 (505) = COFFEEPOT

Performance testing

Test the performance of single-machine model downloads through the Triton API after integrating Dragonfly P2P. Because the machine's own network environment affects the results, the absolute download time is not important; the relative download speeds across the different scenarios are more meaningful:

Bar chart showing model download time in four scenarios: Triton API; Triton API & Dragonfly Cold Boot; Hit Dragonfly Remote Peer Cache; Hit Dragonfly Local Peer Cache

The test results show that the Triton and Dragonfly integration can effectively reduce file download time. Note that this was a single-machine test, which means that in the case of cache hits the performance limit is the disk. If Dragonfly is deployed on multiple machines for P2P download, model downloads will be even faster.


Dragonfly community

NVIDIA Triton Inference Server


Dragonfly GitHub repository: https://github.com/dragonflyoss/Dragonfly2
