detectron2.export package

detectron2.export.add_export_config(cfg)[source]

Parameters

cfg (CfgNode) – a detectron2 config

Returns

CfgNode – an updated config with new options that will be used by Caffe2Tracer.

detectron2.export.export_caffe2_model(cfg, model, inputs)[source]

Export a detectron2 model to caffe2 format.

Parameters
  • cfg (CfgNode) – a detectron2 config, with extra export-related options added by add_export_config().

  • model (nn.Module) – a model built by detectron2.modeling.build_model(). It will be modified by this function.

  • inputs – sample inputs that the given model takes for inference. Will be used to trace the model.

Returns

Caffe2Model
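The export workflow above can be sketched as follows. The config path, image shape, and output directory are illustrative assumptions, and imports are deferred inside the function so the sketch can be defined even where detectron2 is not installed:

```python
# Sketch of exporting a detectron2 model to caffe2 format.
# Config path, image shape, and output directory are illustrative
# assumptions, not part of the API.
def export_to_caffe2(config_path, output_dir):
    # Deferred imports: requires detectron2 with caffe2 support.
    import torch
    from detectron2.config import get_cfg
    from detectron2.export import add_export_config, export_caffe2_model
    from detectron2.modeling import build_model

    cfg = get_cfg()
    cfg.merge_from_file(config_path)
    cfg = add_export_config(cfg)  # add the export-related options
    model = build_model(cfg)
    model.eval()

    # Sample inputs in detectron2's standard inference format,
    # used only to trace the model.
    inputs = [{"image": torch.zeros(3, 480, 640, dtype=torch.uint8)}]

    caffe2_model = export_caffe2_model(cfg, model, inputs)
    caffe2_model.save_protobuf(output_dir)
    return caffe2_model
```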

class detectron2.export.Caffe2Model(predict_net, init_net)[source]

Bases: torch.nn.modules.module.Module

A wrapper around the traced model in caffe2’s pb format.

property predict_net

Returns: core.Net: the underlying caffe2 predict net

property init_net

Returns: core.Net: the underlying caffe2 init net

save_protobuf(output_dir)[source]

Save the model as caffe2’s protobuf format.

Parameters

output_dir (str) – the output directory to save protobuf files.

save_graph(output_file, inputs=None)[source]

Save the graph in SVG format.

Parameters
  • output_file (str) – an SVG file

  • inputs – optional inputs given to the model. If given, the inputs will be used to run the graph to record shape of every tensor. The shape information will be saved together with the graph.

static load_protobuf(dir)[source]

Parameters

dir (str) – a directory in which a Caffe2Model was saved with save_protobuf(). The files “model.pb” and “model_init.pb” must be present in it.

Returns

Caffe2Model – the caffe2 model loaded from this directory.

__call__(inputs)[source]

An interface that wraps around a caffe2 model and mimics detectron2’s models’ input & output format. This is used to compare the outputs of caffe2 model with its original torch model.

Due to the extra conversion between the torch and caffe2 formats, this method is not meant for benchmarking.
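A sketch of using this interface to compare the converted model against the original torch model. The saved directory name and the zero-image tensor are illustrative assumptions; imports are deferred so the sketch can be defined without detectron2 installed:

```python
# Sketch: compare the caffe2 model's outputs with the original torch model's.
# saved_dir and the zero image are illustrative assumptions.
def compare_outputs(torch_model, saved_dir):
    # Deferred imports: requires detectron2 with caffe2 support.
    import torch
    from detectron2.export import Caffe2Model

    caffe2_model = Caffe2Model.load_protobuf(saved_dir)

    # Same input format that detectron2 models take for inference.
    inputs = [{"image": torch.zeros(3, 480, 640, dtype=torch.uint8)}]

    with torch.no_grad():
        torch_outputs = torch_model(inputs)
    caffe2_outputs = caffe2_model(inputs)  # mimics detectron2's output format
    return torch_outputs, caffe2_outputs
```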

detectron2.export.export_onnx_model(cfg, model, inputs)[source]

Export a detectron2 model to ONNX format. Note that the exported model contains custom ops only available in caffe2; therefore it cannot be directly executed by other runtimes. Post-processing or transformation passes may be applied to the model to accommodate different runtimes.

Parameters
  • cfg (CfgNode) – a detectron2 config, with extra export-related options added by add_export_config().

  • model (nn.Module) – a model built by detectron2.modeling.build_model(). It will be modified by this function.

  • inputs – sample inputs that the given model takes for inference. Will be used to trace the model.

Returns

onnx.ModelProto – an onnx model.
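The returned onnx.ModelProto can be saved with the standard onnx package. A sketch, where the output filename is an illustrative assumption and imports are deferred so the sketch can be defined without detectron2 or onnx installed:

```python
# Sketch: export to ONNX and save the resulting onnx.ModelProto.
# out_file is an illustrative assumption, not part of the API.
def export_to_onnx(cfg, model, inputs, out_file="model.onnx"):
    # Deferred imports: requires detectron2 and the onnx package.
    import onnx
    from detectron2.export import export_onnx_model

    onnx_model = export_onnx_model(cfg, model, inputs)  # onnx.ModelProto
    onnx.save(onnx_model, out_file)
    return onnx_model
```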

class detectron2.export.Caffe2Tracer(cfg, model, inputs)[source]

Bases: object

Make a detectron2 model traceable in caffe2 style.

An original detectron2 model may not be traceable, or may not be deployable directly after being traced, for reasons such as:

  1. control flow in some ops

  2. custom ops

  3. complicated pre/post processing

This class provides a traceable version of a detectron2 model by:

  1. Rewriting parts of the model using ops available in caffe2. Note that some of these ops do not have a GPU implementation.

  2. Defining the inputs “after pre-processing” as the inputs to the model

  3. Removing post-processing and producing raw layer outputs

More specifically, all builtin models take two input tensors:

  1. NCHW float “data” which is an image (usually in [0, 255])

  2. Nx3 float “im_info”, each row of which is (height, width, 1.0)
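The “im_info” layout described above can be illustrated with a small helper. The helper itself is purely illustrative and not part of the detectron2 API:

```python
# Build the Nx3 "im_info" rows described above: one (height, width, scale)
# row per image, with scale 1.0 for builtin models. Illustrative helper only.
def make_im_info(image_sizes, scale=1.0):
    return [(float(h), float(w), scale) for (h, w) in image_sizes]

print(make_im_info([(480, 640), (800, 1333)]))
# -> [(480.0, 640.0, 1.0), (800.0, 1333.0, 1.0)]
```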

After making a traceable model, the class provides methods to export it to different deployment formats.

The class currently only supports models using builtin meta architectures.

__init__(cfg, model, inputs)[source]

Parameters
  • cfg (CfgNode) – a detectron2 config, with extra export-related options added by add_export_config().

  • model (nn.Module) – a model built by detectron2.modeling.build_model(). It will be modified by this class.

  • inputs – sample inputs that the given model takes for inference. Will be used to trace the model.
export_caffe2()[source]

Export the model to Caffe2’s protobuf format. The returned object can be saved with its .save_protobuf() method. The result can be loaded and executed using the Caffe2 runtime.

Returns

Caffe2Model

export_onnx()[source]

Export the model to ONNX format. Note that the exported model contains custom ops only available in caffe2; therefore it cannot be directly executed by other runtimes. Post-processing or transformation passes may be applied to the model to accommodate different runtimes.

Returns

onnx.ModelProto – an onnx model.

export_torchscript()[source]

Export the model to a torch.jit.TracedModule by tracing. The returned object can be saved to a file with its .save() method.

Returns

torch.jit.TracedModule – a torch TracedModule
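The Caffe2Tracer workflow can be sketched end to end: build the tracer once, then export to any of the three supported formats. The output paths are illustrative assumptions, and imports are deferred so the sketch can be defined without detectron2 installed:

```python
# Sketch of the full Caffe2Tracer workflow. output_dir and the ".ts"
# filename are illustrative assumptions, not part of the API.
def export_all_formats(cfg, model, inputs, output_dir):
    # Deferred import: requires detectron2 with caffe2 support.
    from detectron2.export import Caffe2Tracer

    tracer = Caffe2Tracer(cfg, model, inputs)

    caffe2_model = tracer.export_caffe2()   # Caffe2Model
    caffe2_model.save_protobuf(output_dir)

    onnx_model = tracer.export_onnx()       # onnx.ModelProto

    ts_model = tracer.export_torchscript()  # torch.jit.TracedModule
    ts_model.save(output_dir + "/model.ts")

    return caffe2_model, onnx_model, ts_model
```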