From ca28dfff35c89e3423871b363f54cbb0bcdb16e7 Mon Sep 17 00:00:00 2001 From: Andrew Reusch Date: Tue, 20 Jul 2021 12:14:09 -0700 Subject: [PATCH 1/5] Rename runtime-config to executor-config and add documentation for Model Library Format (#8270) * Rename runtime-config to executor-config. * Add documentation. * address comments, make tests pass * fix unit test * fix sphinx doc errors * address manupa comments --- docs/dev/index.rst | 1 + docs/dev/model_library_format.rst | 169 ++++++++++++++++++ python/tvm/driver/tvmc/model.py | 4 +- python/tvm/micro/model_library_format.py | 10 +- tests/python/driver/tvmc/test_mlf.py | 5 + .../test_micro_model_library_format.py | 10 +- 6 files changed, 187 insertions(+), 12 deletions(-) create mode 100644 docs/dev/model_library_format.rst diff --git a/docs/dev/index.rst b/docs/dev/index.rst index b4fb37d790f4..76d50f496e75 100644 --- a/docs/dev/index.rst +++ b/docs/dev/index.rst @@ -423,3 +423,4 @@ microTVM :maxdepth: 1 microtvm_design + model_library_format diff --git a/docs/dev/model_library_format.rst b/docs/dev/model_library_format.rst new file mode 100644 index 000000000000..fec90de4bcea --- /dev/null +++ b/docs/dev/model_library_format.rst @@ -0,0 +1,169 @@ +.. Licensed to the Apache Software Foundation (ASF) under one + or more contributor license agreements. See the NOTICE file + distributed with this work for additional information + regarding copyright ownership. The ASF licenses this file + to you under the Apache License, Version 2.0 (the + "License"); you may not use this file except in compliance + with the License. You may obtain a copy of the License at + +.. http://www.apache.org/licenses/LICENSE-2.0 + +.. Unless required by applicable law or agreed to in writing, + software distributed under the License is distributed on an + "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + KIND, either express or implied. 
See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+Model Library Format
+====================
+
+About Model Library Format
+--------------------------
+
+TVM traditionally exports generated libraries as Dynamic Shared Objects (e.g. DLLs (Windows) or .so
+(Linux)). Inferences can be performed using those libraries by loading them into an executable using
+``libtvm_runtime.so``. This process depends heavily on services provided by a traditional OS.
+
+For deployment to unconventional platforms (e.g. those lacking a traditional OS), TVM provides another
+output format, Model Library Format. Initially, the microTVM project is the primary use case for this
+format. Should it become useful in other use cases (and in particular, should it become possible to
+export BYOC artifacts in Model Library Format), it could be used as a general-purpose TVM export
+format. Model Library Format is a tarball containing a file for each piece of the TVM compiler
+output.
+
+What can be Exported?
+---------------------
+
+At the time of writing, export is limited to full models built with ``tvm.relay.build``.
+
+Directory Layout
+----------------
+
+Model Library Format is contained within a tarball.
All paths are relative to the root of the
+tarball:
+
+- ``/`` - Root of the tarball
+
+  - ``codegen`` - Root directory for all generated device code
+
+    - (see `codegen`_ section)
+
+  - ``executor-config/`` - Configuration for the executor which drives model inference
+
+    - ``graph/`` - Root directory containing configuration for the GraphExecutor
+
+      - ``graph.json`` - GraphExecutor JSON configuration
+
+  - ``metadata.json`` - Machine-parseable metadata for this model
+
+  - ``parameters/`` - Root directory where simplified parameters are placed
+
+    - ``<model_name>.params`` - Parameters for the model, in ``tvm.relay._save_params`` format
+
+  - ``src/`` - Root directory for all source code consumed by TVM
+
+    - ``relay.txt`` - Relay source code for the generated model
+
+Description of Sub-directories
+------------------------------
+
+.. _subdir_codegen:
+
+``codegen``
+^^^^^^^^^^^
+
+All TVM-generated code is placed in this directory. At the time of writing, there is one file per
+Module in the generated Module tree, though this restriction may change in the future. Files in
+this directory should have filenames of the form ``<target>/(lib|src)/<unique_name>.<format>``.
+
+These components are described below:
+
+ * ``<target>`` - Identifies the TVM target on which the code should run. Currently, only ``host``
+   is supported.
+ * ``<unique_name>`` - A unique slug identifying this file. Currently ``lib<n>``, with ``<n>`` an
+   auto-incrementing integer.
+ * ``<format>`` - Suffix identifying the filename format. Currently ``c`` or ``o``.
+
+An example directory tree for a CPU-only model is shown below:
+
+- ``codegen/`` - Codegen directory
+
+  - ``host/`` - Generated code for ``target_host``
+
+    - ``lib/`` - Generated binary object files
+
+      - ``lib0.o`` - LLVM module (if ``llvm`` target is used)
+      - ``lib1.o`` - LLVM CRT Metadata Module (if ``llvm`` target is used)
+
+    - ``src/`` - Generated C source
+
+      - ``lib0.c`` - C module (if ``c`` target is used)
+      - ``lib1.c`` - C CRT Metadata module (if ``c`` target is used)
+
+``executor-config``
+^^^^^^^^^^^^^^^^^^^
+
+Contains machine-parseable configuration for executors which can drive model inference. Currently,
+only the GraphExecutor produces configuration for this directory, in ``graph/graph.json``. This
+file should be read in and the resulting string supplied to the ``GraphExecutor()`` constructor for
+parsing.
+
+``parameters``
+^^^^^^^^^^^^^^
+
+Contains machine-parseable parameters. A variety of formats may be provided, but at present, only
+the format produced by ``tvm.relay._save_params`` is supplied. When building with
+``tvm.relay.build``, the ``name`` parameter is considered to be the model name. A single file is
+created in this directory, ``<model_name>.params``.
+
+``src``
+^^^^^^^
+
+Contains source code parsed by TVM. Currently, just the Relay source code is created in
+``src/relay.txt``.
+
+Metadata
+--------
+
+Machine-parseable metadata is placed in a file ``metadata.json`` at the root of the tarball.
+Metadata is a dictionary with these keys:
+
+- ``export_datetime``: Timestamp when this Model Library Format was generated, in
+  ``strftime`` format ``"%Y-%m-%d %H:%M:%SZ"``.
+- ``memory``: A summary of the memory usage of each generated function. Documented in
+  `Memory Usage Summary`_.
+- ``model_name``: The name of this model (e.g. the ``name`` parameter supplied to
+  ``tvm.relay.build``).
+- ``executors``: A list of executors supported by this model. Currently, this list is always
+  ``["graph"]``.
+- ``target``: A dictionary mapping ``device_type`` (the underlying integer, as a string) to the
+  sub-target which describes the Relay backend used for that ``device_type``.
+- ``version``: A numeric version number that identifies the format used in this Model Library
+  Format. This number is incremented when the metadata structure or on-disk structure changes.
+  This document reflects version ``5``.
+
+Memory Usage Summary
+^^^^^^^^^^^^^^^^^^^^
+
+A dictionary with these sub-keys:
+
+ - ``"main"``: ``list[MainFunctionWorkspaceUsage]``. A list summarizing memory usage for each
+   workspace used by the main function and all sub-functions invoked.
+ - ``"operator_functions"``: ``map[string, list[FunctionWorkspaceUsage]]``. Maps operator function
+   name to a list summarizing memory usage for each workspace used by the function.
+
+A ``MainFunctionWorkspaceUsage`` is a dict with these keys:
+
+- ``"device"``: ``int``. The ``device_type`` associated with this workspace.
+- ``"workspace_size_bytes"``: ``int``. Number of bytes needed in this workspace by this function
+  and all sub-functions invoked.
+- ``"constants_size_bytes"``: ``int``. Size of the constants used by the main function.
+- ``"io_size_bytes"``: ``int``. Sum of the sizes of the buffers used from this workspace by this
+  function and sub-functions.
+
+A ``FunctionWorkspaceUsage`` is a dict with these keys:
+
+- ``"device"``: ``int``. The ``device_type`` associated with this workspace.
+- ``"workspace_size_bytes"``: ``int``. Number of bytes needed in this workspace by this function.
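The metadata schema above can be consumed with nothing beyond the Python standard library. The sketch below is illustrative only: it builds a throwaway tarball containing just a ``metadata.json`` with hypothetical values matching the version-5 schema, then reads it back. The ``./metadata.json`` member name assumes the archive stores paths relative to its root, as the exporter in this patch does (``arcname="."``).

```python
import io
import json
import os
import tarfile
import tempfile

# Hypothetical metadata following the version-5 schema described above.
METADATA = {
    "version": 5,
    "model_name": "demo",
    "export_datetime": "2021-07-20 12:00:00Z",
    "executors": ["graph"],
    "target": {"1": "llvm"},
    "memory": {
        "main": [
            {"device": 1, "workspace_size_bytes": 1024,
             "constants_size_bytes": 0, "io_size_bytes": 256}
        ],
        "operator_functions": {},
    },
}


def read_mlf_metadata(path):
    """Parse metadata.json from the root of a Model Library Format tarball."""
    with tarfile.open(path) as tar:
        # Member paths may or may not carry a leading "./" prefix.
        for name in ("./metadata.json", "metadata.json"):
            try:
                return json.load(tar.extractfile(name))
            except KeyError:
                continue
        raise FileNotFoundError("metadata.json not found in archive")


with tempfile.TemporaryDirectory() as d:
    tar_path = os.path.join(d, "model.tar")
    payload = json.dumps(METADATA).encode("utf-8")
    # Write a minimal archive containing only the metadata file.
    with tarfile.open(tar_path, "w") as tar:
        info = tarfile.TarInfo("./metadata.json")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    meta = read_mlf_metadata(tar_path)

# Sum main-function workspace bytes across all devices.
total_workspace = sum(w["workspace_size_bytes"] for w in meta["memory"]["main"])
print(total_workspace)  # → 1024
```

A real consumer would point ``read_mlf_metadata`` at a tarball produced by ``tvm.micro.export_model_library_format`` and branch on ``meta["version"]`` before trusting the rest of the schema.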
diff --git a/python/tvm/driver/tvmc/model.py b/python/tvm/driver/tvmc/model.py index 8c8828ddd49b..7dc3fd4cdd36 100644 --- a/python/tvm/driver/tvmc/model.py +++ b/python/tvm/driver/tvmc/model.py @@ -336,8 +336,8 @@ def import_package(self, package_path: str): with open(temp.relpath("metadata.json")) as metadata_json: metadata = json.load(metadata_json) - is_graph_runtime = "graph" in metadata["runtimes"] - graph = temp.relpath("runtime-config/graph/graph.json") if is_graph_runtime else None + has_graph_executor = "graph" in metadata["executors"] + graph = temp.relpath("executor-config/graph/graph.json") if has_graph_executor else None params = temp.relpath("parameters/default.params") self.type = "mlf" diff --git a/python/tvm/micro/model_library_format.py b/python/tvm/micro/model_library_format.py index 87c067051f82..ad49ee7d9578 100644 --- a/python/tvm/micro/model_library_format.py +++ b/python/tvm/micro/model_library_format.py @@ -225,7 +225,7 @@ def reset(tarinfo): tar_f.add(str(source_dir), arcname=".", filter=reset) -_GENERATED_VERSION = 4 +_GENERATED_VERSION = 5 def _export_graph_model_library_format( @@ -241,7 +241,7 @@ def _export_graph_model_library_format( Temporary directory to populate with Model Library Format contents. 
""" is_aot = isinstance(mod, executor_factory.AOTExecutorFactoryModule) - runtime = ["aot"] if is_aot else ["graph"] + executor = ["aot"] if is_aot else ["graph"] metadata = { "version": _GENERATED_VERSION, @@ -249,7 +249,7 @@ def _export_graph_model_library_format( "export_datetime": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%SZ"), "memory": _build_memory_map(mod), "target": {int(k): str(v) for k, v in mod.target.items()}, - "runtimes": runtime, + "executors": executor, "style": "full-model", } @@ -272,7 +272,7 @@ def _export_graph_model_library_format( f.write(str(mod.ir_mod)) if not is_aot: - graph_config_dir = tempdir / "runtime-config" / "graph" + graph_config_dir = tempdir / "executor-config" / "graph" graph_config_dir.mkdir(parents=True) with open(graph_config_dir / "graph.json", "w") as f: f.write(mod.get_executor_config()) @@ -363,7 +363,7 @@ def _export_operator_model_library_format(mod: build_module.OperatorModule, temp "export_datetime": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%SZ"), "memory": memory_map, "target": {k: str(v) for k, v in targets.items()}, - "runtimes": [], + "executors": [], "style": "operator", } with open(tempdir / "metadata.json", "w") as metadata_f: diff --git a/tests/python/driver/tvmc/test_mlf.py b/tests/python/driver/tvmc/test_mlf.py index 4669fab916a6..0426f5678153 100644 --- a/tests/python/driver/tvmc/test_mlf.py +++ b/tests/python/driver/tvmc/test_mlf.py @@ -18,6 +18,7 @@ import pytest import os import shlex +import sys import tvm from tvm.driver import tvmc @@ -130,3 +131,7 @@ def test_tvmc_import_package_mlf_aot(tflite_mobilenet_v1_1_quant, tflite_compile assert tvmc_package.graph is None, ".graph must not be set in the MLF archive for AOT executor." assert tvmc_package.params is not None, ".params must be set in the MLF archive." assert tvmc_package.type == "mlf", ".type must be set to 'mlf' in the MLF format." 
+ + +if __name__ == "__main__": + sys.exit(pytest.main([__file__] + sys.argv[1:])) diff --git a/tests/python/unittest/test_micro_model_library_format.py b/tests/python/unittest/test_micro_model_library_format.py index 246c0336a001..a15e37925eea 100644 --- a/tests/python/unittest/test_micro_model_library_format.py +++ b/tests/python/unittest/test_micro_model_library_format.py @@ -56,7 +56,7 @@ def test_export_operator_model_library_format(): with open(os.path.join(extract_dir, "metadata.json")) as json_f: metadata = json.load(json_f) - assert metadata["version"] == 4 + assert metadata["version"] == 5 assert metadata["model_name"] == "add" export_datetime = datetime.datetime.strptime( metadata["export_datetime"], "%Y-%m-%d %H:%M:%SZ" @@ -89,7 +89,7 @@ def test_export_operator_model_library_format(): def validate_graph_json(extract_dir, factory): - with open(os.path.join(extract_dir, "runtime-config", "graph", "graph.json")) as graph_f: + with open(os.path.join(extract_dir, "executor-config", "graph", "graph.json")) as graph_f: graph_json = graph_f.read() assert graph_json == factory.graph_json @@ -141,7 +141,7 @@ def @main(%a : Tensor[(1, 2), uint8], %b : Tensor[(1, 2), float32], %c : Tensor[ with open(os.path.join(extract_dir, "metadata.json")) as json_f: metadata = json.load(json_f) - assert metadata["version"] == 4 + assert metadata["version"] == 5 assert metadata["model_name"] == "add" export_datetime = datetime.datetime.strptime( metadata["export_datetime"], "%Y-%m-%d %H:%M:%SZ" @@ -221,7 +221,7 @@ def @main(%a : Tensor[(1, 2), uint8], %b : Tensor[(1, 2), float32], %c : Tensor[ with open(os.path.join(extract_dir, "metadata.json")) as json_f: metadata = json.load(json_f) - assert metadata["version"] == 4 + assert metadata["version"] == 5 assert metadata["model_name"] == "add" export_datetime = datetime.datetime.strptime( metadata["export_datetime"], "%Y-%m-%d %H:%M:%SZ" @@ -300,7 +300,7 @@ def @main(%p0: Tensor[(1, 56, 56, 128), int16], %p1: Tensor[(3, 3, 128, 
1), int1 with open(os.path.join(extract_dir, "metadata.json")) as json_f: metadata = json.load(json_f) - assert metadata["version"] == 4 + assert metadata["version"] == 5 assert metadata["model_name"] == "qnn_conv2d" export_datetime = datetime.datetime.strptime( metadata["export_datetime"], "%Y-%m-%d %H:%M:%SZ" From 1141709235d0f54c5fb02a7f567b833a81fe352b Mon Sep 17 00:00:00 2001 From: Qiang Zhang Date: Wed, 21 Jul 2021 04:08:54 +0800 Subject: [PATCH 2/5] Fix 8093, Enhance Buffer Index Simplify (#8204) --- src/tir/ir/buffer.cc | 9 ++++++++- tests/python/unittest/test_tir_buffer.py | 17 +++++++++++++++++ 2 files changed, 25 insertions(+), 1 deletion(-) diff --git a/src/tir/ir/buffer.cc b/src/tir/ir/buffer.cc index e2fcf89d8966..435870a5e5cc 100644 --- a/src/tir/ir/buffer.cc +++ b/src/tir/ir/buffer.cc @@ -32,6 +32,8 @@ #include #include +#include "../../arith/pattern_match.h" + namespace tvm { namespace tir { @@ -181,7 +183,12 @@ inline PrimExpr MergeMulMod(arith::Analyzer* analyzer, const PrimExpr& base) { // a list that contain all the elements that match Mod. // The elements in the Mod will be used to match against the elements in Mul. // The result will then be split and pushed back to these two lists. 
-  PrimExpr simplified_base = analyzer->Simplify(base);
+  PrimExpr simplified_base = base;
+  arith::PVar<PrimExpr> x, y;
+  if ((floordiv(x, y) * y + floormod(x, y)).Match(simplified_base)) {
+    simplified_base = x.Eval();
+  }
+  simplified_base = analyzer->Simplify(simplified_base);
   std::vector<const PrimExpr*> eles = ExprSplitAddition(simplified_base);
   std::list<PrimExpr> mult_exprs;
   std::list<std::pair<PrimExpr, PrimExpr> > mod_exprs;
diff --git a/tests/python/unittest/test_tir_buffer.py b/tests/python/unittest/test_tir_buffer.py
index 83377e443764..42f9c34133df 100644
--- a/tests/python/unittest/test_tir_buffer.py
+++ b/tests/python/unittest/test_tir_buffer.py
@@ -131,6 +131,23 @@ def assert_simplified_equal(index_simplified, index_direct):
     )
     assert_simplified_equal(index_simplified, index_direct)
 
+    # Test Case5
+    B = tvm.tir.decl_buffer((1, 14, 14, 1024))
+    i = te.size_var("i")
+    j = te.size_var("j")
+    k = te.size_var("k")
+
+    index_simplified = B.vload(
+        (
+            idxd(idxd(idxd((i * 50176 + j * 28672 + k), 1024), 14), 14),
+            idxm(idxd(idxd((i * 50176 + j * 28672 + k), 1024), 14), 14),
+            idxm(idxd((i * 50176 + j * 28672 + k), 1024), 14),
+            idxm((i * 50176 + j * 28672 + k), 1024),
+        )
+    )
+    index_direct = B.vload((0, 0, 0, (i * 50176 + j * 28672 + k)))
+    assert_simplified_equal(index_simplified, index_direct)
+
 
 @tvm.testing.requires_llvm
 def test_buffer_broadcast():

From 1a1be09c24a5f4f02e8f4f4f07f837353a65800b Mon Sep 17 00:00:00 2001
From: masahi
Date: Wed, 21 Jul 2021 05:19:46 +0900
Subject: [PATCH 3/5] [Refactor] Remove scope attribute from Buffer class (#8463)

Co-authored-by: masa
---
 include/tvm/tir/buffer.h                    | 15 +++++-----
 include/tvm/topi/detail/extern.h            |  2 +-
 python/tvm/script/special_stmt.py           |  2 +-
 python/tvm/tir/buffer.py                    | 10 ++++++-
 src/driver/driver_api.cc                    |  2 +-
 src/printer/tir_text_printer.cc             |  4 +--
 src/printer/tvmscript_printer.cc            |  4 +--
 src/tir/ir/buffer.cc                        | 31 +++++++++++++--------
 src/tir/schedule/state.cc                   |  4 +--
 src/tir/transforms/arg_binder.cc            |  2 +-
 src/tir/transforms/bf16_legalize.cc         |  4 +--
src/tir/transforms/compact_buffer_region.cc | 2 +- src/tir/transforms/flatten_buffer.cc | 5 +--- src/tir/transforms/inject_copy_intrin.cc | 6 ++-- src/tir/transforms/storage_flatten.cc | 4 +-- vta/python/vta/transform.py | 22 +++++++-------- 16 files changed, 65 insertions(+), 54 deletions(-) diff --git a/include/tvm/tir/buffer.h b/include/tvm/tir/buffer.h index 2507262c087f..28d202cb50a9 100644 --- a/include/tvm/tir/buffer.h +++ b/include/tvm/tir/buffer.h @@ -67,8 +67,6 @@ class BufferNode : public Object { // Meta data /*! \brief optional name of the buffer */ String name; - /*! \brief storage scope of the buffer, if other than global */ - String scope; /*! \brief Alignment requirement of data pointer in bytes. */ int data_alignment; /*! @@ -93,7 +91,6 @@ class BufferNode : public Object { v->Visit("strides", &strides); v->Visit("elem_offset", &elem_offset); v->Visit("name", &name); - v->Visit("scope", &scope); v->Visit("data_alignment", &data_alignment); v->Visit("offset_factor", &offset_factor); v->Visit("buffer_type", &buffer_type); @@ -105,7 +102,7 @@ class BufferNode : public Object { // in its semantics, skip name as name is not important. return equal.DefEqual(data, other->data) && equal(dtype, other->dtype) && equal.DefEqual(shape, other->shape) && equal.DefEqual(strides, other->strides) && - equal.DefEqual(elem_offset, other->elem_offset) && equal(scope, other->scope) && + equal.DefEqual(elem_offset, other->elem_offset) && equal(data_alignment, other->data_alignment) && equal(buffer_type, other->buffer_type); } @@ -115,7 +112,6 @@ class BufferNode : public Object { hash_reduce.DefHash(shape); hash_reduce.DefHash(strides); hash_reduce.DefHash(elem_offset); - hash_reduce(scope); hash_reduce(data_alignment); hash_reduce(buffer_type); } @@ -141,8 +137,8 @@ class Buffer : public ObjectRef { // User can specify data_alignment and offset_factor to be 0 // A default value will be picked. 
TVM_DLL Buffer(Var ptr, DataType dtype, Array shape, Array strides, - PrimExpr elem_offset, String name, String scope, int data_alignment, - int offset_factor, BufferType buffer_type, Span span = Span()); + PrimExpr elem_offset, String name, int data_alignment, int offset_factor, + BufferType buffer_type, Span span = Span()); /*! * \brief Return a new buffer that is equivalent with current one @@ -182,6 +178,11 @@ class Buffer : public ObjectRef { */ TVM_DLL Stmt vstore(Array begin, PrimExpr value) const; + /*! + * \brief Return the storage scope associated with this buffer. + */ + TVM_DLL String scope() const; + TVM_DEFINE_OBJECT_REF_METHODS(Buffer, ObjectRef, BufferNode); TVM_DEFINE_OBJECT_REF_COW_METHOD(BufferNode); }; diff --git a/include/tvm/topi/detail/extern.h b/include/tvm/topi/detail/extern.h index caca1e85e520..2561f8d1ca27 100644 --- a/include/tvm/topi/detail/extern.h +++ b/include/tvm/topi/detail/extern.h @@ -48,7 +48,7 @@ using namespace tvm::te; inline Buffer DeclExternBuffer(Array shape, DataType dtype, std::string name) { auto data = var(name, DataType::Handle()); auto elem_offset = PrimExpr(); - return Buffer(data, dtype, shape, Array(), elem_offset, name, "", -1, 0, kDefault); + return Buffer(data, dtype, shape, Array(), elem_offset, name, -1, 0, kDefault); } /*! 
diff --git a/python/tvm/script/special_stmt.py b/python/tvm/script/special_stmt.py index e40bc2fda6eb..24b2f3af9ab0 100644 --- a/python/tvm/script/special_stmt.py +++ b/python/tvm/script/special_stmt.py @@ -463,7 +463,7 @@ def match_buffer_region( data=None, strides=strides, elem_offset=elem_offset, - scope=buffer_region.buffer.scope, + scope=buffer_region.buffer.scope(), data_alignment=align, offset_factor=offset_factor, span=span, diff --git a/python/tvm/tir/buffer.py b/python/tvm/tir/buffer.py index 086d93f49a2b..b445bcb25005 100644 --- a/python/tvm/tir/buffer.py +++ b/python/tvm/tir/buffer.py @@ -134,6 +134,15 @@ def vstore(self, begin, value): begin = (begin,) if isinstance(begin, (int, PrimExpr)) else begin return _ffi_api.BufferVStore(self, begin, value) # type: ignore + def scope(self): + """Return the storage scope associated with this buffer. + Returns + ------- + scope : str + The storage scope associated with this buffer. + """ + return _ffi_api.BufferStorageScope(self) # type: ignore + def decl_buffer( shape, @@ -260,7 +269,6 @@ def decl_buffer( strides, elem_offset, name, - scope, data_alignment, offset_factor, buffer_type, diff --git a/src/driver/driver_api.cc b/src/driver/driver_api.cc index 50f00140df9b..b0434049f60f 100644 --- a/src/driver/driver_api.cc +++ b/src/driver/driver_api.cc @@ -88,7 +88,7 @@ tir::Buffer BufferWithOffsetAlignment(Array shape, DataType dtype, std elem_offset = PrimExpr(); } - return tir::Buffer(data, dtype, shape, Array(), elem_offset, name, "", data_alignment, + return tir::Buffer(data, dtype, shape, Array(), elem_offset, name, data_alignment, offset_factor, buffer_type); } diff --git a/src/printer/tir_text_printer.cc b/src/printer/tir_text_printer.cc index 0fefb0515e49..0f3b89932b68 100644 --- a/src/printer/tir_text_printer.cc +++ b/src/printer/tir_text_printer.cc @@ -204,8 +204,8 @@ Doc TIRTextPrinter::BufferNode2Doc(const BufferNode* buf, Doc doc) { if (!is_zero(buf->elem_offset)) { doc << ", elem_offset=" << 
Print(buf->elem_offset); } - if (buf->scope != "global") { - doc << ", scope=" << Doc::StrLiteral(buf->scope); + if (GetRef(buf).scope() != "global") { + doc << ", scope=" << Doc::StrLiteral(GetRef(buf).scope()); } if (buf->data_alignment != 128) { doc << ", align=" << buf->data_alignment; diff --git a/src/printer/tvmscript_printer.cc b/src/printer/tvmscript_printer.cc index e855712617ca..01f79bd0c750 100644 --- a/src/printer/tvmscript_printer.cc +++ b/src/printer/tvmscript_printer.cc @@ -302,8 +302,8 @@ Doc TVMScriptPrinter::AllocBufferDeclaration(const Buffer& buf) { } else { doc << ", elem_offset=" << Print(buf->elem_offset); } - if (buf->scope != "global") { - doc << ", scope=" << Doc::StrLiteral(buf->scope); + if (buf.scope() != "global") { + doc << ", scope=" << Doc::StrLiteral(buf.scope()); } if (buf->data_alignment != -1) { doc << ", align=" << buf->data_alignment; diff --git a/src/tir/ir/buffer.cc b/src/tir/ir/buffer.cc index 435870a5e5cc..335ff19dd775 100644 --- a/src/tir/ir/buffer.cc +++ b/src/tir/ir/buffer.cc @@ -51,7 +51,7 @@ Buffer decl_buffer(Array shape, DataType dtype, String name, String st Span span) { DataType storage_dtype = (dtype == DataType::Bool() ? 
DataType::Int(8) : dtype); return Buffer(Var(name, PointerType(PrimType(storage_dtype), storage_scope), span), dtype, shape, - Array(), PrimExpr(), name, "", 0, 0, kDefault, span); + Array(), PrimExpr(), name, 0, 0, kDefault, span); } // Split the given expression w.r.t the add operator @@ -319,6 +319,15 @@ Stmt Buffer::vstore(Array begin, PrimExpr value) const { } } +String Buffer::scope() const { + const auto* ptr_type = (*this)->data->type_annotation.as(); + ICHECK(ptr_type) << "Buffer variable is not of pointer type"; + if (ptr_type->storage_scope.empty()) { + return "global"; + } + return ptr_type->storage_scope; +} + Buffer Buffer::MakeStrideView() const { if ((*this)->strides.size() != 0) return *this; if ((*this)->shape.size() == 0) return *this; @@ -358,7 +367,7 @@ Buffer Buffer::MakeSlice(Array begins, Array extents) const return MakeStrideView().MakeSlice(begins, extents); } } - return Buffer(n->data, n->dtype, extents, strides, elem_offset, n->name + "_slice", n->scope, + return Buffer(n->data, n->dtype, extents, strides, elem_offset, n->name + "_slice", n->data_alignment, 0, n->buffer_type); } @@ -391,8 +400,8 @@ PrimExpr Buffer::access_ptr(int access_mask, DataType ptr_type, int content_lane } Buffer::Buffer(Var data, DataType dtype, Array shape, Array strides, - PrimExpr elem_offset, String name, String scope, int data_alignment, - int offset_factor, BufferType buffer_type, Span span) { + PrimExpr elem_offset, String name, int data_alignment, int offset_factor, + BufferType buffer_type, Span span) { DataType storage_dtype = dtype; // specially handle bool if (storage_dtype == DataType::Bool()) { @@ -409,10 +418,6 @@ Buffer::Buffer(Var data, DataType dtype, Array shape, Array n->shape = std::move(shape); n->strides = std::move(strides); n->name = std::move(name); - if (scope.length() == 0) { - scope = "global"; - } - n->scope = std::move(scope); if (!elem_offset.defined()) { elem_offset = make_const(n->DefaultIndexType(), 0); } @@ -444,11 +449,11 @@ 
TVM_STATIC_IR_FUNCTOR(ReprPrinter, vtable) TVM_REGISTER_NODE_TYPE(BufferNode); TVM_REGISTER_GLOBAL("tir.Buffer").set_body([](TVMArgs args, TVMRetValue* ret) { - ICHECK_EQ(args.size(), 11); - auto buffer_type = args[9].operator String(); + ICHECK_EQ(args.size(), 10); + auto buffer_type = args[8].operator String(); BufferType type = (buffer_type == "auto_broadcast") ? kAutoBroadcast : kDefault; - *ret = Buffer(args[0], args[1], args[2], args[3], args[4], args[5], args[6], args[7], args[8], - type, args[10]); + *ret = + Buffer(args[0], args[1], args[2], args[3], args[4], args[5], args[6], args[7], type, args[9]); }); TVM_REGISTER_GLOBAL("tir.BufferAccessPtr").set_body_method(&Buffer::access_ptr); @@ -457,5 +462,7 @@ TVM_REGISTER_GLOBAL("tir.BufferVLoad").set_body_method(&Buffer::vload); TVM_REGISTER_GLOBAL("tir.BufferVStore").set_body_method(&Buffer::vstore); +TVM_REGISTER_GLOBAL("tir.BufferStorageScope").set_body_method(&Buffer::scope); + } // namespace tir } // namespace tvm diff --git a/src/tir/schedule/state.cc b/src/tir/schedule/state.cc index ca61dfea2768..906f5aaabe08 100644 --- a/src/tir/schedule/state.cc +++ b/src/tir/schedule/state.cc @@ -43,7 +43,7 @@ Array AnalyzeRegionUpperBound(const BufferRegion& region, AsIntSet(LoopDomainOfSRefTreePath( /*low_inclusive=*/dom_low_inclusive, /*high_exclusive=*/dom_high_exclusive, - /*extra_relax_scope=*/runtime::StorageScope::Create(region->buffer->scope)))); + /*extra_relax_scope=*/runtime::StorageScope::Create(region->buffer.scope())))); } /*! 
@@ -67,7 +67,7 @@ Array AnalyzeRegionLowerBound(const BlockRealize& realize, LoopDomainOfSRefTreePath( /*low_inclusive=*/dom_low_inclusive, /*high_exclusive=*/dom_high_exclusive, - /*extra_relax_scope=*/runtime::StorageScope::Create(region->buffer->scope)), + /*extra_relax_scope=*/runtime::StorageScope::Create(region->buffer.scope())), /*predicate=*/realize->predicate, /*analyzer=*/analyzer)) { return result.value(); } diff --git a/src/tir/transforms/arg_binder.cc b/src/tir/transforms/arg_binder.cc index 9cd29357f8c7..293c990d2745 100644 --- a/src/tir/transforms/arg_binder.cc +++ b/src/tir/transforms/arg_binder.cc @@ -88,7 +88,7 @@ void ArgBinder::BindArray(const Array& arg, const Array& val void ArgBinder::BindBuffer(const Buffer& arg, const Buffer& value, const std::string& arg_name, bool fuzzy_match) { - ICHECK_EQ(arg->scope, value->scope) << "Argument " << arg_name << " Buffer bind scope mismatch"; + ICHECK_EQ(arg.scope(), value.scope()) << "Argument " << arg_name << " Buffer bind scope mismatch"; ICHECK_EQ(arg->dtype, value->dtype) << "Argument " << arg_name << " Buffer bind data type mismatch"; if (value->data_alignment % arg->data_alignment != 0) { diff --git a/src/tir/transforms/bf16_legalize.cc b/src/tir/transforms/bf16_legalize.cc index 7a8789457923..76845cbebd2a 100644 --- a/src/tir/transforms/bf16_legalize.cc +++ b/src/tir/transforms/bf16_legalize.cc @@ -323,8 +323,8 @@ class BF16LowerRewriter : public StmtExprMutator { DataType dtype = DataType::UInt(16, oldbuf->dtype.lanes()); Var buffer_var = Var(oldbuf->data->name_hint, PointerType(PrimType(dtype))); auto newbuf = Buffer(buffer_var, dtype, oldbuf->shape, oldbuf->strides, oldbuf->elem_offset, - oldbuf->name, oldbuf->scope, oldbuf->data_alignment, - oldbuf->offset_factor, oldbuf->buffer_type); + oldbuf->name, oldbuf->data_alignment, oldbuf->offset_factor, + oldbuf->buffer_type); buffer_remap_[oldbuf] = newbuf; var_remap_[oldbuf->data] = buffer_var; changes.emplace_back(itr.first, newbuf); diff --git 
a/src/tir/transforms/compact_buffer_region.cc b/src/tir/transforms/compact_buffer_region.cc index edbafe27cf13..f69a9e54afa4 100644 --- a/src/tir/transforms/compact_buffer_region.cc +++ b/src/tir/transforms/compact_buffer_region.cc @@ -203,7 +203,7 @@ class BufferAccessRegionCollector : public StmtExprVisitor { std::unordered_map dom_map; for (const ForNode* loop : ancestor_loops_) { const VarNode* loop_var = loop->loop_var.get(); - if (NeedRelaxThread(GetRef(loop), runtime::StorageScope::Create(buffer->scope))) { + if (NeedRelaxThread(GetRef(loop), runtime::StorageScope::Create(buffer.scope()))) { dom_map[loop_var] = IntSetFromMinExtent(loop->min, loop->extent); } } diff --git a/src/tir/transforms/flatten_buffer.cc b/src/tir/transforms/flatten_buffer.cc index 07f7b42fe2eb..88c254a8cb5e 100644 --- a/src/tir/transforms/flatten_buffer.cc +++ b/src/tir/transforms/flatten_buffer.cc @@ -127,10 +127,7 @@ class BufferFlattener : public StmtExprMutator { } static Stmt MakeAllocStmt(const Buffer& buffer, Stmt body) { - String storage_scope = buffer->scope; - if (storage_scope.empty()) { - storage_scope = "global"; - } + String storage_scope = buffer.scope(); PrimExpr area = BufferArea(buffer); body = Allocate(buffer->data, buffer->dtype, {area}, const_true(), std::move(body)); body = AttrStmt(buffer->data, attr::storage_scope, StringImm(storage_scope), std::move(body)); diff --git a/src/tir/transforms/inject_copy_intrin.cc b/src/tir/transforms/inject_copy_intrin.cc index f7443c74c0f7..40f0e368d93d 100644 --- a/src/tir/transforms/inject_copy_intrin.cc +++ b/src/tir/transforms/inject_copy_intrin.cc @@ -148,11 +148,9 @@ class CopyIntrinInjector : public StmtMutator { dst_strides.push_back(make_const(DataType::Int(32), 1)); } Buffer dst = Buffer(store->buffer_var, store->value.dtype(), dst_shape, dst_strides, - store_strides[loop_var_size], store->buffer_var->name_hint, - GetStorageScope(store->buffer_var.get()), 0, 0, kDefault); + store_strides[loop_var_size], 
store->buffer_var->name_hint, 0, 0, kDefault); Buffer src = Buffer(load->buffer_var, load->dtype, src_shape, src_strides, src_elem_offset, - load->buffer_var->name_hint, GetStorageScope(load->buffer_var.get()), 0, 0, - kDefault); + load->buffer_var->name_hint, 0, 0, kDefault); *out = flower_copy_fromto_(src, dst, pad_before, pad_after, pad_value); ICHECK(out->defined()) << "flower function did not return correct stmt"; return true; diff --git a/src/tir/transforms/storage_flatten.cc b/src/tir/transforms/storage_flatten.cc index 0db86130a8da..5de22fe8665d 100644 --- a/src/tir/transforms/storage_flatten.cc +++ b/src/tir/transforms/storage_flatten.cc @@ -198,7 +198,7 @@ class StorageFlattener : public StmtExprMutator { auto new_var = Var(op->buffer->data->name_hint, PointerType(ptr_type->element_type, skey.to_string())); e.buffer = Buffer(new_var, op->buffer->dtype, shape, strides, PrimExpr(), op->buffer->name, - skey.to_string(), align, 0, kDefault); + align, 0, kDefault); buf_map_[key] = e; Stmt body = this->VisitStmt(op->body); @@ -224,7 +224,7 @@ class StorageFlattener : public StmtExprMutator { ret = Allocate(e.buffer->data, storage_type, shape, make_const(DataType::Bool(e.buffer->dtype.lanes()), true), body); } - ret = AttrStmt(e.buffer->data, attr::storage_scope, StringImm(e.buffer->scope), ret); + ret = AttrStmt(e.buffer->data, attr::storage_scope, StringImm(skey.to_string()), ret); if (create_bound_attributes_ && ShapeIsValid(e.buffer->shape)) { ret = AttrStmt(e.buffer->data, tir::attr::buffer_bound, diff --git a/vta/python/vta/transform.py b/vta/python/vta/transform.py index 7c7d02b40fbb..383841f19e34 100644 --- a/vta/python/vta/transform.py +++ b/vta/python/vta/transform.py @@ -495,21 +495,21 @@ def _inject_copy(src, dst, pad_before, pad_after, pad_value): # FIXME: pad_value is ignored... 
        env = get_env()
        _ = pad_value
-        if dst.scope == "global":
+        if dst.scope() == "global":
            # Store
            if pad_before or pad_after:
                raise RuntimeError("Do not support copy into DRAM with pad")
-            if src.scope == env.acc_scope:
+            if src.scope() == env.acc_scope:
                elem_width = env.OUT_WIDTH
                elem_bytes = env.OUT_ELEM_BYTES
                mem_type = env.dev.MEM_ID_OUT
                data_type = "int%d" % env.OUT_WIDTH
                task_qid = env.dev.QID_STORE_OUT
            else:
-                raise RuntimeError("Do not support copy %s->dram" % (src.scope))
+                raise RuntimeError("Do not support copy %s->dram" % (src.scope()))
            _check_compact(src)
            x_size, y_size, x_stride, offset = _get_2d_pattern(
-                dst, elem_width, elem_bytes, data_type, src.scope, allow_fold=True
+                dst, elem_width, elem_bytes, data_type, src.scope(), allow_fold=True
            )
            irb = tvm.tir.ir_builder.create()
            irb.scope_attr(env.dev.vta_axis, "coproc_scope", env.dev.get_task_qid(task_qid))
@@ -528,27 +528,27 @@ def _inject_copy(src, dst, pad_before, pad_after, pad_value):
                )
            )
            return irb.get()
-        elif src.scope == "global":
-            if dst.scope == env.acc_scope:
+        elif src.scope() == "global":
+            if dst.scope() == env.acc_scope:
                elem_width = env.ACC_WIDTH
                elem_bytes = env.ACC_ELEM_BYTES
                mem_type = env.dev.MEM_ID_ACC
                data_type = "int%d" % env.ACC_WIDTH
                task_qid = env.dev.QID_LOAD_OUT
-            elif dst.scope == env.inp_scope:
+            elif dst.scope() == env.inp_scope:
                elem_width = env.INP_WIDTH
                elem_bytes = env.INP_ELEM_BYTES
                mem_type = env.dev.MEM_ID_INP
                data_type = "int%d" % env.INP_WIDTH
                task_qid = env.dev.QID_LOAD_INP
-            elif dst.scope == env.wgt_scope:
+            elif dst.scope() == env.wgt_scope:
                elem_width = env.WGT_WIDTH
                elem_bytes = env.WGT_ELEM_BYTES
                mem_type = env.dev.MEM_ID_WGT
                data_type = "int%d" % env.WGT_WIDTH
                task_qid = env.dev.QID_LOAD_WGT
            else:
-                raise RuntimeError("Do not support copy dram->%s" % (dst.scope))
+                raise RuntimeError("Do not support copy dram->%s" % (dst.scope()))
            # collect pad statistics
            if pad_before:
                assert pad_after
@@ -586,7 +586,7 @@ def _inject_copy(src, dst, pad_before, pad_after, pad_value):
            _check_compact(dst)
            x_size, y_size, x_stride, offset = _get_2d_pattern(
-                src, elem_width, elem_bytes, data_type, dst.scope, allow_fold=allow_fold
+                src, elem_width, elem_bytes, data_type, dst.scope(), allow_fold=allow_fold
            )
 
            if data_type != src.dtype:
@@ -617,7 +617,7 @@ def _inject_copy(src, dst, pad_before, pad_after, pad_value):
            return irb.get()
        else:
-            raise RuntimeError("Do not support copy %s->%s" % (src.scope, dst.scope))
+            raise RuntimeError("Do not support copy %s->%s" % (src.scope(), dst.scope()))
 
    return tvm.tir.transform.InjectCopyIntrin("dma_copy", _inject_copy)

From dbdfc444268d0a2c37bbb56da4e4895466267dc1 Mon Sep 17 00:00:00 2001
From: Matthew Brookhart
Date: Tue, 20 Jul 2021 18:26:05 -0600
Subject: [PATCH 4/5] Enable ONNX tests that needed onnxruntime 1.7.0 (#8502)

---
 tests/python/frontend/onnx/test_forward.py | 84 ++++++++++------------
 1 file changed, 39 insertions(+), 45 deletions(-)

diff --git a/tests/python/frontend/onnx/test_forward.py b/tests/python/frontend/onnx/test_forward.py
index 9328a82271d7..8b633c18977a 100644
--- a/tests/python/frontend/onnx/test_forward.py
+++ b/tests/python/frontend/onnx/test_forward.py
@@ -2675,9 +2675,8 @@ def test_convtranspose():
    def repeat(N, D):
        return tuple([N for _ in range(D)])
 
-    # TODO(mbrookhart): onnxruntime in CI only supports 2D, and 1D and 3D
    # Once onnxruntime update is complete
-    for D in [2]:
+    for D in [1, 2, 3]:
        # Convolution with padding
        verify_convtranspose_with_padding(
            (1, 1) + repeat(5, D),
@@ -2706,38 +2705,36 @@ def repeat(N, D):
            repeat(1, D),
            True,
        )
-        ## TODO(mbrookhart): renable autopad tests when CI ONNX
-        ## and ONNX runtime match versions
-        # # Convolution with autopadding
-        # verify_convtranspose_with_padding(
-        #     (1, 1) + repeat(5, D),
-        #     (1, 1) + repeat(3, D),
-        #     None,
-        #     repeat(3, D),
-        #     repeat(1, D),
-        #     repeat(1, D),
-        #     auto_pad="SAME_UPPER",
-        # )
-        # # Convolution with valid autopadding
-        # verify_convtranspose_with_padding(
-        #     (1, 1) + repeat(5, D),
-        #     (1, 1) + repeat(3, D),
-        #     None,
-        #     repeat(3, D),
-        #     repeat(1, D),
-        #     repeat(1, D),
-        #     auto_pad="VALID",
-        # )
-        # # Convolution with non uniform stride
-        # verify_convtranspose_with_padding(
-        #     (1, 1) + repeat(5, D),
-        #     (1, 1) + repeat(3, D),
-        #     None,
-        #     repeat(3, D),
-        #     repeat(2, D),
-        #     repeat(1, D),
-        #     auto_pad="SAME_UPPER",
-        # )
+        # Convolution with autopadding
+        verify_convtranspose_with_padding(
+            (1, 1) + repeat(5, D),
+            (1, 1) + repeat(3, D),
+            None,
+            repeat(3, D),
+            repeat(1, D),
+            repeat(1, D),
+            auto_pad="SAME_UPPER",
+        )
+        # Convolution with valid autopadding
+        verify_convtranspose_with_padding(
+            (1, 1) + repeat(5, D),
+            (1, 1) + repeat(3, D),
+            None,
+            repeat(3, D),
+            repeat(1, D),
+            repeat(1, D),
+            auto_pad="VALID",
+        )
+        # Convolution with non uniform stride
+        verify_convtranspose_with_padding(
+            (1, 1) + repeat(5, D),
+            (1, 1) + repeat(3, D),
+            None,
+            repeat(3, D),
+            repeat(2, D),
+            repeat(1, D),
+            auto_pad="SAME_UPPER",
+        )
        # Convolution with dilation
        # TODO(mbrookhart): Relay doesn't currently support convtranspose with dilation
        # verify_convtranspose_with_padding(
@@ -3597,17 +3594,14 @@ def verify(ishape, oshape, scales, mode, coord_trans="asymmetric", alpha=0.5, ex
                verify([1, 16] + [32] * ndim, [], [1, 1] + [0.5] * ndim, method, coord_trans)
                verify([1, 16] + [32] * ndim, [], [1, 1] + [2] * ndim, method, coord_trans)
 
-        if ndim == 2:
-            ## TODO(mbrookhart): ONNX Runtime in CI only supports 2D linear resize
-            ## Remove this condition when updating CI
-            method = "linear"
-            # upsampling
-            verify([1, 16] + [32] * ndim, [1, 16] + [64] * ndim, [], method)
-            # downsampling
-            verify([1, 16] + [32] * ndim, [1, 16] + [16] * ndim, [], method)
-            # scales are specified instead of sizes
-            verify([1, 16] + [32] * ndim, [], [1, 1] + [0.5] * ndim, method)
-            verify([1, 16] + [32] * ndim, [], [1, 1] + [2] * ndim, method)
+        method = "linear"
+        # upsampling
+        verify([1, 16] + [32] * ndim, [1, 16] + [64] * ndim, [], method)
+        # downsampling
+        verify([1, 16] + [32] * ndim, [1, 16] + [16] * ndim, [], method)
+        # scales are specified instead of sizes
+        verify([1, 16] + [32] * ndim, [], [1, 1] + [0.5] * ndim, method)
+        verify([1, 16] + [32] * ndim, [], [1, 1] + [2] * ndim, method)
 
        if ndim == 2:
            # ONNX Runtime only supports cubic interpolation for 2D images

From 78142b66a6bcbc50d892a34233f3586ab8c21b8a Mon Sep 17 00:00:00 2001
From: Tianqi Chen
Date: Wed, 21 Jul 2021 09:14:39 -0400
Subject: [PATCH 5/5] Organize the CodeOwners file: (#8512)

- Order by depth first
- Always show the complete prefix
---
 .github/CODEOWNERS | 154 ++++++++++++++++++++++++++++-----------------
 1 file changed, 96 insertions(+), 58 deletions(-)

diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
index 5beff7235314..58ee78787176 100644
--- a/.github/CODEOWNERS
+++ b/.github/CODEOWNERS
@@ -26,95 +26,133 @@
 # We normally recommend committers to shepherd code in their area of expertise.
 * @apache/tvm-committers
 
+# Order is important; the last matching pattern takes the most precedence.
+# The sub modules should be ordered first by depth.
+# Making sure we append new sub-module rules after existing module rules.
-# automation related
-**/auto_scheduler/** @merrymercy @jcf94 @comaniac @junrushao1994 @vinx13
-**/autotvm/** @merrymercy @jcf94 @comaniac @junrushao1994 @vinx13
+##############################
+# Top-level Fallbacks
+##############################
+include/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
+src/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
+apps/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
+python/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
 
-# TIR
-**/tir/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
+# Thirdparty license audit
+3rdparty/** @tqchen @jroesch
+licenses/** @tqchen @jroesch
 
-# TE
-**/te/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
+# JVM language
+jvm/** @yzhliu
 
-# Target
-**/target/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi
+# Golang
+golang/** @srkreddy1238
 
-# ARITH and simplifier
-**/arith/** @tqchen @junrushao1994 @vinx13
+# WASM
+web/** @tqchen @jroesch
 
-# parser
-**/parser/** @jroesch @slyubomirsky
+# Docker
+docker/** @areusch @leandron @jroesch
 
-# Runtime
-**/runtime/** @vinx13 @tqchen @FronzenGene @liangfu @areusch @tmoreau89 @ajtulloch @masahi @kazum @ZihengJiang @junrushao1994
+# Conda
+conda/** @tqchen @junrushao1994 @comaniac
 
-# micro TVM
-**/micro/** @areusch @liangfu @tmoreau89
+# CMake
+cmake/** @jroesch @tqchen @areusch @junrushao1994 @comaniac
 
+# rust bindings
+rust/** @jroesch @nhynes @nhynes
 
-# VTA
+# vta
 vta/** @tmoreau89 @vegaluisjose
 
+# docs
+docs/** @comaniac @junrushao1994 @tqchen @jroesch @areusch @yzhliu @merrymercy @icemelon9
+tutorials/** @comaniac @junrushao1994 @tqchen @jroesch @areusch @yzhliu @merrymercy @icemelon9
 
-# relay
-**/relay/** @jroesch @slyubomirsky @icemelon9 @MarisaKirisame @ZihengJiang @yzhliu @vinx13 @mbrookhart @jwfromm @zhiics @anijain2305 @wweic @eqy @junrushao1994
+# tests
+tests/** @comaniac @junrushao1994 @tqchen @jroesch @areusch @yzhliu @merrymercy @icemelon9
 
+##############################
+# Specific modules
+##############################
 
-# quantization and QNN
-**/qnn/** @jwfromm @anijain2305 @ZihengJiang
+# automation related
+src/auto_scheduler/** @merrymercy @jcf94 @comaniac @junrushao1994 @vinx13
+include/tvm/auto_scheduler/** @merrymercy @jcf94 @comaniac @junrushao1994 @vinx13
+python/tvm/auto_scheduler/** @merrymercy @jcf94 @comaniac @junrushao1994 @vinx13
+python/tvm/autotvm/** @merrymercy @jcf94 @comaniac @junrushao1994 @vinx13
 
-# BYOC
-src/relay/backend/contrib/** @zhiics @trevor-m @comaniac @mbaret
+# node system and reflection
+src/node/** @junrushao1994 @vinx13 @tqchen @jroesch @comaniac
+include/tvm/node/** @junrushao1994 @vinx13 @tqchen @jroesch @comaniac
 
+# ir: Common IR
+src/ir/** @junrushao1994 @vinx13 @tqchen @jroesch @comaniac
+include/tvm/ir/** @junrushao1994 @vinx13 @tqchen @jroesch @comaniac
+python/tvm/ir/** @junrushao1994 @vinx13 @tqchen @jroesch @comaniac
 
-# TOPI
-**/topi/** @Laurawly @Huyuwei @kevinthesun @jwfromm @vinx13 @masahi @FronzenGene @yzhliu @mbrookhart @ZihengJiang @jcf94
+# tir
+src/tir/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
+include/tvm/tir/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
+python/tvm/tir/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
 
-# frontends
-python/tvm/relay/frontend/** @jwfromm @mbrookhart @srkreddy1238 @siju-samuel @Huyuwei @hlu1 @kazum @PariksheetPinjari909
+# te
+src/te/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
+include/tvm/te/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
+python/tvm/te/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi @were
 
+# target
+src/target/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi
+include/tvm/target/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi
+python/tvm/target/** @junrushao1994 @vinx13 @tqchen @kparzysz-quic @ZihengJiang @masahi
 
-# TVMC
-python/tvm/driver/tvmc/** @leandron
+# arith: Arithmetic module and simplifiers
+src/arith/** @tqchen @junrushao1994 @vinx13
+include/tvm/arith/** @tqchen @junrushao1994 @vinx13
+python/tvm/arith/** @tqchen @junrushao1994 @vinx13
 
-# Docker
-docker/** @areusch @leandron @jroesch
+# parser
+src/parser/** @jroesch @slyubomirsky
 
-# Conda
-conda/** @tqchen @junrushao1994 @comaniac
+# runtime
+src/runtime/** @vinx13 @tqchen @FronzenGene @liangfu @areusch @tmoreau89 @ajtulloch @masahi @kazum @ZihengJiang @junrushao1994
+include/tvm/runtime/** @vinx13 @tqchen @FronzenGene @liangfu @areusch @tmoreau89 @ajtulloch @masahi @kazum @ZihengJiang @junrushao1994
+python/tvm/runtime/** @vinx13 @tqchen @FronzenGene @liangfu @areusch @tmoreau89 @ajtulloch @masahi @kazum @ZihengJiang @junrushao1994
 
-# CMake
-cmake/** @jroesch @tqchen @areusch @junrushao1994 @comaniac
+# runtime/micro
+src/runtime/micro/** @areusch @liangfu @tmoreau89
+src/runtime/crt/** @areusch @liangfu @tmoreau89
+include/tvm/runtime/crt/** @areusch @liangfu @tmoreau89
+include/tvm/runtime/micro/** @areusch @liangfu @tmoreau89
+python/tvm/micro/** @areusch @liangfu @tmoreau89
 
-# rust bindings
-rust/** @jroesch @nhynes @nhynes
-
-# docs
-docs/** @comaniac @junrushao1994 @tqchen @jroesch @areusch @yzhliu @merrymercy @icemelon9
-tutorials/** @comaniac @junrushao1994 @tqchen @jroesch @areusch @yzhliu @merrymercy @icemelon9
+# relay
+src/relay/** @jroesch @slyubomirsky @icemelon9 @MarisaKirisame @ZihengJiang @yzhliu @vinx13 @mbrookhart @jwfromm @zhiics @anijain2305 @wweic @eqy @junrushao1994
+include/tvm/relay/** @jroesch @slyubomirsky @icemelon9 @MarisaKirisame @ZihengJiang @yzhliu @vinx13 @mbrookhart @jwfromm @zhiics @anijain2305 @wweic @eqy @junrushao1994
+python/tvm/relay/** @jroesch @slyubomirsky @icemelon9 @MarisaKirisame @ZihengJiang @yzhliu @vinx13 @mbrookhart @jwfromm @zhiics @anijain2305 @wweic @eqy @junrushao1994
 
-# Tests
-tests/** @comaniac @junrushao1994 @tqchen @jroesch @areusch @yzhliu @merrymercy @icemelon9
 
-# JVM language
-jvm/** @yzhliu
+# relay/qnn
+src/relay/qnn/** @jwfromm @anijain2305 @ZihengJiang
+include/tvm/relay/qnn/** @jwfromm @anijain2305 @ZihengJiang
+python/tvm/relay/qnn/** @jwfromm @anijain2305 @ZihengJiang
 
-# Golang
-golang/** @srkreddy1238
+# relay/backend/contrib: BYOC
+src/relay/backend/contrib/** @zhiics @trevor-m @comaniac @mbaret
 
+# relay/frontends
+python/tvm/relay/frontend/** @jwfromm @mbrookhart @srkreddy1238 @siju-samuel @Huyuwei @hlu1 @kazum @PariksheetPinjari909
 
-# WASM
-web/** @tqchen @jroesch
+# topi: Operator definitions
+src/topi/** @Laurawly @Huyuwei @kevinthesun @jwfromm @vinx13 @masahi @FronzenGene @yzhliu @mbrookhart @ZihengJiang @jcf94
+include/tvm/topi/** @Laurawly @Huyuwei @kevinthesun @jwfromm @vinx13 @masahi @FronzenGene @yzhliu @mbrookhart @ZihengJiang @jcf94
+python/tvm/topi/** @Laurawly @Huyuwei @kevinthesun @jwfromm @vinx13 @masahi @FronzenGene @yzhliu @mbrookhart @ZihengJiang @jcf94
 
-# Fallbacks
-include/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
-src/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
-apps/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
-python/** @tqchen @jroesch @yzhliu @icemelon9 @junrushao1994 @comaniac @zhiics
+# tvm/driver/
+python/tvm/driver/** @leandron @jwfromm @tqchen @jroesch
 
-# Thirdparty license audit
-3rdparty/** @tqchen @jroesch
-licenses/** @tqchen @jroesch
+# tvm/driver/tvmc
+python/tvm/driver/tvmc/** @leandron @jwfromm
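The storage-scope changes in the C++ and Python patches above all follow one pattern: call sites stop reading the buffer's raw `scope` field (where an empty string implicitly meant "global") and instead call a `scope()` accessor that applies the default in one place. A minimal illustrative sketch of that pattern, using a hypothetical `MockBuffer` class rather than TVM's actual `Buffer`:

```python
# Illustrative sketch only -- MockBuffer and its fields are hypothetical
# stand-ins, not TVM's Buffer implementation.


class MockBuffer:
    def __init__(self, name, scope=""):
        self.name = name
        self._scope = scope  # raw attribute; may be the empty string

    def scope(self):
        # Centralized defaulting: an unset scope means "global", decided
        # here once instead of at every call site.
        return self._scope if self._scope else "global"


def storage_scope_old(buf):
    # Pre-refactor pattern: every call site repeated the empty-string check
    # (as flatten_buffer.cc did before the patch).
    scope = buf._scope
    if not scope:
        scope = "global"
    return scope


def storage_scope_new(buf):
    # Post-refactor pattern: call sites just use the accessor,
    # mirroring dst.scope() / buffer.scope() in the diffs above.
    return buf.scope()


if __name__ == "__main__":
    b = MockBuffer("acc_buffer")
    assert storage_scope_old(b) == storage_scope_new(b) == "global"
    b2 = MockBuffer("wgt_buffer", scope="wgt_scope")
    assert storage_scope_new(b2) == "wgt_scope"
```

Moving the default into the accessor is why the VTA `_inject_copy` hunks could switch every `src.scope` / `dst.scope` read to a call without adding any fallback logic of their own.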