
Commit

Merge branch 'master' into benchmark
* master:
  Fix split's last factor issue (apache#4044)
  [COMMUNITY] ajtulloch -> committer (apache#4043)
  [TOPI]Add op argwhere (apache#3994)
  [topi] add ARM v8.2 udot (uint8) support (apache#3978)
  [COMMUNITY] anijain2305 -> reviewer (apache#4036)
  [QNN] Renaming dense operator. (apache#4033)
  [Relay][Compile_engine] Int64 shape handling for outputs. (apache#4031)
  Add dmlc-core to the list of installed header directories. (apache#4035)
  [ARITH] migrate indexdiv/mod to floordiv/mod (apache#4008)
  [Relay] Move prelude to text format (apache#3939)
  make tvm compilable by gcc 4.9.2 (apache#4032)
  [AUTOTVM][DOCS] Add a link to the defining network description of auto-tuning tutorial (apache#4023)
  [ARITH] cleanup the indexmod/div on python side (apache#4028)
  [Fix] Add more pad_mode support for onnx converter (apache#4029)
  Add parser support for ReLU tflite operator (apache#4022)
  Additional MXNet Convolution and Deconvolution tests (apache#4026)
  docs: minor spelling tweaks (apache#4027)
petrex committed Oct 2, 2019
2 parents 0cb837f + 2d53762 commit ed4c185
Showing 93 changed files with 2,637 additions and 1,466 deletions.
5 changes: 5 additions & 0 deletions CMakeLists.txt
Original file line number Diff line number Diff line change
@@ -361,6 +361,11 @@ if (INSTALL_DEV)
FILES_MATCHING
PATTERN "*.h"
)
+install(
+  DIRECTORY "3rdparty/dmlc-core/include/." DESTINATION "include"
+  FILES_MATCHING
+  PATTERN "*.h"
+  )
install(
DIRECTORY "nnvm/include/." DESTINATION "include"
FILES_MATCHING
2 changes: 2 additions & 0 deletions CONTRIBUTORS.md
@@ -55,6 +55,7 @@ We do encourage everyone to work anything they are interested in.
- [Siva](https://github.com/srkreddy1238): @srkreddy1238 - frontends, golang
- [Haichen Shen](https://github.com/icemelon9) (PMC): @icemelon9 - relay, topi
- [Zhixun Tan](https://github.com/phisiart): @phisiart - opengl, web
+- [Andrew Tulloch](https://github.com/ajtulloch): @ajtulloch - topi, compiler, runtime
- [Leyuan Wang](https://github.com/Laurawly): @Laurawly: - topi
- [Yao Wang](https://github.com/kevinthesun): @kevinthesun: - topi, vision
- [Jian Weng](https://github.com/were): @were: - hybrid script
@@ -74,6 +75,7 @@ We do encourage everyone to work anything they are interested in.
- [Hao Lu](https://github.com/hlu1): @hlu1
- [Nick Hynes](https://github.com/nhynes): @nhynes
- [Yuwei Hu](https://github.com/Huyuwei): @Huyuwei
+- [Animesh Jain](https://github.com/anijain2305): @anijain2305
- [Yizhi Liu](https://github.com/yzhliu) : @yzhliu
- [Zhixun Tan](https://github.com/phisiart): @phisiart
- [Zhi Chen](https://github.com/zhiics): @zhiics
2 changes: 1 addition & 1 deletion docs/contribute/code_review.rst
@@ -22,7 +22,7 @@ Perform Code Reviews

This is a general guideline for code reviewers. First of all, while it is great to add new features to a project, we must also be aware that each line of code we introduce also brings **technical debt** that we may have to eventually pay.

-Open source code is maintained by a community with diverse backend, and it is even more important to bring clear, documented and maintainable code. Code reviews are shepherding process to spot potential problems, improve quality of the code. We should, however, not rely on code review process to get the code into a ready state. Contributors are encouraged to polish the code to a ready state before requesting reviews. This is especially expected for code owner and comitter candidates.
+Open source code is maintained by a community with diverse backend, and it is even more important to bring clear, documented and maintainable code. Code reviews are shepherding process to spot potential problems, improve quality of the code. We should, however, not rely on code review process to get the code into a ready state. Contributors are encouraged to polish the code to a ready state before requesting reviews. This is especially expected for code owner and committer candidates.

Here are some checklists for code reviews, it is also helpful reference for contributors

2 changes: 1 addition & 1 deletion docs/contribute/error_handling.rst
@@ -107,7 +107,7 @@ error messages when necessary.
def preferred():
# Very clear about what is being raised and what is the error message.
-    raise OpNotImplemented("Operator relu is not implemented in the MXNet fronend")
+    raise OpNotImplemented("Operator relu is not implemented in the MXNet frontend")
def _op_not_implemented(op_name):
    return OpNotImplemented("Operator {} is not implemented.".format(op_name))
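The documented pattern can be condensed into a runnable sketch. The factory helper mirrors the doc's example; deriving `OpNotImplemented` from `NotImplementedError` is an assumption of this sketch, not the actual TVM class hierarchy.

```python
# Sketch of the documented pattern: a dedicated exception type plus a small
# factory helper that formats the message before constructing the exception.
class OpNotImplemented(NotImplementedError):
    """Raised when a frontend meets an operator it cannot convert."""

def _op_not_implemented(op_name):
    # Build the message first, then wrap it in the exception type.
    return OpNotImplemented("Operator {} is not implemented.".format(op_name))

try:
    raise _op_not_implemented("relu")
except OpNotImplemented as err:
    message = str(err)

print(message)  # Operator relu is not implemented.
```

Keeping the message construction inside one helper makes every frontend raise the same, greppable error text.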
2 changes: 1 addition & 1 deletion docs/contribute/pull_request.rst
@@ -51,7 +51,7 @@ We use docker container to create stable CI environments
that can be deployed to multiple machines.
You can find the prebuilt images in `<https://hub.docker.com/r/tvmai/>`_ .
Because we want a relatively stable CI environment and make use of pre-cached image,
-all of the CI images are built and maintained by comitters.
+all of the CI images are built and maintained by committers.

Upgrade of CI images can cause problems and need fixes to accommodate the new env.
Here is the protocol to update CI image:
4 changes: 2 additions & 2 deletions docs/deploy/android.md
@@ -22,7 +22,7 @@

NNVM compilation of model for android target could follow same approach like android_rpc.

-An reference exampe can be found at [chainer-nnvm-example](https://github.com/tkat0/chainer-nnvm-example)
+An reference example can be found at [chainer-nnvm-example](https://github.com/tkat0/chainer-nnvm-example)

Above example will directly run the compiled model on RPC target. Below modification at [rum_mobile.py](https://github.com/tkat0/chainer-nnvm-example/blob/5b97fd4d41aa4dde4b0aceb0be311054fb5de451/run_mobile.py#L64) will save the compilation output which is required on android target.

@@ -39,4 +39,4 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
## TVM Runtime for Android Target

Refer [here](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build CPU/OpenCL version flavor TVM runtime for android target.
-From android java TVM API to load model & execute can be refered at this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source.
+From android java TVM API to load model & execute can be referred at this [java](https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/java/ml/dmlc/tvm/android/demo/MainActivity.java) sample source.
2 changes: 1 addition & 1 deletion docs/dev/hybrid_script.rst
@@ -83,7 +83,7 @@ In HalideIR, loops have in total 4 types: ``serial``, ``unrolled``, ``parallel``
Variables
~~~~~~~~~

-Because there is no variables in ``HalideIR``, all the mutatable variables will be lowered to an array with size 1.
+Because there is no variables in ``HalideIR``, all the mutable variables will be lowered to an array with size 1.
It takes the first store of a variable as its declaration.
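As an illustration of this lowering rule (plain Python, not actual HalideIR), a mutable scalar becomes a one-element buffer whose first store serves as its declaration:

```python
# A mutable scalar is represented as an array of size 1; all later reads
# and writes go through index 0.
def lowered_sum(a):
    s = [0.0]                  # first store of `s` -> declared as size-1 array
    for i in range(len(a)):
        s[0] = s[0] + a[i]     # every subsequent access uses s[0]
    return s[0]

total = lowered_sum([1.0, 2.0, 3.0])   # -> 6.0
```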

Math Intrinsics
8 changes: 4 additions & 4 deletions docs/dev/inferbound.rst
@@ -71,7 +71,7 @@ A TVM schedule is composed of Stages. Each stage has exactly one Operation, e.g.
Array<Operation> outputs;
Array<Stage> stages;
Map<Operation, Stage> stage_map;
-// remainder ommitted
+// remainder omitted
};
class StageNode : public Node {
@@ -81,14 +81,14 @@ A TVM schedule is composed of Stages. Each stage has exactly one Operation, e.g.
Array<IterVar> all_iter_vars;
Array<IterVar> leaf_iter_vars;
Array<IterVarRelation> relations;
-// remainder ommitted
+// remainder omitted
};
class OperationNode : public Node {
public:
virtual Array<IterVar> root_iter_vars();
virtual Array<Tensor> InputTensors();
-// remainder ommitted
+// remainder omitted
};
class ComputeOpNode : public OperationNode {
@@ -97,7 +97,7 @@ A TVM schedule is composed of Stages. Each stage has exactly one Operation, e.g.
Array<IterVar> reduce_axis;
Array<Expr> body;
Array<IterVar> root_iter_vars();
-// remainder ommitted
+// remainder omitted
};
}
2 changes: 1 addition & 1 deletion docs/dev/relay_pass_infra.rst
@@ -83,7 +83,7 @@ more details). For example, during registration of a pass (will be covered in
later), the pass developers can specify the name of the pass, the optimization
level it will be performed at, and/or the passes that are required.
``opt_level`` could be used to help the pass infra identify if a certain pass
-needes to be executed when running under a user-provided optimization level. The
+needs to be executed when running under a user-provided optimization level. The
``required`` field can be used by the pass infra to resolve pass dependencies.
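A toy model of that interaction is sketched below. This is illustrative only, not the actual pass-infra API; the pass names are real Relay passes, but the `opt_level` values and dependency set here are assumptions for the example.

```python
# Each pass declares an opt_level and its required passes. The infra skips
# passes above the user's level and schedules dependencies first.
PASSES = {
    "InferType":    {"opt_level": 0, "required": []},
    "FuseOps":      {"opt_level": 1, "required": ["InferType"]},
    "FoldConstant": {"opt_level": 2, "required": []},
}

def schedule(names, user_level, out=None):
    out = [] if out is None else out
    for name in names:
        info = PASSES[name]
        if info["opt_level"] > user_level:
            continue                                  # disabled at this level
        schedule(info["required"], user_level, out)   # dependencies run first
        if name not in out:
            out.append(name)
    return out

order = schedule(["FuseOps", "FoldConstant"], user_level=1)
# -> ['InferType', 'FuseOps']; FoldConstant is dropped at opt level 1
```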

.. code:: c++
2 changes: 1 addition & 1 deletion docs/faq.md
@@ -31,7 +31,7 @@ This representation is high level, and can be helpful to perform generic optimiz
such as memory reuse, layout transformation and automatic differentiation.

TVM adopts a low level representation, that explicitly express the choice of memory
-layout, parallelization pattern, locality and hardware primtives etc.
+layout, parallelization pattern, locality and hardware primitives etc.
This level of IR is closer to directly target hardwares.
The low level IR adopt ideas from existing image processing languages like Halide, darkroom
and loop transformation tools like loopy and polyhedra based analysis.
2 changes: 1 addition & 1 deletion docs/install/from_source.rst
@@ -86,7 +86,7 @@ The configuration of TVM can be modified by `config.cmake`.

- TVM optionally depends on LLVM. LLVM is required for CPU codegen that needs LLVM.

-- LLVM 4.0 or higher is needed for build with LLVM. Note that verison of LLVM from default apt may lower than 4.0.
+- LLVM 4.0 or higher is needed for build with LLVM. Note that version of LLVM from default apt may lower than 4.0.
- Since LLVM takes long time to build from source, you can download pre-built version of LLVM from
`LLVM Download Page <http://releases.llvm.org/download.html>`_.

4 changes: 2 additions & 2 deletions docs/langref/hybrid_script.rst
@@ -130,7 +130,7 @@ Users can access containers by either constants or constants loops annotated.
Variables
~~~~~~~~~

-All the mutatable variables will be lowered to an array with size 1.
+All the mutable variables will be lowered to an array with size 1.
It regards the first store of a variable as its declaration.

.. note::
@@ -158,7 +158,7 @@ Attributes
~~~~~~~~~~

So far, ONLY tensors' ``shape`` and ``dtype`` attribute are supported!
-The ``shape`` atrribute is essentailly a tuple, so you MUST access it as an array.
+The ``shape`` attribute is essentially a tuple, so you MUST access it as an array.
Currently, only constant-indexed access is supported.
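A minimal sketch of that constraint, using a plain-Python stand-in for a hybrid-script tensor argument (the `Tensor` class here is hypothetical, not the TVM type):

```python
class Tensor:  # hypothetical stand-in for a hybrid-script tensor argument
    def __init__(self, shape, dtype):
        self.shape = tuple(shape)   # `shape` behaves like a tuple
        self.dtype = dtype

a = Tensor((4, 8), "float32")
rows = a.shape[0]   # OK: constant-indexed access
cols = a.shape[1]   # OK
# `rows, cols = a.shape` would rely on unpacking, which the doc rules out
```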

.. code-block:: python
4 changes: 2 additions & 2 deletions docs/langref/relay_expr.rst
@@ -26,7 +26,7 @@ Dataflow and Control Fragments
==============================

For the purposes of comparing Relay to traditional computational graph-based IRs, it
-can be useful to consider Relay exrpessions in terms of dataflow and control fragments.
+can be useful to consider Relay expressions in terms of dataflow and control fragments.
Each portion of a Relay program containing expressions that only affect the dataflow can
be viewed as a traditional computation graph when writing and expressing transformations.

@@ -88,7 +88,7 @@ expression where it is bound, respectively.
In the below code segment, notice that :code:`%a` is defined twice. This is
permitted, as in most functional languages; in the scope of the second
:code:`let` expression, the name :code:`%a` is "shadowed," meaning all
-references to :code:`%a` in the inner scope refer to the later defintion, while
+references to :code:`%a` in the inner scope refer to the later definition, while
references to :code:`%a` in the outer scope continue to refer to
the first one.
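The shadowing behaviour can be simulated with environment dictionaries in plain Python. This is a sketch of the semantics, not Relay's implementation:

```python
def let(env, name, value, body):
    inner = dict(env)        # a child scope copies the outer environment
    inner[name] = value      # an inner %a hides (shadows) any outer %a
    return body(inner)

# let %a = 1 in (let %a = %a + 1 in %a + %a) + %a
result = let({}, "%a", 1,
             lambda outer: let(outer, "%a", outer["%a"] + 1,
                               lambda inner: inner["%a"] + inner["%a"])
                           + outer["%a"])   # the outer %a is still 1 here
# inner %a = 2, so the inner body is 4; adding the outer %a gives 5
```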

4 changes: 2 additions & 2 deletions docs/langref/relay_type.rst
@@ -290,7 +290,7 @@ parameters must be treated as different types) and be
recursive (a constructor for an ADT can take an instance of
that ADT, thus an ADT like a tree or list can be inductively
built up). The representation of ADTs in the type system must
-be able to accomodate these facts, as the below sections will detail.
+be able to accommodate these facts, as the below sections will detail.

Global Type Variable
~~~~~~~~~~~~~~~~~~~~
@@ -316,7 +316,7 @@ Definitions (Type Data)
~~~~~~~~~~~~~~~~~~~~~~~

Besides a name, an ADT needs to store the constructors that are used
-to define it and any type paramters used within them. These are
+to define it and any type parameters used within them. These are
stored in the module, :ref:`analogous to global function definitions<module-description>`.

While type-checking uses of ADTs, the type system sometimes must
4 changes: 2 additions & 2 deletions docs/vta/dev/config.rst
@@ -68,7 +68,7 @@ below.
We provide additional detail below regarding each parameter:

- ``TARGET``: Can be set to ``"pynq"``, ``"ultra96"``, ``"sim"`` (fast simulator), or ``"tsim"`` (cycle accurate sim with verilator).
-- ``HW_VER``: Hardware version which increments everytime the VTA hardware design changes. This parameter is used to uniquely idenfity hardware bitstreams.
+- ``HW_VER``: Hardware version which increments every time the VTA hardware design changes. This parameter is used to uniquely identity hardware bitstreams.
- ``LOG_BATCH``: Equivalent to A in multiplication of shape (A, B) x (B, C), or typically, the batch dimension of inner tensor computation.
-- ``LOG_BLOCK``: Equivalent to B and C in multiplication of shape (A, B) x (B, C), or typically, the input/output channel dimensions of the innter tensor computation.
+- ``LOG_BLOCK``: Equivalent to B and C in multiplication of shape (A, B) x (B, C), or typically, the input/output channel dimensions of the inner tensor computation.
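The log2 parameters above translate directly into the tile shapes of the inner GEMM computation. A hedged sketch with example values (these are illustrative, not a shipped configuration):

```python
LOG_BATCH, LOG_BLOCK = 0, 4     # example values, not a default config

batch = 1 << LOG_BATCH          # A in (A, B) x (B, C)
block = 1 << LOG_BLOCK          # B and C
inp_shape = (batch, block)      # (A, B) input tile
wgt_shape = (block, block)      # (B, C) weight tile
out_shape = (batch, block)      # (A, C) output tile, here (1, 16)
```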

2 changes: 1 addition & 1 deletion docs/vta/install.md
@@ -202,7 +202,7 @@ Before powering up the device, we need to flash the microSD card image with late
#### Flash SD Card and Boot Angstrom Linux

To flash SD card and boot Linux on DE10-Nano, it is recommended to navigate to the [Resource](https://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=1046&PartNo=4) tab of the DE10-Nano product page from Terasic Inc.
-After registeration and login on the webpage, the prebuild Angstrom Linux image would be available for downloading and flashing.
+After registration and login on the webpage, the prebuilt Angstrom Linux image would be available for downloading and flashing.
Specifically, to flash the downloaded Linux SD card image into your physical SD card:

First, extract the gzipped archive file.
2 changes: 1 addition & 1 deletion include/tvm/expr.h
@@ -92,7 +92,7 @@ class Var;
/*!
* \brief A variable node in the IR.
*
-* A vraible is uniquely identified by its address.
+* A variable is uniquely identified by its address.
*
* Each variable is only binded once in the following nodes:
* - Allocate
6 changes: 6 additions & 0 deletions include/tvm/relay/attrs/transform.h
@@ -314,6 +314,12 @@ struct OneHotAttrs : public tvm::AttrsNode<OneHotAttrs> {
}
}; // struct OneHotAttrs

+/*! \brief Attributes for ArgWhere operator */
+struct ArgWhereAttrs : public tvm::AttrsNode<ArgWhereAttrs> {
+  TVM_DECLARE_ATTRS(ArgWhereAttrs, "relay.attrs.ArgWhereAttrs") {
+  }
+};  // struct ArgWhereAttrs
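For context, the `argwhere` op added in apache#3994 returns the coordinates of all non-zero elements. A plain-Python sketch of its 2-D semantics (the helper name is this sketch's, not the op's implementation):

```python
def argwhere_2d(data):
    # Collect [row, col] for every non-zero entry, in row-major order.
    return [[i, j]
            for i, row in enumerate(data)
            for j, value in enumerate(row)
            if value != 0]

indices = argwhere_2d([[0, 1],
                       [2, 0]])   # -> [[0, 1], [1, 0]]
```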

} // namespace relay
} // namespace tvm
#endif // TVM_RELAY_ATTRS_TRANSFORM_H_
3 changes: 2 additions & 1 deletion include/tvm/relay/expr_functor.h
@@ -117,7 +117,8 @@ class ExprFunctor<R(const Expr& n, Args...)> {
virtual R VisitExpr_(const ConstructorNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
virtual R VisitExpr_(const MatchNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
virtual R VisitExprDefault_(const Node* op, Args...) {
-    throw Error(std::string("Do not have a default for ") + op->type_key());
+    LOG(FATAL) << "Do not have a default for " << op->type_key();
+    throw;
}

private:
44 changes: 38 additions & 6 deletions include/tvm/relay/module.h
@@ -87,21 +87,34 @@ class ModuleNode : public RelayNode {
*/
TVM_DLL void Add(const GlobalVar& var, const Function& func, bool update = false);

+/*!
+ * \brief Add a function to the global environment.
+ * \param var The name of the global function.
+ * \param func The function.
+ *
+ * It does not do type inference as Add does.
+ */
+TVM_DLL void AddUnchecked(const GlobalVar& var, const Function& func);

/*!
* \brief Add a type-level definition to the global environment.
* \param var The var of the global type definition.
-* \param type The type definition.
+* \param type The ADT.
+* \param update Controls whether you can replace a definition in the
+* environment.
*/
-TVM_DLL void AddDef(const GlobalTypeVar& var, const TypeData& type);
+TVM_DLL void AddDef(const GlobalTypeVar& var, const TypeData& type, bool update = false);

/*!
-* \brief Add a function to the global environment.
+* \brief Add a type definition to the global environment.
* \param var The name of the global function.
-* \param func The function.
+* \param type The ADT.
+* \param update Controls whether you can replace a definition in the
+* environment.
*
-* It does not do type inference as Add does.
+* It does not do type inference as AddDef does.
*/
-TVM_DLL void AddUnchecked(const GlobalVar& var, const Function& func);
+TVM_DLL void AddDefUnchecked(const GlobalTypeVar& var, const TypeData& type, bool update = false);

/*!
* \brief Update a function in the global environment.
Expand All @@ -110,6 +123,13 @@ class ModuleNode : public RelayNode {
*/
TVM_DLL void Update(const GlobalVar& var, const Function& func);

+/*!
+ * \brief Update a type definition in the global environment.
+ * \param var The name of the global type definition to update.
+ * \param type The new ADT.
+ */
+TVM_DLL void UpdateDef(const GlobalTypeVar& var, const TypeData& type);

/*!
* \brief Remove a function from the global environment.
* \param var The name of the global function to update.
@@ -130,13 +150,25 @@
*/
TVM_DLL GlobalVar GetGlobalVar(const std::string& str) const;

+/*!
+ * \brief Collect all global vars defined in this module.
+ * \returns An array of global vars
+ */
+tvm::Array<GlobalVar> GetGlobalVars() const;

/*!
* \brief Look up a global function by its name.
* \param str The unique string specifying the global variable.
* \returns The global variable.
*/
TVM_DLL GlobalTypeVar GetGlobalTypeVar(const std::string& str) const;

+/*!
+ * \brief Collect all global type vars defined in this module.
+ * \returns An array of global type vars
+ */
+tvm::Array<GlobalTypeVar> GetGlobalTypeVars() const;

/*!
* \brief Look up a global function by its variable.
* \param var The global var to lookup.
3 changes: 2 additions & 1 deletion include/tvm/relay/pattern_functor.h
@@ -103,7 +103,8 @@ class PatternFunctor<R(const Pattern& n, Args...)> {
virtual R VisitPattern_(const PatternTupleNode* op,
Args... args) PATTERN_FUNCTOR_DEFAULT;
virtual R VisitPatternDefault_(const Node* op, Args...) {
-    throw Error(std::string("Do not have a default for ") + op->type_key());
+    LOG(FATAL) << "Do not have a default for " << op->type_key();
+    throw;
}

private:
3 changes: 2 additions & 1 deletion python/tvm/autotvm/task/space.py
@@ -226,7 +226,8 @@ def __init__(self, axes, policy, **kwargs):
def _generate_space(self, now, tmp_stack, enforce_no_tail=False):
"""Generate space by DFS"""
if now == self.num_output - 1:
-            if not enforce_no_tail or self.product % np.prod(tmp_stack, dtype=np.int64) == 0:
+            prod = np.prod(tmp_stack, dtype=np.int64)
+            if self.product % prod == 0 or (not enforce_no_tail and prod < self.product):
self.entities.append(SplitEntity([-1] + tmp_stack[::-1]))
else:
for factor in self.factors:
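The tail condition fixed above ("Fix split's last factor issue", apache#4044) can be isolated in a small sketch. This is simplified from `SplitSpace`: `math.prod` substitutes for `np.prod`, and the helper name is this sketch's own.

```python
import math

def keep_candidate(product, tmp_stack, enforce_no_tail=False):
    # Keep a split whose known factors divide the extent exactly, or,
    # when tails are allowed, whose product still fits inside the extent.
    prod = math.prod(tmp_stack)
    return product % prod == 0 or (not enforce_no_tail and prod < product)

ok_exact = keep_candidate(16, [4, 2])                        # 8 divides 16
ok_tail  = keep_candidate(16, [3, 2])                        # 6 < 16, tail allowed
rejected = keep_candidate(16, [3, 2], enforce_no_tail=True)  # False
```

The old condition accepted any candidate whenever `enforce_no_tail` was false, even ones whose product exceeded the extent; the fix rejects those.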
4 changes: 3 additions & 1 deletion python/tvm/autotvm/task/task.py
@@ -350,7 +350,9 @@ def _count_flop(exp):
return _count_flop(exp.value)
if isinstance(exp, expr.Var):
return 0
-    if isinstance(exp, (expr.Add, expr.Sub, expr.Mul, expr.Div, expr.Mod,
+    if isinstance(exp, (expr.Add, expr.Sub, expr.Mul,
+                        expr.Div, expr.Mod,
+                        expr.FloorDiv, expr.FloorMod,
expr.Max, expr.Min,
expr.EQ, expr.NE, expr.LT, expr.LE, expr.GT, expr.GE,
expr.And, expr.Or, expr.Not)):
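The recursion in `_count_flop` can be sketched with toy node classes. These classes are stand-ins for the `tvm.expr` types, and the real function also handles casts, calls, selects, and more; the point is that each binary arithmetic node (now including `FloorDiv`/`FloorMod`) costs one op plus the cost of its operands.

```python
class Const:                       # literals / loop variables: zero cost
    def __init__(self, value):
        self.value = value

class Binary:                      # stands in for Add, Sub, Mul, FloorDiv, ...
    def __init__(self, a, b):
        self.a, self.b = a, b

def count_flop(exp):
    if isinstance(exp, Const):
        return 0
    if isinstance(exp, Binary):    # one op plus the cost of both operands
        return 1 + count_flop(exp.a) + count_flop(exp.b)
    raise ValueError("unsupported expression node")

# (1 + 2) * (3 - 4): two inner ops plus the outer multiply -> 3
flops = count_flop(Binary(Binary(Const(1), Const(2)),
                          Binary(Const(3), Const(4))))
```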