Merge from apache/tvm to neo-ai/tvm #59

Merged 31 commits on Nov 25, 2019

Commits (changes shown from all commits)
956aee7
[TOPI][OP] Support Faster-RCNN Proposal OP on CPU (#4297)
FrozenGene Nov 13, 2019
d6cf282
[QNN][Legalize] Specialize for Platforms without any fast Int8 arithm…
anijain2305 Nov 13, 2019
aae9ebd
fix error when memory_id is VTA_MEM_ID_OUT (#4330)
jason-song-dev Nov 14, 2019
9f2a7d0
[CI][DOCKER] Add ONNX runtime dep (#4314)
tqchen Nov 14, 2019
9ab329e
[QNN] Quantize - Fixing the sequence of lowering. (#4316)
anijain2305 Nov 14, 2019
53b48df
[QNN] Use Int16 upcast in Fallback Conv2D. Fix test names. (#4329)
anijain2305 Nov 14, 2019
607c8ea
[doc][fix] fix sphinx parsing for pass infra tutorial (#4337)
zhiics Nov 14, 2019
5148587
change ci image version (#4313)
tqchen Nov 14, 2019
9651abb
[Codegen] remove fp16 function override for cuda (#4331)
yzhliu Nov 14, 2019
6e9c638
[CI] Set workspace to be per executor (#4336)
tqchen Nov 14, 2019
d0f43dc
[Build][Windows] Fix Windows build by including cctype (#4319)
soiferj Nov 14, 2019
5e61941
Enable hipModuleGetGlobal() (#4321)
petrex Nov 15, 2019
32266bb
[Relay][Pass] Add pass to remove unused functions in relay module (#4…
wweic Nov 15, 2019
52f1dea
Add support for quant. mul operator in tflite frontend (#4283)
inadob Nov 15, 2019
7e50a49
Add topi.nn.fifo_buffer to TVM doc (#4343)
hcho3 Nov 15, 2019
b6d6efc
Solve custom model of prelu (#4326)
FrozenGene Nov 15, 2019
7a7685f
Deprecate NNVM warning msg (#4333)
yzhliu Nov 15, 2019
c84baa4
[Contrib] Add MKL DNN option (#4323)
icemelon Nov 15, 2019
584df60
[Relay][Frontend][TF] Fix transpose when axes is not a param (#4327)
soiferj Nov 15, 2019
aa00977
[RUNTIME] Add device query for AMD GcnArch (#4341)
petrex Nov 15, 2019
42f8016
[Test][Relay][Pass] Add test case for lambda lift (#4317)
wweic Nov 15, 2019
a3b8499
[Relay][Frontend][ONNX] operator support: DepthToSpace, SpaceToDepth …
cchung100m Nov 15, 2019
8bf20c4
imp module is deprecated (#4275)
were Nov 15, 2019
0be8e56
[VTA] Bug fix for padded load with large inputs (#4293)
liangfu Nov 15, 2019
a0df146
fix inconsistent tag name (#4134)
ziyu-guo Nov 15, 2019
41b65ef
[CodeGen] Add build config option disable_assert to control whether t…
FrozenGene Nov 15, 2019
fcd881f
Bump up CUDA log version in tophub.py (#4347)
alexgl-github Nov 15, 2019
7fabde9
Add check to ensure input file was successfully opened in NNVM deploy…
tweej Nov 15, 2019
4d5c586
[COMMUNITY] Add DISCLAIMER, KEYS for ASF release (#4345)
tqchen Nov 15, 2019
48acb45
[Relay][VM][Interpreter] Enable first-class constructors in VM and in…
weberlo Nov 15, 2019
800f494
Update dmlc_tvm_commit_id.txt
Nov 15, 2019
1 change: 1 addition & 0 deletions CMakeLists.txt
@@ -53,6 +53,7 @@ tvm_option(PICOJSON_PATH "Path to PicoJSON" "3rdparty/picojson")
# Contrib library options
tvm_option(USE_BLAS "The blas library to be linked" none)
tvm_option(USE_MKL_PATH "MKL root path when use MKL blas" none)
tvm_option(USE_MKLDNN "Build with MKLDNN" OFF)
tvm_option(USE_CUDNN "Build with cuDNN" OFF)
tvm_option(USE_CUBLAS "Build with cuBLAS" OFF)
tvm_option(USE_MIOPEN "Build with ROCM:MIOpen" OFF)
12 changes: 12 additions & 0 deletions DISCLAIMER
@@ -0,0 +1,12 @@
Apache TVM (incubating) is an effort undergoing incubation at The
Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC.

Incubation is required of all newly accepted
projects until a further review indicates that the
infrastructure, communications, and decision making process have
stabilized in a manner consistent with other successful ASF
projects.

While incubation status is not necessarily a reflection
of the completeness or stability of the code, it does indicate
that the project has yet to be fully endorsed by the ASF.
24 changes: 13 additions & 11 deletions Jenkinsfile
@@ -45,7 +45,7 @@
//

ci_lint = "tvmai/ci-lint:v0.51"
ci_gpu = "tvmai/ci-gpu:v0.55"
ci_gpu = "tvmai/ci-gpu:v0.56"
ci_cpu = "tvmai/ci-cpu:v0.54"
ci_i386 = "tvmai/ci-i386:v0.52"

@@ -64,6 +64,8 @@ docker_run = 'docker/bash.sh'
// timeout in minutes
max_time = 120

workspace = "workspace/exec_${env.EXECUTOR_NUMBER}"

// initialize source codes
def init_git() {
checkout scm
@@ -86,7 +88,7 @@ def init_git_win() {
stage("Sanity Check") {
timeout(time: max_time, unit: 'MINUTES') {
node('CPU') {
ws('workspace/tvm/sanity') {
ws("${workspace}/tvm/sanity") {
init_git()
sh "${docker_run} ${ci_lint} ./tests/scripts/task_lint.sh"
}
@@ -134,7 +136,7 @@ def unpack_lib(name, libs) {
stage('Build') {
parallel 'BUILD: GPU': {
node('GPUBUILD') {
ws('workspace/tvm/build-gpu') {
ws("${workspace}/tvm/build-gpu") {
init_git()
sh """
mkdir -p build
@@ -182,7 +184,7 @@ stage('Build') {
},
'BUILD: CPU': {
node('CPU') {
ws('workspace/tvm/build-cpu') {
ws("${workspace}/tvm/build-cpu") {
init_git()
sh """
mkdir -p build
@@ -213,7 +215,7 @@ stage('Build') {
},
'BUILD : i386': {
node('CPU') {
ws('workspace/tvm/build-i386') {
ws("${workspace}/tvm/build-i386") {
init_git()
sh """
mkdir -p build
@@ -238,7 +240,7 @@
stage('Unit Test') {
parallel 'python3: GPU': {
node('TensorCore') {
ws('workspace/tvm/ut-python-gpu') {
ws("${workspace}/tvm/ut-python-gpu") {
init_git()
unpack_lib('gpu', tvm_multilib)
timeout(time: max_time, unit: 'MINUTES') {
@@ -250,7 +252,7 @@ stage('Unit Test') {
},
'python3: i386': {
node('CPU') {
ws('workspace/tvm/ut-python-i386') {
ws("${workspace}/tvm/ut-python-i386") {
init_git()
unpack_lib('i386', tvm_multilib)
timeout(time: max_time, unit: 'MINUTES') {
@@ -263,7 +265,7 @@
},
'java: GPU': {
node('GPU') {
ws('workspace/tvm/ut-java') {
ws("${workspace}/tvm/ut-java") {
init_git()
unpack_lib('gpu', tvm_multilib)
timeout(time: max_time, unit: 'MINUTES') {
@@ -277,7 +279,7 @@
stage('Integration Test') {
parallel 'topi: GPU': {
node('GPU') {
ws('workspace/tvm/topi-python-gpu') {
ws("${workspace}/tvm/topi-python-gpu") {
init_git()
unpack_lib('gpu', tvm_multilib)
timeout(time: max_time, unit: 'MINUTES') {
@@ -288,7 +290,7 @@ stage('Integration Test') {
},
'frontend: GPU': {
node('GPU') {
ws('workspace/tvm/frontend-python-gpu') {
ws("${workspace}/tvm/frontend-python-gpu") {
init_git()
unpack_lib('gpu', tvm_multilib)
timeout(time: max_time, unit: 'MINUTES') {
@@ -299,7 +301,7 @@
},
'legacy: GPU': {
node('GPU') {
ws('workspace/tvm/legacy-python-gpu') {
ws("${workspace}/tvm/legacy-python-gpu") {
init_git()
unpack_lib('gpu', tvm_multilib)
timeout(time: max_time, unit: 'MINUTES') {
74 changes: 74 additions & 0 deletions KEYS
@@ -0,0 +1,74 @@
This file contains the PGP keys of various developers.
Please don't use them for email unless you have to. Their main
purpose is code signing.

Examples of importing this file in your keystore:
gpg --import KEYS.txt
(need pgp and other examples here)

Examples of adding your key to this file:
pgp -kxa <your name> and append it to this file.
(pgpk -ll <your name> && pgpk -xa <your name>) >> this file.
(gpg --list-sigs <your name>
&& gpg --armor --export <your name>) >> this file.

-----------------------------------------------------------------------------------
pub rsa4096 2019-11-15 [SC]
EF52D68AD5276994249816836754EA97C55E3DEB
uid [ultimate] Tianqi Chen (CODE SIGNING KEY) <tqchen@apache.org>
sig 3 6754EA97C55E3DEB 2019-11-15 Tianqi Chen (CODE SIGNING KEY) <tqchen@apache.org>
sub rsa4096 2019-11-15 [E]
sig 6754EA97C55E3DEB 2019-11-15 Tianqi Chen (CODE SIGNING KEY) <tqchen@apache.org>

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBF3OK24BEADD4hxjrsgb4jIDIACHS15X+5YP/YaUF5UDDQs/bNn/xGJGVl4/
4sJ6qKZcvMDrWTmnNItYBuaHi1qhGvlcASBekm/9PU2U8lZmAF1lZkKIIYZkX+If
s8PEYurE8cDr65orrdsFF8Zwb+u6x+gMsHNivsU2Kn3xbQjGmeW44UA+aaXzcJp6
sVk3aX5DypoYJNBmbASyOjZVWkcrJ+NKEfJ1dKtka5/siqOjuvCd8NT5dJVhZbm3
Sf8iclEMqog1LhdI/FhE2fB3C5hJkzcinq2v55qDaGqsL+qgT7agf9b4t0EgjbVh
cs6jlCglad+Oz27BQIjt06HE1OB5T/Gxa080FK4JZMpxZJ5tDA2/7DQM2MyN84z/
s62JuBJnsrzr4w8D/QcAyzAmyzAqvxLR/aqLgJTIcQiw6AenHovKkNbEQOBYE2T5
ms7uVO2E2Tv42J4Te4OKhpId9mK+7elCLvOb2DfAJDdYxDN9c8dJTls+G6xmv0h9
bb2+QRjkpDiFeu1hKNEe0/ST/YXDfRYpKl+1t/QZ+JccLgEdEwuo/IQ1e4POH2h0
Zqvy7TR5obeTf0TvmLzW+i3s1oUkmSAnQEncSGnGnlugYk0BLuMMi9Fhx6qcC5pC
cA3nsRqFKebtnpop+m+psFkmd//xKSXJt9IYVEbQVNiUKm9uYq6RxZEAmQARAQAB
tDJUaWFucWkgQ2hlbiAoQ09ERSBTSUdOSU5HIEtFWSkgPHRxY2hlbkBhcGFjaGUu
b3JnPokCTgQTAQgAOBYhBO9S1orVJ2mUJJgWg2dU6pfFXj3rBQJdzituAhsDBQsJ
CAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEGdU6pfFXj3rVJIQALBArXEaFDdTw8wl
65nPLU6+QPc6eMn7mz6BDp1V7xL6Lq1GbArLpmQHIFhfQ/5Qmg80wuFBU1CNSRHd
tdZq3v8tB9Txvhy6bLQ+IijWH/TxSEPqnrkNsWBQLqAygDC5O3Ook/T6B5kuc176
Kz+w+YhzPS5hoPfJK6xGoKDNlkhmI/EnUjAq459VNpXeoeemiydzvApiCHH0VfOj
XnmgAJsAJA21EfT5Wuh/WODsf0HkaXB0xoWZfE/ugIQBLhZi9nUTYgwU2r4a+v4A
4C2T1OyJ3mDU+Oi/z6d0WJvsIrLCFcF4Q7b/6+MGkgLDGlsEKK2LZMrulGzQ1QY/
O4ck3dVDseqT2urplrTamDIh1IQmOt1FqMFwugdjfQwJ5HQeX6IeUGZei2Av/IZR
8Vw5Wxtm1Aksz3Js6iP3QmAh7txDUKO+eT5zLSXBoPmkleLnvCdtlvwaSNCAudHw
12h10IV286OetJvyyjmh/q/30sKNGiuucLMzPMwtLNW/j3cts3fqRHIHxepT6m94
FoYIlwVu4afiGgSi/7cN4p9GgfwnFGeETd25pgNG0KdXbVWniO1dTEKzOtvtuPYK
Y88ZAfdOgj4dyeI9ZnJV8RaZvpImDPVHGQm69/071jBxyWZnVi/YtOm+DjHfw0Vi
uiUdzoIb54oWW8tbiNg/nfiLUaJBuQINBF3OK24BEAC9W8Cwubu4Dpr4m0IIrLF5
zRRqQm9QIcEC0QHf6w1c2NWQTJP+MQY/jZLjtKw5yCQDghT+qsil2p8xCM0EqRd6
6NqxsAoweTCoV0MwolQv5T3KuP54SlNWjO+6gT73LkKuOHoIyy5cS9pIITlExHy+
XHtfQi1keDpWUEyvSRG9slu1DcxAeo6nFEpCuoQ+xx/lrCMxDlyZJCDhj2fXs2hK
8oKLV5NbIuifbXbCiOvZUdBHk0yLCEc6wNsVR30yLijSiPCKsAPcsG0PjQnz3eTb
0czq+6g50zUVOTioUghIlZ1DhCsxQGnlxoLY71pnmc7qVszdXPV2Mp7/KSIhDJFQ
LN0enDVz9aRXfpEK3SifxaPVNd61O/BGziza+XCK5qpEQL95UM2NdQCWixYmIOJE
k95tpnagtNupMkrY6WEa0CjVBzF1kdr5WpeUd6w85rA/opcqpQ8yLmvpyJ4tXZhN
7oAWZSUzyB904FMswUEhaS7pEJIlACeFcPwm31Jv/637gw1CopZpDxDUaW5/boG5
9Gp9D/GV2gyMrHAcwA1gZSbmolv5ZYcnUmwTPijVNZ+o70HBbvbNZqziPgy9G+L/
oGBkY/fpg7qfaGtAbOUbx1ck04CbafSUQIxpCG8in6zwrIRnn4uj6q4wIZ8SnvQ0
h3Ug0DmdsxvB/xdfillH/QARAQABiQI2BBgBCAAgFiEE71LWitUnaZQkmBaDZ1Tq
l8VePesFAl3OK24CGwwACgkQZ1Tql8VePeuZ1Q//csRsGDKNrW5e0EitEcfPZ0PC
teEw7A16dniXiCQF39KxxLzjCjUq7U8iWNm7bn1zdXcSVYZow+i5hFWXgZLKTKep
tQoocJmQ7kPV5oiTBewFy9T4BICUekj/EhXhSz1wxb3GSc+uHL2IUlFkixTY4k4B
9zq49gkNkTM02Or3quu1ZWAgeol1BSyV0tcI1h3M0OXtrN6idLyzQJFRyMYtzfwp
Pd2+hdaKAl8mKANs/GMJni3QvyVXzuJxMP6SNOFx4mWj0UVFVZvosv1lLXDesvwY
sNZmz5IkfuU4DHz1ZzZc3sThkpBdBiadvyKtNsenNh5nEXtwVhpiFf3IdZAvG7Ks
7i3Fx1/ObbvxMCWeFoB6oP/swHr9i6dqntiJoB6Gl5y1ye3qte8PiNuwRVhz+YOK
58Ga3wWMvODpi2AgSFv7cd1OFXXsoonORfmpcfAp+h6dIr/ttQMP2929/NoX3Cs4
/pXoG9L5EOpMfj0Q24sAGW8VzuCAHL3e7QSijFuSHZxz9oe4C28/mAY+KP0dif0Q
O3rq4kpqlhseyzcRyE1LWBvzuCeSTui2OPmyivFY57TOPnMHm5sXVby1VUiwm0B0
RgBtZDRLv765lAFGtp43sccZ7zfRaKhkVmzh3bAZ62nJyQNGw0TWg96Pf7Kjb0Bv
ha8fS9ysWDy/Ye65MP4=
=MSiP
-----END PGP PUBLIC KEY BLOCK-----
7 changes: 6 additions & 1 deletion NOTICE
@@ -1 +1,6 @@
TVM End to End Deep Learning Compiler Stack: https://tvm.ai/
Apache TVM (incubating)
Copyright 2017 and onwards The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

3 changes: 3 additions & 0 deletions cmake/config.cmake
@@ -115,6 +115,9 @@ set(USE_BLAS none)
# set(USE_MKL_PATH <path to venv or site-packages directory>) if using `pip install mkl`
set(USE_MKL_PATH none)

# Whether use MKLDNN library
set(USE_MKLDNN OFF)

# Whether use OpenMP thread pool, choices: gnu, intel
# Note: "gnu" uses gomp library, "intel" uses iomp5 library
set(USE_OPENMP none)
7 changes: 7 additions & 0 deletions cmake/modules/contrib/BLAS.cmake
@@ -55,3 +55,10 @@ elseif(USE_BLAS STREQUAL "none")
else()
message(FATAL_ERROR "Invalid option: USE_BLAS=" ${USE_BLAS})
endif()

if(USE_MKLDNN STREQUAL "ON")
find_library(BLAS_LIBRARY_MKLDNN dnnl)
list(APPEND TVM_RUNTIME_LINKER_LIBS ${BLAS_LIBRARY_MKLDNN})
add_definitions(-DUSE_DNNL=1)
message(STATUS "Use MKLDNN library " ${BLAS_LIBRARY_MKLDNN})
endif()
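
For context, the new USE_MKLDNN switch only links libdnnl into the runtime. A hedged Python sketch of exercising an external BLAS call in a build configured with USE_MKLDNN=ON; it assumes the existing tvm.contrib.cblas.matmul entry point is the path that the DNNL option feeds into, which is not shown in this diff:

import tvm
from tvm.contrib import cblas

# 128x128 matmul offloaded to the external BLAS implementation
# (backed by DNNL when the runtime was built with USE_MKLDNN=ON).
A = tvm.placeholder((128, 128), name="A")
B = tvm.placeholder((128, 128), name="B")
C = cblas.matmul(A, B)

s = tvm.create_schedule(C.op)
func = tvm.build(s, [A, B, C], target="llvm")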
2 changes: 1 addition & 1 deletion dmlc_tvm_commit_id.txt
@@ -1 +1 @@
e541c75863775f9011a658a36b86f084133bfbb7
2c5c4da697753ca79ea1551cc91c3072cecbbbb1
1 change: 1 addition & 0 deletions docker/install/ubuntu_install_onnx.sh
@@ -22,6 +22,7 @@ set -o pipefail

# fix to certain version for now
pip3 install onnx==1.5.0
pip3 install onnxruntime==1.0.0

# torch depends on a number of other packages, but unhelpfully, does
# not expose that in the wheel!!!
2 changes: 2 additions & 0 deletions docs/api/python/topi.rst
@@ -69,6 +69,7 @@ List of operators
topi.nn.conv2d_hwcn
topi.nn.depthwise_conv2d_nchw
topi.nn.depthwise_conv2d_nhwc
topi.nn.fifo_buffer
topi.max
topi.sum
topi.min
@@ -199,6 +200,7 @@ topi.nn
.. autofunction:: topi.nn.conv2d_hwcn
.. autofunction:: topi.nn.depthwise_conv2d_nchw
.. autofunction:: topi.nn.depthwise_conv2d_nhwc
.. autofunction:: topi.nn.fifo_buffer

topi.image
~~~~~~~~~~
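
Since topi.nn.fifo_buffer is only being added to the documentation here, a brief hedged sketch of the documented operator, assuming the fifo_buffer(data, buffer, axis) signature listed in the TOPI API reference (shapes are illustrative, not from this PR):

import tvm
import topi

# Rolling window of 4 slices: each call shifts `buffer` along axis 0 and appends `data`.
buffer = tvm.placeholder((4, 16), name="buffer")
data = tvm.placeholder((1, 16), name="data")
out = topi.nn.fifo_buffer(data, buffer, axis=0)

s = tvm.create_schedule(out.op)
func = tvm.build(s, [data, buffer, out], target="llvm")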
8 changes: 6 additions & 2 deletions docs/deploy/nnvm.md
@@ -59,9 +59,11 @@ An example in c++.
#include <tvm/runtime/registry.h>
#include <tvm/runtime/packed_func.h>

#include <algorithm>
#include <fstream>
#include <iterator>
#include <algorithm>
#include <stdexcept>
#include <string>

int main()
{
@@ -97,7 +99,7 @@ int main()
int64_t in_shape[4] = {1, 3, 224, 224};
TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
// load image data saved in binary
std::ifstream data_fin("cat.bin", std::ios::binary);
const std::string data_filename = "cat.bin";
std::ifstream data_fin(data_filename, std::ios::binary);
if(!data_fin) throw std::runtime_error("Could not open: " + data_filename);
data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);

// get the function from the module(set input data)
4 changes: 4 additions & 0 deletions include/tvm/build_module.h
@@ -229,6 +229,9 @@ class BuildConfigNode : public Node {
/*! \brief Whether to disable loop vectorization. */
bool disable_vectorize = false;

/*! \brief Whether to disable assert stmt generation. */
bool disable_assert = false;

void VisitAttrs(AttrVisitor* v) {
v->Visit("data_alignment", &data_alignment);
v->Visit("offset_factor", &offset_factor);
@@ -244,6 +247,7 @@
v->Visit("instrument_bound_checkers", &instrument_bound_checkers);
v->Visit("disable_select_rewriting", &disable_select_rewriting);
v->Visit("disable_vectorize", &disable_vectorize);
v->Visit("disable_assert", &disable_assert);
}

static constexpr const char* _type_key = "BuildConfig";
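
The new disable_assert field is registered in VisitAttrs alongside the other BuildConfigNode flags, so it should be reachable from Python through tvm.build_config. A hedged usage sketch; this is an assumption based on how the existing flags are exposed, not code from this PR:

import tvm

n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)

# Skip assert statement generation in the lowered code.
with tvm.build_config(disable_assert=True):
    func = tvm.build(s, [A, B], target="llvm")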
7 changes: 7 additions & 0 deletions include/tvm/ir_pass.h
@@ -563,6 +563,13 @@ LoweredFunc LowerCustomDatatypes(LoweredFunc f, const std::string& target);
*/
LoweredFunc InferFragment(LoweredFunc f);

/*!
* \brief skip assert stmt generation
* \param f The function to be transformed.
* \return Transformed function.
*/
LoweredFunc SkipAssert(LoweredFunc f);

/*!
* \brief Verify if memory accesses are legal for a specific target device type.
*
14 changes: 7 additions & 7 deletions include/tvm/relay/module.h
@@ -144,6 +144,13 @@ class ModuleNode : public RelayNode {
*/
TVM_DLL bool ContainGlobalVar(const std::string& name) const;

/*!
* \brief Check if the global_type_var_map_ contains a global type variable.
* \param name The variable name.
* \returns true if contains, otherwise false.
*/
TVM_DLL bool ContainGlobalTypeVar(const std::string& name) const;

/*!
* \brief Lookup a global function by its variable.
* \param str The unique string specifying the global variable.
@@ -198,13 +205,6 @@
*/
TVM_DLL TypeData LookupDef(const std::string& var) const;

/*!
* \brief Check if a global type definition exists
* \param var The name of the global type definition.
* \return Whether the definition exists.
*/
TVM_DLL bool HasDef(const std::string& var) const;

/*!
* \brief Look up a constructor by its tag.
* \param tag The tag for the constructor.
9 changes: 6 additions & 3 deletions include/tvm/relay/transform.h
@@ -552,17 +552,20 @@ TVM_DLL Pass Legalize(const std::string& legalize_map_attr_name = "FTVMLegalize"
TVM_DLL Pass CanonicalizeCast();

/*!
* \brief Add abstraction over a function
* \brief Add abstraction over a constructor or global variable bound to a function.
*
* For example: `square` is transformed to
* `fun x -> square x`.
* `fn (%x: int32) -> int32 { square(x) }`.
*
* See https://en.wikipedia.org/wiki/Lambda_calculus#%CE%B7-conversion
* for more details.
*
* \param expand_constructor Whether to expand constructors.
* \param expand_global_var Whether to expand global variables.
*
* \return The pass.
*/
TVM_DLL Pass EtaExpand();
TVM_DLL Pass EtaExpand(bool expand_constructor, bool expand_global_var);

/*!
* \brief Print the IR for a module to help debugging.
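
The reworked EtaExpand pass now takes two flags. A hedged Python sketch of eta-expanding a reference to a global variable, assuming the Python binding mirrors the C++ signature above (relay.transform.EtaExpand(expand_constructor, expand_global_var)); the module construction is illustrative only:

import tvm
from tvm import relay

x = relay.var("x", shape=(), dtype="int32")
mod = relay.Module()
mod["square"] = relay.Function([x], x * x)
# `main` returns the global `square` as a value, which is what the pass expands.
mod["main"] = relay.Function([], mod.get_global_var("square"))

# After the pass, the reference becomes `fn (%x: int32) -> int32 { square(%x) }`.
mod = relay.transform.EtaExpand(expand_constructor=False,
                                expand_global_var=True)(mod)
print(mod)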
3 changes: 2 additions & 1 deletion include/tvm/runtime/device_api.h
@@ -42,7 +42,8 @@ enum DeviceAttrKind : int {
kDeviceName = 5,
kMaxClockRate = 6,
kMultiProcessorCount = 7,
kMaxThreadDimensions = 8
kMaxThreadDimensions = 8,
kGcnArch = 9
};

/*! \brief Number of bytes each allocation must align to */
4 changes: 4 additions & 0 deletions nnvm/python/nnvm/__init__.py
@@ -2,6 +2,7 @@
# coding: utf-8
"""NNVM python API for ease of use and help new framework establish python API. """
from __future__ import absolute_import as _abs
import warnings

from . import _base
from . import symbol as sym
@@ -10,3 +11,6 @@
from . import frontend

__version__ = _base.__version__

warnings.warn("NNVM is deprecated and will be removed in a future version. Use Relay instead.",
FutureWarning)
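
A small hedged illustration of what this change means downstream (it assumes nnvm is still importable in the environment): the FutureWarning emitted on import can be promoted to an error, for example in CI, to flush out remaining NNVM usage.

import warnings

with warnings.catch_warnings():
    warnings.simplefilter("error", FutureWarning)
    try:
        import nnvm
    except FutureWarning as err:
        print("caught deprecation warning:", err)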