[Arith] Bound for Shape Variables #4486

Closed
wants to merge 29 commits

Conversation

@tqchen
Member

tqchen commented Dec 10, 2019

I am not sure if we want to introduce this specialized assertion as a special Expr. It might be a better idea to have it as an intrinsic

@yzhliu
Member Author

yzhliu commented Dec 11, 2019

@tqchen @icemelon9 modified per comments, please review again.

yzhliu changed the title from [WIP][Arith] Bound for Shape Variables to [Arith] Bound for Shape Variables on Dec 12, 2019
@yzhliu
Member Author

yzhliu commented Dec 27, 2019

@tqchen do you have any idea why this is failing? I cannot reproduce it locally, and it seems to be flaky: https://ci.tvm.ai/blue/organizations/jenkins/tvm/detail/PR-4486/13/pipeline#step-112-log-519

The generated IR is different, but both should be correct. This PR:

produce T_divide {
  parallel (ax0.ax1.fused, 0, (tvm_assert_bound(n0, 0, n0)*tvm_assert_bound(n1, 0, n1))) {
    for (ax2, 0, tvm_assert_bound(n2, 0, n2)) {
      for (ax3, 0, tvm_assert_bound(n3, 0, n3)) {
        if (((ax0.ax1.fused/tvm_assert_bound(n1, 0, n1)) < tvm_assert_bound(n0, 0, n0))) {
          if ((0 <= (ax0.ax1.fused % tvm_assert_bound(n1, 0, n1)))) {
            if (((ax0.ax1.fused % tvm_assert_bound(n1, 0, n1)) < tvm_assert_bound(n1, 0, n1))) {
              T_divide[(((((ax0.ax1.fused/tvm_assert_bound(n1, 0, n1))*stride) + ((ax0.ax1.fused % tvm_assert_bound(n1, 0, n1))*stride)) + (ax2*stride)) + (ax3*stride))] = (A[(((((ax0.ax1.fused/tvm_assert_bound(n1, 0, n1))*stride) + ((ax0.ax1.fused % tvm_assert_bound(n1, 0, n1))*stride)) + (ax2*stride)) + (ax3*stride))]/float32(k))
            }
          }
        }
      }
    }
  }
}

previously:

produce T_divide {
  parallel (ax0.ax1.fused, 0, (n0*n1)) {
    for (ax2, 0, n2) {
      for (ax3, 0, n3) {
        if ((0 <= select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused/n1), ((ax0.ax1.fused/n1) - 1)))) {
          if ((select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused/n1), ((ax0.ax1.fused/n1) - 1)) < n0)) {
            if ((0 <= select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused % n1), ((ax0.ax1.fused % n1) + n1)))) {
              if ((select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused % n1), ((ax0.ax1.fused % n1) + n1)) < n1)) {
                T_divide[((((select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused/n1), ((ax0.ax1.fused/n1) - 1))*stride) + (select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused % n1), ((ax0.ax1.fused % n1) + n1))*stride)) + (ax2*stride)) + (ax3*stride))] = (A[((((select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused/n1), ((ax0.ax1.fused/n1) - 1))*stride) + (select((((n1 >= 0) && ((ax0.ax1.fused % n1) >= 0)) || ((n1 < 0) && ((ax0.ax1.fused % n1) <= 0))), (ax0.ax1.fused % n1), ((ax0.ax1.fused % n1) + n1))*stride)) + (ax2*stride)) + (ax3*stride))]/float32(k))
              }
            }
          }
        }
      }
    }
  }
}

@yzhliu
Member Author

yzhliu commented Dec 28, 2019

Found the problem...

@yzhliu
Member Author

yzhliu commented Jan 2, 2020

@tqchen @icemelon9 It is ready for review

@@ -292,9 +292,16 @@ def get_binds(args, compact=False, binds=None):
    binds = {} if binds is None else binds.copy()
    cfg = current_build_config()
    arg_list = []

    def is_var(idx):
Member

rename -> is_var_or_assert_bound

@@ -68,6 +68,41 @@ std::vector<const Object*> GetPath(Expr target, Expr expr) {
  return v.path_;
}

class BoundRemover : public IRMutator {
 public:
Member

NOTE: we are in the process of updating IRMutators to the new style; it would be great if we can directly change it here. Related: #4607
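As context for the discussion, below is a minimal sketch of what such a bound-removing mutator could look like against the old IRMutator interface used in this diff. This is not the PR's actual code: the class name BoundRemoverSketch is invented for illustration, and the tvm_assert_bound name and argument order are taken from the lowered IR shown earlier in the thread.

#include <tvm/ir.h>
#include <tvm/ir_mutator.h>

using namespace tvm;
using namespace tvm::ir;

// Sketch: strip tvm_assert_bound(value, lower, upper) calls back to value so
// later passes see the plain shape variable again.
class BoundRemoverSketch : public IRMutator {
 public:
  Expr Mutate_(const Call* op, const Expr& e) final {
    if (op->name == "tvm_assert_bound") {
      // Recurse into the wrapped value in case assertions are nested.
      return Mutate(op->args[0]);
    }
    return IRMutator::Mutate_(op, e);
  }
};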

@@ -626,4 +626,19 @@ Expr trunc(Expr x) {
  return ir::Call::make(x.dtype(), "trunc", {x}, ir::Call::PureIntrinsic);
}

Expr assert_bound(Expr value, Expr lower, Expr upper) {
Member

wrap_assert_bound_on_var
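The helper body is not shown in this hunk, but if it follows the trunc() pattern directly above, it would presumably be something like the sketch below, here using the reviewer's suggested name; the exact signature and the intrinsic name tvm_assert_bound (as seen in the lowered IR earlier in the thread) should be treated as assumptions rather than the PR's final code.

// Sketch: wrap a shape variable in a pure intrinsic call that records its
// bounds, producing expressions like tvm_assert_bound(n0, 0, n0) in the IR.
Expr wrap_assert_bound_on_var(Expr value, Expr lower, Expr upper) {
  return ir::Call::make(value.dtype(), "tvm_assert_bound",
                        {value, lower, upper}, ir::Call::PureIntrinsic);
}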

@tqchen
Member

tqchen commented Jan 2, 2020

After looking at the set of changes, it seems we still have quite a few places where we want to assume properties of a Var.

This makes me wonder if we should also revisit the option of bringing more information to Variable; instead of putting it inside Variable itself, we can subclass Variable to create special vars (e.g. IterVar) that contain this information, given that things are easier with the new Object system. See https://discuss.tvm.ai/t/discuss-embed-more-bound-information-into-var-or-expr/4079/28

Of course this does not remove the value of the additional constraint wrapper. @icemelon9 @yzhliu thoughts?

@yzhliu
Member Author

yzhliu commented Jan 2, 2020

@tqchen I also feel subclassing Var would be better than wrapping with a call node.

@tqchen
Member

tqchen commented Jan 2, 2020

If we are going to use a Var instead, we can introduce a special subclass of Variable called ShapeVarNode, which corresponds to a shape and can carry the non-negativity info. The solution mentioned in the post will depend on migrating all the Mutators to the new style (which will be done in a few days).

Note that this will mean we need to force the user to construct a shape_var when passing shapes.
In the meantime, the intrinsic approach can co-exist with the shape_var one even if we decide to introduce it, so it might be fine to proceed with this solution first.
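To make the alternative concrete, here is a purely illustrative sketch of the ShapeVarNode idea; it is not code from this PR (or necessarily from #4684), and registration macros and constructors are omitted. The point is that the node type itself carries the non-negativity fact, so arithmetic analysis can check the type instead of pattern-matching an intrinsic call.

// Illustrative only: a variable subclass whose node type implies 0 <= var.
class ShapeVarNode : public Variable {
 public:
  // An analyzer that sees a ShapeVarNode can assume the value is non-negative
  // without any assert_bound wrapper around it.
};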

@tqchen
Member

tqchen commented Jan 6, 2020

@yzhliu can you rebase, @icemelon9 @ZihengJiang can you help to do a round of review?

@yzhliu
Member Author

yzhliu commented Jan 6, 2020

@tqchen Do you want me to check the current approach in, or directly switch to an IterVar implementation instead?
I feel IterVar would be cleaner; currently we check whether a node is assert_bound in many places.

@tqchen
Member

tqchen commented Jan 6, 2020

@yzhliu I am fine either way; you can make the decision.

Both are useful approaches and won't affect each other. We might want to think about the name a bit if we are going to subclass Var: while it makes sense to use IterVar for iterators, it may not sound too related to shape, so perhaps ShapeVar would be a better name.

@tqchen
Member

tqchen commented Jan 16, 2020

Closing this PR as it is superseded by #4684.

tqchen closed this on Jan 16, 2020