[TE] Support negative indices #9023
Conversation
For context, this PR was split from another PR following the discussion in #8971 (comment), since this could potentially be a controversial change (it changes the semantics of indexing).
I like this idea. I've been slowly throwing […]
src/te/tensor.cc (Outdated)

```cpp
if (support_negative_indices) {
  for (size_t i = 0; i < shape.size(); i++) {
    PrimExpr new_index = if_then_else(indices[i] < make_const(indices[i]->dtype, 0), ...
```
You want to use Select here, because it does not involve a memory operation (and thus can safely evaluate both sides).
Done. I understand the difference between if_then_else and Select, but I don't fully understand why we want to be able to evaluate both sides. Is this to prevent branching?
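For intuition, the evaluate-both-sides semantics under discussion can be mimicked in plain Python (the helper names here are illustrative sketches, not TVM's actual API):

```python
def select(cond, if_true, if_false):
    # Python evaluates both arguments before the call is made, which
    # mirrors Select's semantics: both arms are computed, then one is
    # chosen. This is only safe when neither arm touches memory, which
    # is exactly the reviewer's point about index arithmetic.
    return if_true if cond else if_false

def normalize_index(i, n):
    # Map a possibly-negative index into [0, n), numpy-style.
    # Both "i + n" and "i" are evaluated; the comparison picks one.
    return select(i < 0, i + n, i)
```

Because neither arm has side effects, a backend is free to compute both and select branchlessly, avoiding a branch in the inner loop.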
I think this can be useful for some of the indexing operations. We could try to make it more explicit, though, e.g. by introducing a new API.
Are negative indices used inside TE? If so, I wonder whether we can make this a transformation pass rather than runtime behavior.
@yzh119 I believe the intended usage is for the case where the values of indices are not known at compile time; otherwise the compiler would be able to prove and simplify the conditionals.
I think I sort of understand the use case here: some indices are not known to be negative until runtime, which forces us to defer the conversion from compile time to runtime. On the other hand, I am not 100% sure that adding a new argument to the public interface is the best fix, given that in most cases indices are positive and well in range. I was thinking: if the issue comes from an importer, is it possible to add an operator like […]
We do have a […]
I'll go ahead with this and start responding to comments if there are no other objections.
This is ready for another look, though I might need some guidance on unit tests.
(force-pushed from 903a7bc to 84e5ab9)
It looks good to me, given that we want to fully support numpy-style negative indexing. On the API side, although I still think we could put this as a helper function in topi instead of a member function of te::Tensor, the difference is minimal because we are not making an architectural change here. Nice work!
BTW, shall we add a unittest for this?
I'll look into the changes you suggest and add a unit test.
Ah, no worries! I'm not suggesting changes :-) just unit tests.
@junrushao1994 I looked at the unit tests for cpp and they seem a little bare. I added a single simple test which logs the following results, where A is of shape (x, y): A(0, 0), A.IndexWithNegativeIndices(-1, -1), and A.IndexWithNegativeIndices(0, -1). Some guidance on what is sufficient would be helpful.
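For reference, the wrap-around behavior that such a test should exercise can be sketched in Python (a minimal model of the member function being tested, not the actual C++ implementation):

```python
def index_with_negative_indices(shape, indices):
    # Models A.IndexWithNegativeIndices: each index may be negative
    # and wraps around its own dimension, numpy-style.
    return tuple(i + n if i < 0 else i for i, n in zip(indices, shape))
```

So for a tensor of shape (3, 4), the indices (-1, -1) resolve to (2, 3) and (0, -1) resolves to (0, 3); comparing such resolved positions against the values from plain positive indexing is one way to make the test assertions concrete.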
What's the status of this PR?
Oh, I forgot about this as I moved on to other things. Are we still cool with merging, @junrushao1994?
(force-pushed from e19fdba to 9b7d01a)
@AndrewZhaoLuo yeah, let's get it merged!
@junrushao1994 this is now ready to merge
LGTM
Thanks @AndrewZhaoLuo! It's merged :-)
* initial change
* more explicit api
* switch to select
* add support for negative indices
* reduce things further
* lint
* to CamelCase
* unit test

Co-authored-by: Andrew Zhao Luo <andrewzhaoluo@system76-pc.localdomain>
Negative indices are a pretty common feature in most modern programming languages and libraries. This proposed PR would add optional support for negative indices out of the box for tensors in TE.
A lot of operators from other frontends support negative indices. See ONNX, numpy, and PyTorch, whose tensors support negative indices out of the box. Unfortunately, TE and the rest of TVM do not.
One benefit of this change is that it becomes trivial for operators to support negative indices, which makes adapting frontends to our operators simpler. For example, an operator using this simply has to set the support_negative_indices flag to true for every relevant indexing operation. An example of doing this can be found in https://github.com/AndrewZhaoLuo/tvm/blob/02f1870d7f2e274cfd7e04678691da019ff201f0/include/tvm/topi/transform.h#L1265. Right now we handle negative indices by manually converting them to positive indices, which can be cumbersome and results in lots of code duplication.

The downside is that it technically adds a little more computation per indexing operation. While these operations are probably memory bound rather than compute bound, I am not 100% confident that setting support_negative_indices=true would never result in performance regressions. Furthermore, it will make some printouts very ugly, since it will put the comparison operation in each index statement. @jroesch also makes the good point that this might make loop-level analysis harder and hinder some optimizations.
Pros:
* Trivially adds numpy-style negative-index support to operators, simplifying frontend adaptation
* Removes cumbersome, duplicated manual index-conversion code

Cons:
* Slightly more computation per indexing operation
* Uglier printouts, with a comparison in each index statement
* May make loop-level analysis and some optimizations harder

Unknowns:
* Whether enabling support_negative_indices causes performance regressions in practice