Vector and tensor fields #342
Okay, now that we have a plan for this functionality (see #731), how are we going to go about it? Somehow both this issue and #731 have to be solved simultaneously, since they both implement multi-indexed spaces, this one on the CPU and the other on the GPU. That change is going to be massive, a bloody hell to review as a whole. Is there any way to make this transition smoother and more digestible? To me, there seem to be at least two options:
In any case, I will put up "WIP" PRs for both branches, and I think it is best to have some kind of continuous development, with no big changes landing in one chunk. Preliminary design reviews would make much sense.
If at all possible I'd go for option 1, since any of the other ones would perhaps imply design choices that will be sub-optimal down the line. I'll have a look at the WIP stuff and review.
Absolutely no idea, but I think the idea is that it should. Anyway, dtypes like this are only "hacks" in a sense and don't "really" exist; see for example:

```python
>>> arr = np.zeros(5, dtype=(float, (3, 3)))
>>> arr2 = np.zeros([5, 3, 3], dtype=float)
>>> arr.dtype == arr2.dtype
True
>>> arr.shape == arr2.shape
True
```

So an equivalent way to approach it is to simply give a shape.
This should, i.m.o. (if the user wants to explicitly have non-contiguous data), be handled with a productspace.
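The difference between the two storage layouts can be sketched in plain NumPy (not the ODL `ProductSpace` API itself, just an illustration of the idea): interleaved storage makes each component a strided, non-contiguous view, while a product-space-style layout keeps one contiguous array per component.

```python
import numpy as np

# Interleaved storage: a vector field on 100 points with 3 components.
# Each component is a strided view into the big array, not contiguous.
field = np.zeros((100, 3))
comp0 = field[:, 0]
print(comp0.flags['C_CONTIGUOUS'])  # False

# Product-space style: one separate, contiguous array per component.
components = [np.zeros(100) for _ in range(3)]
print(all(c.flags['C_CONTIGUOUS'] for c in components))  # True
```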
Totally agree. I now have a much better picture of what you can do in libgpuarray. Using dtypes with shapes is only a hack and not necessary; the space itself should have a shape.
That may not even be necessary, but I'll investigate. A situation where this could occur would be slicing with
Closes: #225, #342, #856, #964, #1085

Details:
- Implement multi-indexing of ODL vectors
- Implement tensor-valued `FunctionSpaceElement` using a `numpy.dtype` with shape
- Implement the `__array_ufunc__` interface for tensors and `DiscreteLpElement`
- Remove `order` from spaces, add to `element` instead
- Allow NumPy 1.13
- Rewrite documentation
- Rename `uspace` and `dspace` to `fspace` and `tspace`, respectively
- Move `fn_ops` code to `tensor_ops`
- Implement `MatrixOperator` for multiple axes
- Allow `field=None` in `LinearSpace`
- Remove local NumPy compat code
- Adapt tests
- Simplify pytest fixtures
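One item in the list above, the `__array_ufunc__` interface, can be sketched minimally. The class below is hypothetical (not ODL's actual implementation): it shows how a container can intercept NumPy ufunc calls, unwrap its inputs, and re-wrap the result, which is the mechanism NumPy 1.13 introduced.

```python
import numpy as np

class WrappedTensor:
    """Minimal container delegating NumPy ufuncs to its data (hypothetical)."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap any WrappedTensor inputs to their raw arrays
        arrays = [x.data if isinstance(x, WrappedTensor) else x
                  for x in inputs]
        result = getattr(ufunc, method)(*arrays, **kwargs)
        # Re-wrap array results so arithmetic stays inside the class
        return (WrappedTensor(result)
                if isinstance(result, np.ndarray) else result)

x = WrappedTensor([1.0, 2.0, 3.0])
y = np.add(x, 1.0)        # dispatched through __array_ufunc__
print(type(y).__name__)   # WrappedTensor
print(y.data)             # [2. 3. 4.]
```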
Closed by #1088
Suggestion:
Since vector fields, or more generally tensor fields, are always power spaces of an underlying scalar function space, we can safely assume that the data type is homogeneous. So we could use NumPy's advanced data type functionality to represent elements of such a space:
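A short illustration of that dtype functionality: NumPy absorbs a subarray dtype into the array's shape, so one homogeneous block of memory represents the whole field.

```python
import numpy as np

# A vector field on 100 grid points: each entry is a length-3 vector,
# expressed via a NumPy subarray dtype.
vfield = np.zeros(100, dtype=(float, (3,)))
print(vfield.shape)  # (100, 3) -- the subarray shape is absorbed
print(vfield.dtype)  # float64

# Likewise, a rank-2 tensor field with 3x3 entries per point:
tfield = np.zeros(100, dtype=(float, (3, 3)))
print(tfield.shape)  # (100, 3, 3)
```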
The only problem I see is when we get non-contiguous input data, i.e. one component at a time from some location we don't have control of. The simple way out here would be to just make it contiguous if necessary, which probably means always if the data is not one big array.
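"Making it contiguous if necessary" amounts to one copy per component; a sketch using `np.ascontiguousarray`:

```python
import numpy as np

big = np.arange(12.0).reshape(4, 3)
component = big[:, 1]                    # strided view, non-contiguous
print(component.flags['C_CONTIGUOUS'])   # False

fixed = np.ascontiguousarray(component)  # copies into contiguous memory
print(fixed.flags['C_CONTIGUOUS'])       # True
print(np.array_equal(fixed, component))  # True, same values
```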
Out of curiosity: Is this feature supported in gpuarray?