Multiply and divide have a much smaller supported range on the GPU than in Spark, in part because of `PromotePrecision` and the requirement that a binary op have the same type for its lhs and rhs. CUDF has no such requirement for decimal binops. So we can increase the number of operations we can put on the GPU if we create new versions of `GpuMultiply` and `GpuDivide` specifically for decimal that do not require the precision and scale of the inputs to match. Each would take a desired output type along with the inputs and, based on what CUDF does internally, do its best to produce the correct answer. We would then pattern match to strip out the `PromotePrecision`/`Cast` on the inputs, and also remove the `CheckOverflow` at the end.
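To see why casting both sides to a common type costs range, here is a small sketch of the result-type formulas Spark uses for decimal multiply and divide (the standard Hive-derived rules, before Spark's precision-capping adjustment at 38 digits — a simplified illustration, not the plugin's actual code):

```python
def multiply_result(p1, s1, p2, s2):
    # Spark's rule for Decimal(p1, s1) * Decimal(p2, s2):
    # precision = p1 + p2 + 1, scale = s1 + s2
    return (p1 + p2 + 1, s1 + s2)

def divide_result(p1, s1, p2, s2):
    # Spark's rule for Decimal(p1, s1) / Decimal(p2, s2):
    # scale = max(6, s1 + p2 + 1)
    # precision = p1 - s1 + s2 + scale
    scale = max(6, s1 + p2 + 1)
    return (p1 - s1 + s2 + scale, scale)

# Example: Decimal(5, 2) * Decimal(3, 1) -> Decimal(9, 3)
print(multiply_result(5, 2, 3, 1))
# Example: Decimal(5, 2) / Decimal(3, 1) -> Decimal(10, 6)
print(divide_result(5, 2, 3, 1))
```

Because Spark wraps each input in `PromotePrecision`/`Cast` up to a common wide type before the op, the GPU ends up multiplying two operands whose declared precision is far larger than the values actually need, which pushes the intermediate past what the backend can represent much sooner than operating on the original input types would.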