
Add missing dependency onnxconverter_common, fix multi regression with xgboost #679

Merged: 8 commits, Jan 24, 2024

Conversation

xadupre (Collaborator) commented Jan 23, 2024

Fixes issues #673 and #676.

xadupre changed the title from "add missing dependency onnxconverter_common" to "Add missing dependency onnxconverter_common, fix multi regression with xgboost" on Jan 23, 2024
xadupre merged commit eb21c0e into onnx:main on Jan 24, 2024. 19 checks passed.
casassg (Contributor) commented Feb 14, 2024

@xadupre any chance of getting a release that includes this fix?

devspatron commented Mar 10, 2024

Hello Xavier Dupré, it's blurshift again with the same problem. I ran this code on Google Colab:


The code:

# Export models as ONNX
!pip install onnx
!pip install onnxmltools
!pip install skl2onnx
!pip install keras2onnx
!pip install onnxruntime

# !pip uninstall protobuf
# !pip install protobuf==4.21.9
!pip install protobuf==3.20.*
!pip freeze | grep protobuf

# Use my CPU-specific version
!pip install -U xgboost==1.7.5

# Code as follows:
import xgboost
import numpy as np
import onnxmltools
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from skl2onnx import update_registered_converter

# Your data and labels [-Ox (253, 84) -Oy (253, 112) -]
X = np.random.rand(100, 10)
y = np.random.rand(100, 210)

# Train the XGBoost regressor
model = xgboost.XGBRegressor(objective='reg:squarederror', n_estimators=10)
model.fit(X, y)

# Define the input type (adjust the shape according to your input)
num_features0 = X.shape[-1]
initial_type = [('float_input', FloatTensorType([None, num_features0]))]

# Convert the XGBoost model to ONNX
onnx_model = convert_sklearn(model, initial_types=initial_type, target_opset=12)
# onnx_model = onnxmltools.convert_xgboost(model=model, initial_types=[('input', FloatTensorType([None, num_features0]))])

# Save the ONNX model under /content
with open("/content/xgboost_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())


The error I get:

MissingShapeCalculator Traceback (most recent call last)

in <cell line: 22>()
20
21 # Convert XGBoost model to ONNX
---> 22 onnx_model = convert_sklearn( model, initial_types=initial_type, target_opset=12, );
23 # onnx_model = onnxmltools.convert_xgboost( model=model, initial_types=[('input', FloatTensorType([None, num_features0 ]))], );
24

4 frames

/usr/local/lib/python3.10/dist-packages/skl2onnx/common/_topology.py in infer_types(self)
628 # Invoke a core inference function
629 if self.type is None:
--> 630 raise MissingShapeCalculator(
631 "Unable to find a shape calculator for type '{}'.".format(
632 type(self.raw_operator)

MissingShapeCalculator: Unable to find a shape calculator for type '<class 'xgboost.sklearn.XGBRegressor'>'.
It usually means the pipeline being converted contains a
transformer or a predictor with no corresponding converter
implemented in sklearn-onnx. If the converted is implemented
in another library, you need to register
the converted so that it can be used by sklearn-onnx (function
update_registered_converter). If the model is not yet covered
by sklearn-onnx, you may raise an issue to
https://github.com/onnx/sklearn-onnx/issues
to get the converter implemented or even contribute to the
project. If the model is a custom model, a new converter must
be implemented. Examples can be found in the gallery.

";

This problem still persists even after installing the new packages. Could you try running my code on Google Colab and examine the resulting converted ONNX model with https://netron.app/? Please check that the model's output shape is (-1, 210).
Thank you.
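
For reference, a minimal sketch of the usual workaround, based on the pattern in the sklearn-onnx documentation: convert_sklearn does not know XGBRegressor by itself, so the onnxmltools XGBoost converter has to be registered with update_registered_converter before the conversion (or onnxmltools.convert_xgboost can be called directly, as in the commented-out line above). The alias "XGBoostXGBRegressor" and the use of calculate_linear_regressor_output_shapes follow that documented pattern and are assumptions here, not code from this thread; whether the exported model ends up with the expected (-1, 210) output also depends on running a version of onnxmltools/skl2onnx that includes the multi-output regression fix from this PR.

# Sketch (assumptions noted above): register the onnxmltools XGBoost converter
# with skl2onnx before calling convert_sklearn, following the sklearn-onnx docs.
import numpy as np
import xgboost
from skl2onnx import convert_sklearn, update_registered_converter
from skl2onnx.common.data_types import FloatTensorType
from skl2onnx.common.shape_calculator import calculate_linear_regressor_output_shapes
from onnxmltools.convert.xgboost.operator_converters.XGBoost import convert_xgboost

# Register XGBRegressor; the alias and shape calculator mirror the sklearn-onnx
# XGBoost tutorial and are assumed to apply to the multi-output case as well.
update_registered_converter(
    xgboost.XGBRegressor,
    "XGBoostXGBRegressor",
    calculate_linear_regressor_output_shapes,
    convert_xgboost,
)

X = np.random.rand(100, 10).astype(np.float32)
y = np.random.rand(100, 210)
model = xgboost.XGBRegressor(objective="reg:squarederror", n_estimators=10)
model.fit(X, y)

initial_type = [("float_input", FloatTensorType([None, X.shape[1]]))]
onnx_model = convert_sklearn(model, initial_types=initial_type, target_opset=12)

with open("xgboost_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())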
