
Detecting Supported Layers #35

Open
Proryanator opened this issue Apr 7, 2024 · 9 comments

Comments

@Proryanator

Proryanator commented Apr 7, 2024

Hey there! This repo is super amazing, thanks for putting all these findings together.

Was reading this sub-section (link below) on which layers are unsupported, and I had an idea on how to programmatically identify them.

https://github.com/hollance/neural-engine/blob/master/docs/unsupported-layers.md

This part here, "S → U → S → U → S → U", about alternating supported and unsupported layers made me wonder. In theory, if we take a layer X from a Core ML model (or just some Core ML op) that we don't yet know can run on the ANE, build a dummy model made solely of that layer repeated, i.e. X → X → X ..., and set the compute units to CPU and ANE only, that should encourage the runtime to keep the whole model on a single compute unit, since avoiding switches is more efficient. So, in theory, if you set the compute units to CPU && ANE and you see the layers run on the ANE only, you will have identified that the op is ANE-compatible? 🧐 You could even record stats as well.

I'd like to test this theory, but wanted to run it by you first. I'm thinking this could be a way to, given a model, programmatically split out individual layers, build a simple repeated-layer model for each, and produce a chart of whether each layer is CPU/GPU/ANE supported (maybe even with statistics). It could even become a publicly available chart of ops and their supported compute units, since something like that doesn't exist today to my knowledge.
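
Here's a minimal sketch of what I have in mind, assuming PyTorch and coremltools (the candidate op, shapes, and file names are just placeholders). Note that coremltools itself doesn't report where each layer ran, so confirming the dispatch would still rely on Xcode's performance report or an Instruments trace:

```python
# Sketch: repeat a single candidate op N times, convert to Core ML, and
# restrict the compute units to CPU + ANE. If the op is ANE-compatible,
# the whole chain should be able to stay on the ANE.
import torch
import coremltools as ct


class RepeatedOp(torch.nn.Module):
    """Chains the same candidate op: X -> X -> ... -> X."""

    def __init__(self, op, repeats=8):
        super().__init__()
        self.op = op
        self.repeats = repeats

    def forward(self, x):
        for _ in range(self.repeats):
            x = self.op(x)
        return x


# The op under test; anything shape-preserving can be chained like this.
candidate = torch.nn.Conv2d(16, 16, kernel_size=3, padding=1)
example = torch.rand(1, 16, 64, 64)
traced = torch.jit.trace(RepeatedOp(candidate).eval(), example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # rule out the GPU fallback
)
mlmodel.save("repeated_op.mlpackage")  # then inspect it in Xcode
```

Swapping in different candidate ops and repeating this should be enough to start building that compatibility chart.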

This would help identify places where a layer could be swapped out or modified to encourage a model to run more efficiently on the ANE.

Any thoughts would be appreciated! 😊

@hollance
Owner

hollance commented Apr 7, 2024

That sounds like an interesting approach!

@Proryanator
Author

Thanks! Will give this a shot at some point and share what I find.

@Proryanator
Author

Proryanator commented May 25, 2024

@hollance thanks for writing those Core ML survival guides, I find them to be an invaluable supplement to Apple's own documentation. I've started working on this, calling it "anetools" 👊

@Proryanator
Author

Proryanator commented May 26, 2024

Using your "Model Surgery" section I was able to successfully isolate the first layer of a small neural network programmatically! The cool thing is that I could run the performance tab on it too.

Going to work on making this generically possible for every layer in the model, which will require a bit more work than just input/output name matching (shapes and data types too, which will be a bit difficult).
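
Roughly, the surgery sketch I'm working from looks like this (assuming an old-style neuralnetwork spec rather than an ML Program; the file names are placeholders, and the TODO is exactly the part I'm unsure about below):

```python
# Sketch of the single-layer surgery on a neuralnetwork spec.
import coremltools as ct
from coremltools.proto import FeatureTypes_pb2 as ft

spec = ct.utils.load_spec("original.mlmodel")
nn = spec.neuralNetwork  # or neuralNetworkClassifier / neuralNetworkRegressor

layer_index = 0          # isolating the first layer for now
layer = nn.layers[layer_index]

# Keep only the chosen layer. (For layers other than the first, the model's
# input descriptions would also have to be rewired to the layer's inputs.)
del nn.layers[layer_index + 1:]
del nn.layers[:layer_index]

# The model's declared outputs must now be the layer's outputs.
del spec.description.output[:]
for name in layer.output:
    out = spec.description.output.add()
    out.name = name
    # TODO: fill in the correct shape/dtype here -- this is the mapping from
    # the layer's output to model.output_description.type I'm unsure about.
    out.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

ct.utils.save_spec(spec, "single_layer.mlmodel")
```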

Off the top of your head, do you know how I could programmatically map a layer's output shape (which is not as straightforward as reading layer.output.shape, probably more complex) to the equivalent model.output_description.type?

[Screenshot 2024-05-25 at 11:17:39 PM]

Proryanator changed the title from Thoughts on "Detecting" Supported Layers to Detecting Supported Layers on May 26, 2024
@hollance
Owner

Honestly, it's been too long since I did anything with Core ML so I don't know off the top of my head.

@Proryanator
Author

No worries! I think I found it buried deep within some of the proto objects. Now I'll have to add dynamic data building per data type for each input (there's only a handful of FeatureTypes, so not too bad).
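
For reference, a rough sketch of the dummy-data builder I have in mind (illustrative only; it covers just the common FeatureType cases and assumes an RGB color space for image inputs):

```python
# Sketch: build random inputs from a spec's input descriptions so an isolated
# layer model can be fed to MLModel.predict().
import numpy as np
from PIL import Image
from coremltools.proto import FeatureTypes_pb2 as ft

ARRAY_DTYPES = {
    ft.ArrayFeatureType.FLOAT32: np.float32,
    ft.ArrayFeatureType.DOUBLE: np.float64,
    ft.ArrayFeatureType.INT32: np.int32,
}


def random_inputs(spec):
    """Return a {input_name: dummy_value} dict keyed by the model's inputs."""
    inputs = {}
    for desc in spec.description.input:
        kind = desc.type.WhichOneof("Type")
        if kind == "multiArrayType":
            shape = tuple(desc.type.multiArrayType.shape)
            dtype = ARRAY_DTYPES[desc.type.multiArrayType.dataType]
            inputs[desc.name] = np.random.rand(*shape).astype(dtype)
        elif kind == "imageType":
            size = (desc.type.imageType.width, desc.type.imageType.height)
            inputs[desc.name] = Image.new("RGB", size)  # assumes RGB images
        elif kind == "int64Type":
            inputs[desc.name] = 0
        elif kind == "doubleType":
            inputs[desc.name] = 0.0
        elif kind == "stringType":
            inputs[desc.name] = ""
        else:
            raise NotImplementedError(f"unhandled FeatureType: {kind}")
    return inputs
```

Then the isolated-layer model can be exercised with `mlmodel.predict(random_inputs(mlmodel.get_spec()))`.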

@Proryanator
Author

Proryanator commented Jun 11, 2024

@hollance was this feature in Xcode around when you were working on Core ML? (The performance tab, you can see it in the link below.)

I saw the Apple session is from WWDC 2022, but I'm not sure if it was around back then.

It somehow detects the compute unit used per layer, and also tells you whether a layer is compatible with a given compute unit. It sometimes doesn't work, but when it does it's pretty nice.

Seems like Apple already implemented a "supported layers" feature kinda 😂

https://developer.apple.com/videos/play/wwdc2022/10027/

@hollance
Owner

Unfortunately that feature was not available when I was working with Core ML. Would have been useful at the time. :-)

@Proryanator
Author

Proryanator commented Jun 11, 2024

Yeah, I realized this does most of what I'm trying to research 😆 although it does occasionally seem to fail for ML Program models. If you like, I could make some README updates to this project reflecting the Xcode features for detecting ANE layer support, and whether a layer actually gets picked up by the ANE.
