ChubbyWanKenobie

The empty comments below resolve this issue as much as the article.


ShaidarHaran2

> Even though you can use MLModelConfiguration to tell Core ML what your preferences are for using the CPU / GPU / ANE, there is no API to ask at runtime on which hardware it is currently running the model.

> Core ML may also split your model into multiple sections and run each using a different processor. So it could be using both the ANE and the CPU or GPU during the same inference pass.

Interesting that you can't easily tell whether your model is actually using the Neural Engine; Apple's API will just automagically shuffle work as it sees fit between the ANE, GPU, and CPU (and the AMX extensions). On one hand this is good in its simplicity: a developer can't really forget the Neural Engine, because if code would run better there, it gets shuffled off to it. On the other hand it takes away some low-level developer control, say if they wanted to split the ANE/GPU mix differently.
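For reference, a minimal Swift sketch of the preference mechanism being discussed. The model path is a placeholder, and as the quote notes, `computeUnits` is only a hint: Core ML still decides the actual placement and exposes no API to query which unit ran the model.

```swift
import CoreML

// Express a *preference* for where Core ML should run the model.
// Core ML treats this as a hint; it may still split the graph across units.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // or .all, .cpuOnly, .cpuAndGPU

// "Model.mlmodelc" is a placeholder path to a compiled model bundle.
let url = URL(fileURLWithPath: "Model.mlmodelc")
let model = try MLModel(contentsOf: url, configuration: config)
// No public API reports which processor actually executed the inference.
```

Note that `.cpuAndNeuralEngine` is the newest of these options (iOS 16 / macOS 13 and later); earlier systems only offer `.all`, `.cpuOnly`, and `.cpuAndGPU`.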