No, it categorically doesn't. Not just that, its CPU support is quite lacking (fp32 only). Currently, there are two ways to target the ANE: Core ML and MPSGraph.
It is less about conversion and more about extending ANE support for transformer-style models or giving developers more control.
The issue is in targeting specific hardware blocks. When you convert with coremltools, Core ML takes over scheduling and doesn't give you fine-grained control over whether an op runs on the CPU, GPU, or ANE. Also, the ANE isn't really designed with transformers in mind, so most LLM inference defaults to the GPU.
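To make that concrete: the only knob coremltools exposes is a coarse `compute_units` preference for the whole model, not per-op placement. A minimal sketch (the model path here is hypothetical, and Core ML can still silently fall back to CPU/GPU):

```python
import coremltools as ct

# Load a model with a compute-unit preference. This is a model-wide hint,
# not per-layer placement: Core ML's own partitioner still decides which
# ops actually land on the ANE.
model = ct.models.MLModel(
    "MyModel.mlpackage",                       # hypothetical model path
    compute_units=ct.ComputeUnit.CPU_AND_NE,   # "prefer ANE", no guarantee
)
```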
If you want to convert models to run on the ANE, there are tools provided (a rough sketch follows the link below):
> Convert models from TensorFlow, PyTorch, and other libraries to Core ML.
https://apple.github.io/coremltools/docs-guides/index.html
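As a rough sketch of what that conversion looks like, assuming a toy PyTorch model as a stand-in for whatever you actually want to convert:

```python
import torch
import coremltools as ct

# Toy PyTorch model; placeholder for a real network.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

example = torch.rand(1, 8)
traced = torch.jit.trace(TinyNet().eval(), example)

# Convert the traced model to an ML Program; compute_units=ALL lets
# Core ML schedule across CPU/GPU/ANE as it sees fit.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("TinyNet.mlpackage")
```

Even here, whether any of it actually runs on the ANE is up to Core ML's partitioner, which is exactly the lack of control mentioned above.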