http://www.perlmonks.org?node_id=11157929


in reply to Do you have AI::MXNet running?

Apache MXNet moved into the Attic in 2023-09. Apache MXNet was a flexible and efficient library for Deep Learning.
(https://attic.apache.org/projects/mxnet.html)

However, that does not mean you cannot install it. If you are on Linux, perhaps download its source tarball ( https://github.com/apache/mxnet/tree/master ) and see where that takes you. I am doing the same now on my Linux box (latest Fedora).

Re^2: Do you have AI::MXNet running?
by The_Dj (Beadle) on Feb 28, 2024 at 01:19 UTC

    Hi, Anon

    Thanks, I know that it's been retired, but until I can figure out AI::TensorFlow, it's the best I can get. (AI::FANN is just too slow.)


    I've tried the source tarball: getting it to compile is a headache in itself.
    But then AI::MXNetCAPI (a prereq) can't build, because the signature for MXOptimizeForBackend in c_api.h differs wildly from the parameters passed in mxnet_wrap.cxx.

    ☹️

      a real pain!

      the signature for MXOptimizeForBackend in c_api.h differs wildly from the parameters passed in mxnet_wrap.cxx

      I have solved that by updating the signature in mxnet.i to match what c_api.h has, but I won't know whether it works until the other dependency, AI::NNVMCAPI, compiles. If that hack does not solve it, then we can try wrapping the new signature with the old one.

      Right now, AI::NNVMCAPI requires tvm, as you mentioned. My latest attempt is getting tvm from the git repository (https://github.com/apache/tvm), installing Intel MKL, DNNL, etc., and seeing how it goes. There are problems right now with tvm compilation: I have hit https://github.com/apache/tvm/issues/16556 because I set DNNL to ON.
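      For anyone retracing this: TVM is configured by copying cmake/config.cmake into the build directory and editing the feature switches there. A minimal sketch of the relevant lines (flag names may differ between TVM versions, so check the config.cmake in your checkout); turning DNNL off should sidestep the codegen path that issue 16556 complains about, at the cost of the DNNL-backed ops:

```cmake
# build/config.cmake -- copied from cmake/config.cmake in the TVM tree
set(USE_LLVM ON)    # TVM needs LLVM for CPU codegen
set(USE_DNNL OFF)   # was ON when the build hit issue 16556
set(USE_MKL OFF)    # enable only if oneAPI MKL is installed and found
```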

      All in all, it seems Apache has created millions of lines of code so smelly with bitrot that it is unbearable. Is the attic the right place, or the cold ground instead?

        In case others find their way here and we don't get to the bottom of this:

        Apparently Nvidia changed MXOptimizeForBackend with the switch to CUDA 12,
        and that's what broke AI::MXNetCAPI.

        There is a beta of mxnet 2.0 available at https://archive.apache.org/dist/incubator/mxnet/2.0.0.beta1/
        But it has its own compile issues... and it's a beta, so...

        HTH someone

        It's the end of the road for me, for now. tvm does not compile for me, complaining mostly about API changes in the latest version of Intel's oneAPI (and older oneAPI versions are only available if you pay). On top of that, the idiotic decision to use cmake makes it cumbersome to try different CFLAGS/LDFLAGS settings. If you get lucky with tvm, please post your settings.
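        One note on the CFLAGS/LDFLAGS complaint: CMake does accept flag overrides at configure time, just less directly than a plain Makefile. A generic sketch (not TVM-specific; paths are examples, and a fresh build directory is needed because CMake caches these values):

```shell
rm -rf build && mkdir build && cd build
cmake .. \
  -DCMAKE_C_FLAGS="-O2 -march=native" \
  -DCMAKE_CXX_FLAGS="-O2 -march=native" \
  -DCMAKE_EXE_LINKER_FLAGS="-L/opt/intel/oneapi/mkl/latest/lib" \
  -DCMAKE_SHARED_LINKER_FLAGS="-L/opt/intel/oneapi/mkl/latest/lib"
make -j"$(nproc)"
```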