Apache MXNet moved into the Attic in 2023-09. Apache MXNet was a flexible and efficient library for Deep Learning.
(https://attic.apache.org/projects/mxnet.html)
However, that does not mean you cannot install it. If you are on Linux, perhaps download its source tarball ( https://github.com/apache/mxnet/tree/master ) and see where that takes you. I am doing the same now on my Linux box (latest Fedora).
Hi, Anon
Thanks, I know that it's been retired, but until I can figure out AI::TensorFlow, it's the best I can get. (AI::FANN is just too slow.)
I've tried the source tarball; getting it to compile is a headache in itself.
But then AI::MXNetCAPI (a prereq) can't build because the signature for MXOptimizeForBackend in c_api.h differs wildly from the parameters passed in mxnet_wrap.cxx
☹️
a real pain!
"the signature for MXOptimizeForBackend in c_api.h differs wildly from the parameters passed in mxnet_wrap.cxx"
I have solved that by updating the signature in mxnet.i to match what c_api.h declares, but I won't know whether that works until the other dependency, AI::NNVMCAPI, compiles. If that hack does not solve it, then we can try wrapping the new signature with the old one.
Right now, AI::NNVMCAPI requires tvm, as you mentioned. My latest attempt is getting tvm from the git repository (https://github.com/apache/tvm), installing Intel MKL, DNNL etc., and seeing how far I get. There are problems with the tvm compilation at the moment: I have hit https://github.com/apache/tvm/issues/16556 because I have set DNNL to ON.
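For reference, the DNNL switch being described lives in tvm's cmake configuration; something like the fragment below (option names taken from my checkout; verify them against your tvm version's cmake/config.cmake before trusting this):

```cmake
# build/config.cmake -- copied from tvm's cmake/config.cmake, then edited.
# Setting USE_DNNL ON pulls in oneDNN and is what triggers
# https://github.com/apache/tvm/issues/16556 for me.
set(USE_DNNL ON)
set(USE_MKL ON)
```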
All in all, it seems Apache has accumulated millions of lines of code so smelly with bitrot that it is unbearable. Is the Attic the right place, or the cold ground instead?
SOLVED!
Here is the short version:
MXNet up to version 1.9.1 was written for CUDA 11. It is not compatible with CUDA 12.
MXNet 2.0.0, which had CUDA 12 compatibility, was retired before it was stable.
There was a Docker container on the HUB that ran mxnet, but it's gone now
There is a Docker container that includes the cuda 11 and cudnn8 development 'stuff'
For your convenience*, here is a Dockerfile that will** build an image with working AI::MXNet 1.9.0 (1.9.1 doesn't work)
(And some tips for those who don't use Docker. I'm far from an expert, but what I've put here is at least correct at the time of writing.)
Build with docker buildx build -t perlmx .*3 The final image is 18GB and can take hours to build :-(
Run with docker run --runtime=nvidia --gpus=all -it perlmx*4 to get a root prompt. You can then also ssh -X mxnet@172.17.0.2*5*6*7 to get X11
Or run with docker run --runtime=nvidia --gpus=all -dit perlmx to detach and then ssh -X mxnet@172.17.0.2*7 *8
(If you are new to Docker, read up on the run, start, stop, ps -a and images commands, or you will lose your work, fill up all your disks, or both)
You could also use the information and magic versions below to build a bare-metal (or other VM) server if you want. I'm not going to tell you what to do.
Good Luck
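One thing worth calling out before the Dockerfile: the BUILD_MXNET step retries the build at decreasing parallelism by chaining commands with ||. Each command in the chain runs only if the previous one failed, so the first success stops the fallback. In isolation, with a stand-in function instead of make:

```shell
# `a || b || c`: run b only if a fails, run c only if b fails.
# try_build is a stand-in for `make -jN $BUILD_OPTS`, not real make.
try_build() {
  # Simulated flaky step: fails at high parallelism ("8"), succeeds otherwise.
  [ "$1" != 8 ] && echo "built with -j$1"
}
try_build 8 || try_build 2 || try_build 1
# → built with -j2
```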
# syntax=docker/dockerfile:1
# vim: filetype=dockerfile
# -*- mode: dockerfile -*-
#
# dockerfile to build working perl/mxnet with GPU support
FROM nvidia/cuda:11.1.1-cudnn8-devel AS build
WORKDIR /usr/src/mxnet
RUN <<INSTALL_CPP
apt update
export DEBIAN_FRONTEND=noninteractive
export TZ="Etc/UTC"
apt install -y build-essential git libatlas-base-dev libopencv-dev python3-opencv \
        libcurl4-openssl-dev libgtest-dev cmake swig
cd /usr/src/gtest
cmake CMakeLists.txt
make
cp lib/*.a /usr/lib
INSTALL_CPP
RUN <<BUILD_MXNET
cd /usr/src/mxnet
export BUILD_OPTS="USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1"
export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda:/usr/local/cuda-11.1/compat
git clone --branch 1.9.0 --recursive https://github.com/dmlc/mxnet .
#The Makefile isn't tuned for parallel builds, but running -j1 is _very_ slow
#In tests, -j8 seems to work anyway! But if it does fail, this will finish
#the build anyway.
#If you still aren't getting a good build, use
#make -j1 $BUILD_OPTS
#and just wait it out.
make -j8 $BUILD_OPTS || make -j2 $BUILD_OPTS || make -j1 $BUILD_OPTS
BUILD_MXNET
RUN <<PERL_MX
export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda:/usr/local/cuda-11.1/compat
cd /usr/src/mxnet/perl-package/AI-MXNetCAPI/
perl Makefile.PL
make install
cd /usr/src/mxnet/perl-package/AI-NNVMCAPI/
perl Makefile.PL
make install
cd /usr/src/mxnet/perl-package/AI-MXNet/
perl Makefile.PL
make install
PERL_MX
FROM nvidia/cuda:11.1.1-cudnn8-devel
RUN <<PREP
echo export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda:/usr/local/mxnet/lib >> /etc/profile.d/99-cuda-library-path.sh
echo export XDG_RUNTIME_DIR=/tmp/xdg_runtime_\`whoami\` >> /etc/profile.d/99-gnuplot-xdg-path.sh
echo mkdir \$XDG_RUNTIME_DIR 2\>/dev/null >> /etc/profile.d/99-gnuplot-xdg-path.sh
export DEBIAN_FRONTEND=noninteractive
apt update
apt install -y \
wget unzip sudo \
openssh-server x11-apps net-tools vim \
        libmouse-perl pdl cpanminus libgraphviz-perl libpdl-graphics-gnuplot-perl \
        libpdl-ccs-perl libfunction-parameters-perl \
        libperlio-gzip-perl libgtk2-perl
apt-get clean
cpanm -q Hash::Ordered
rm -rf /root/.cpanm
sed -i "s/^.*X11Forwarding.*$/X11Forwarding yes/" /etc/ssh/sshd_config
sed -i "s/^.*X11UseLocalhost.*$/X11UseLocalhost no/" /etc/ssh/sshd_config
grep "^X11UseLocalhost" /etc/ssh/sshd_config || echo "X11UseLocalhost no" >> /etc/ssh/sshd_config
useradd -m -s /bin/bash mxnet
echo "mxnet:mxnet" | chpasswd
touch /home/mxnet/.Xauthority
chown mxnet:mxnet /home/mxnet/.Xauthority
chmod 600 /home/mxnet/.Xauthority
adduser mxnet sudo
PREP
RUN --mount=from=build,src=/,dst=/mnt <<COPYING
mkdir -p /usr/local/mxnet/lib/
cp -p \
        /mnt/usr/src/mxnet/perl-package/AI-NNVMCAPI/blib/arch/auto/AI/NNVMCAPI/NNVMCAPI.so \
        /mnt/usr/src/mxnet/perl-package/AI-MXNetCAPI/blib/arch/auto/AI/MXNetCAPI/MXNetCAPI.so \
/mnt/usr/src/mxnet/lib/libmxnet.so \
/mnt/usr/src/mxnet/build/libpass_lib.so \
/mnt/usr/src/mxnet/build/libcustomop_lib.so \
/mnt/usr/src/mxnet/build/libcustomop_gpu_lib.so \
/mnt/usr/src/mxnet/build/libsubgraph_lib.so \
/mnt/usr/src/mxnet/build/libtransposerowsp_lib.so \
/mnt/usr/src/mxnet/build/libtransposecsr_lib.so \
/usr/local/mxnet/lib/
cp -rp /mnt/etc/alternatives/lib* /etc/alternatives/
cp -rp /mnt/usr/lib/lib* /usr/lib/
cp -rp /mnt/usr/lib/x86_64-linux-gnu /usr/lib/
cp -rp /mnt/usr/local/lib/x86_64-linux-gnu /usr/local/lib/
cp -rp /mnt/usr/local/share/man /usr/local/share/
cp -rp /mnt/usr/local/share/perl /usr/local/share/
cat <<"STARTUP" > /root/startup.sh
#!/bin/bash
echo -e "\n==== ===== =======\nPerl/MXNet startup\n==== ===== =======\n"
ln -s /dev/null /dev/raw1394 2>/dev/null
mkdir /tmp/runtime-mxnet 2>/dev/null
cat /etc/*elease
echo
ifconfig -a | perl -ne 'if (/^\s+inet\s+(\d\S+\d)\s/) { print "IP: $1\n"; }'
echo
service ssh start
echo
bash $*
STARTUP
chmod 755 /root/startup.sh
COPYING
EXPOSE 22/tcp
WORKDIR /root
ENTRYPOINT ["/root/startup.sh"]
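A note on the sshd_config lines in the PREP stage: the pattern is "rewrite the directive if any form of it exists, append it otherwise", which keeps the step idempotent across rebuilds. A minimal demonstration on a scratch file (the file contents here are made up for illustration):

```shell
# Edit-or-append, as used for sshd_config above, on a throwaway file.
cfg=$(mktemp)
printf '%s\n' '#X11Forwarding no' 'Port 22' > "$cfg"

# Rewrite any existing X11Forwarding line, commented out or not...
sed -i 's/^.*X11Forwarding.*$/X11Forwarding yes/' "$cfg"
# ...and append X11UseLocalhost only if no active directive exists yet.
grep -q '^X11UseLocalhost' "$cfg" || echo 'X11UseLocalhost no' >> "$cfg"

cat "$cfg"
# → X11Forwarding yes
# → Port 22
# → X11UseLocalhost no
rm -f "$cfg"
```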
* Enjoy. Also, I added PDL and PDL::Graphics::Gnuplot
** You need to install the NVIDIA Container Toolkit if you want to actually use your GPU.
*3 If you are new to docker: Just install docker (v23 or later) and it should just work
*4 If you don't want to use your GPU, don't have a compatible GPU or couldn't get the NVIDIA container toolkit to work, run with docker run -it perlmx or docker run -dit perlmx and make your CPU do all the things.
*5 Password is mxnet.
*6 Your IP may vary.
When you first launch the container interactively (with -it), it displays its IP address.
You can also use docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id on a running container
*7 You can also not use -X if you don't want/need X11 support.
*8 User mxnet is in sudoers, so...
FWIW: I'm running the docker service in kali linux in WSL2, with an nVidia 2080 Super. YMMV