Built MKL-linked NumPy on Mac
A few hours after posting the previous article, I managed to build NumPy against MKL properly.
pip-install-deeplearning.hatenadiary.jp
Symptoms
While researching how to deal with the error described in the previous post's 'How I got here' section, I kept running into discussions about 'dynamic link' this and that.
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so, 2): Library not loaded: libmkl_rt.dylib
  Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so
  Reason: image not found
Those discussions also suggested setting DYLD_LIBRARY_PATH, so I searched again and found a promising writeup.
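Before setting the variable, it can help to confirm whether the dylib is actually reachable from the directories dyld will search. A minimal sketch (the `find_dylib` helper here is hypothetical, not from the linked writeup):

```python
import os

def find_dylib(name, env_var="DYLD_LIBRARY_PATH"):
    """Return the first path under env_var's directories containing `name`, else None."""
    for d in os.environ.get(env_var, "").split(":"):
        candidate = os.path.join(d, name)
        if d and os.path.isfile(candidate):
            return candidate
    return None

# If this prints None, dyld has no way to locate MKL at import time,
# which matches the "image not found" failure above.
print(find_dylib("libmkl_rt.dylib"))
```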
github.com
Build
That writeup uses numpy-1.8.1 and Python 3.4, but the same procedure also worked with numpy-1.10.4 and Python 2.7.
Create /numpy-1.10.4/site.cfg or ~/.numpy-site.cfg with the following contents:
[mkl]
library_dirs = /opt/intel/mkl/lib
include_dirs = /opt/intel/include:/opt/intel/mkl/include
mkl_libs = mkl_rt
lapack_libs =
Set up ~/.bash_profile:
export CC=clang
export CXX=clang++
export FFLAGS=-ff2c
export PYLINK="import sys; import os; print('-L' + os.path.abspath(os.__file__ + '/../..') + ' -lpython2.' + str(sys.version_info[1]))"
export DYLD_LIBRARY_PATH="/opt/intel/lib/intel64:/opt/intel/lib:/opt/intel/mkl/lib:$DYLD_LIBRARY_PATH"
Run /numpy-1.10.4/setup.py:
python setup.py config --compiler=intelem
python setup.py build --compiler=intelem
python setup.py install
Verification
With the installation done, let's run some checks.
python -c 'import numpy; numpy.show_config()'
lapack_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/opt/intel/mkl/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/opt/intel/include', '/opt/intel/mkl/include']
blas_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/opt/intel/mkl/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/opt/intel/include', '/opt/intel/mkl/include']
openblas_lapack_info:
    NOT AVAILABLE
lapack_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/opt/intel/mkl/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/opt/intel/include', '/opt/intel/mkl/include']
blas_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/opt/intel/mkl/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/opt/intel/include', '/opt/intel/mkl/include']
mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/opt/intel/mkl/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/opt/intel/include', '/opt/intel/mkl/include']
A speed comparison makes it obvious: it is faster (faster even than the MKL-linked build that Anaconda provides).
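The post does not reproduce the benchmark itself, but a minimal timing sketch along these lines shows where MKL's BLAS pays off (the matrix size and repeat count are arbitrary choices of mine, not from the original):

```python
import time
import numpy as np

def time_matmul(n=1000, repeats=3):
    """Best-of-repeats wall time for an n x n matrix multiply,
    the classic BLAS-bound workload that MKL accelerates."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.time()
        a.dot(b)
        best = min(best, time.time() - t0)
    return best

elapsed = time_matmul()
# A dense multiply costs roughly 2*n^3 floating-point operations.
print("best of 3: %.3f s (%.2f GFLOP/s)" % (elapsed, 2.0 / elapsed))
```

Running the same script against the non-MKL build gives the baseline for comparison.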
chainer
The chainer_mnist error is also properly resolved now.
pip-install-deeplearning.hatenadiary.jp
python train_mnist.py
n_units 1000
load MNIST dataset
epoch 1
graph generated
train mean loss=0.190347706894, accuracy=0.942616669834
test mean loss=0.106343430469, accuracy=0.966100007296
epoch 2
train mean loss=0.0741488920168, accuracy=0.976766676207
test mean loss=0.086192065514, accuracy=0.974100006819
epoch 3
train mean loss=0.048918632143, accuracy=0.984400010208
test mean loss=0.0637195110742, accuracy=0.980100007057
epoch 4
train mean loss=0.0352272967775, accuracy=0.988666675289
test mean loss=0.0778577751776, accuracy=0.97710000515
epoch 5
train mean loss=0.0289251541828, accuracy=0.990183341404
test mean loss=0.0699444887941, accuracy=0.981000007987
epoch 6
train mean loss=0.0223340781424, accuracy=0.992733340065
test mean loss=0.0735415374582, accuracy=0.981900007725
epoch 7
train mean loss=0.023896622029, accuracy=0.99221667399
test mean loss=0.0738069448388, accuracy=0.98120000422
epoch 8
train mean loss=0.0157400782724, accuracy=0.994600005051
test mean loss=0.0902125522017, accuracy=0.979400005341
epoch 9
train mean loss=0.0156944541644, accuracy=0.994666671356
test mean loss=0.0669454357358, accuracy=0.983200008273
epoch 10
train mean loss=0.0164206012773, accuracy=0.994833338062
test mean loss=0.0776587059804, accuracy=0.981800007224
epoch 11
train mean loss=0.0131828821437, accuracy=0.995750004053
test mean loss=0.0715873185383, accuracy=0.984500007033
epoch 12
train mean loss=0.012000972836, accuracy=0.996083337069
test mean loss=0.0895336843703, accuracy=0.982900007963
epoch 13
train mean loss=0.0143335766766, accuracy=0.995516670843
test mean loss=0.0933588848546, accuracy=0.981100007892
epoch 14
train mean loss=0.0109565397249, accuracy=0.99610000362
test mean loss=0.10498684463, accuracy=0.97990000546
epoch 15
train mean loss=0.0104289456925, accuracy=0.997233335972
test mean loss=0.0856612050419, accuracy=0.983500004411
epoch 16
train mean loss=0.00717544781743, accuracy=0.997816668749
test mean loss=0.108521250341, accuracy=0.982600008845
epoch 17
train mean loss=0.00981602142994, accuracy=0.997100002766
test mean loss=0.0990101728047, accuracy=0.982200005651
epoch 18
train mean loss=0.0136716291517, accuracy=0.996433336735
test mean loss=0.109974410593, accuracy=0.981400005817
epoch 19
train mean loss=0.00630845916929, accuracy=0.998033335209
test mean loss=0.124410866243, accuracy=0.977700008154
epoch 20
train mean loss=0.0124550565974, accuracy=0.996566669941
test mean loss=0.106592905503, accuracy=0.981600005627
save the model
save the optimizer
Closing
I'd like to say to myself: well done.