Part 1: Create the conda environment for the project
(base) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ conda create -n Captioning_Zoo python=3.8
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.10.1
latest version: 4.11.0
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/ren2/anaconda3/envs/Captioning_Zoo
added / updated specs:
- python=3.8
The following NEW packages will be INSTALLED:
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
_openmp_mutex pkgs/main/linux-64::_openmp_mutex-4.5-1_gnu
ca-certificates pkgs/main/linux-64::ca-certificates-2021.10.26-h06a4308_2
certifi pkgs/main/linux-64::certifi-2021.10.8-py38h06a4308_0
ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.35.1-h7274673_9
libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.3.0-h5101ec6_17
libgomp pkgs/main/linux-64::libgomp-9.3.0-h5101ec6_17
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.3.0-hd4cf53a_17
ncurses pkgs/main/linux-64::ncurses-6.3-h7f8727e_2
openssl pkgs/main/linux-64::openssl-1.1.1l-h7f8727e_0
pip pkgs/main/linux-64::pip-21.2.4-py38h06a4308_0
python pkgs/main/linux-64::python-3.8.12-h12debd9_0
readline pkgs/main/linux-64::readline-8.1-h27cfd23_0
setuptools pkgs/main/linux-64::setuptools-58.0.4-py38h06a4308_0
sqlite pkgs/main/linux-64::sqlite-3.36.0-hc218d9a_0
tk pkgs/main/linux-64::tk-8.6.11-h1ccaba5_0
wheel pkgs/main/noarch::wheel-0.37.0-pyhd3eb1b0_1
xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0
zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3
Proceed ([y]/n)?
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate Captioning_Zoo
#
# To deactivate an active environment, use
#
# $ conda deactivate
(base) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ conda activate Captioning_Zoo
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ conda list
# packages in environment at /home/ren2/anaconda3/envs/Captioning_Zoo:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
ca-certificates 2021.10.26 h06a4308_2
certifi 2021.10.8 py38h06a4308_0
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgomp 9.3.0 h5101ec6_17
libstdcxx-ng 9.3.0 hd4cf53a_17
ncurses 6.3 h7f8727e_2
openssl 1.1.1l h7f8727e_0
pip 21.2.4 py38h06a4308_0
python 3.8.12 h12debd9_0
readline 8.1 h27cfd23_0
setuptools 58.0.4 py38h06a4308_0
sqlite 3.36.0 hc218d9a_0
tk 8.6.11 h1ccaba5_0
wheel 0.37.0 pyhd3eb1b0_1
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.10.1
latest version: 4.11.0
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/ren2/anaconda3/envs/Captioning_Zoo
added / updated specs:
- cudatoolkit=11.3
- pytorch
- torchaudio
- torchvision
The following packages will be downloaded:
package | build
---------------------------|-----------------
torchaudio-0.10.0 | py38_cu113 4.5 MB pytorch
------------------------------------------------------------
Total: 4.5 MB
The following NEW packages will be INSTALLED:
blas pkgs/main/linux-64::blas-1.0-mkl
bzip2 pkgs/main/linux-64::bzip2-1.0.8-h7b6447c_0
cudatoolkit pkgs/main/linux-64::cudatoolkit-11.3.1-h2bc3f7f_2
ffmpeg pytorch/linux-64::ffmpeg-4.3-hf484d3e_0
freetype pkgs/main/linux-64::freetype-2.11.0-h70c0345_0
giflib pkgs/main/linux-64::giflib-5.2.1-h7b6447c_0
gmp pkgs/main/linux-64::gmp-6.2.1-h2531618_2
gnutls pkgs/main/linux-64::gnutls-3.6.15-he1e5248_0
intel-openmp pkgs/main/linux-64::intel-openmp-2021.4.0-h06a4308_3561
jpeg pkgs/main/linux-64::jpeg-9d-h7f8727e_0
lame pkgs/main/linux-64::lame-3.100-h7b6447c_0
lcms2 pkgs/main/linux-64::lcms2-2.12-h3be6417_0
libiconv pkgs/main/linux-64::libiconv-1.15-h63c8f33_5
libidn2 pkgs/main/linux-64::libidn2-2.3.2-h7f8727e_0
libpng pkgs/main/linux-64::libpng-1.6.37-hbc83047_0
libtasn1 pkgs/main/linux-64::libtasn1-4.16.0-h27cfd23_0
libtiff pkgs/main/linux-64::libtiff-4.2.0-h85742a9_0
libunistring pkgs/main/linux-64::libunistring-0.9.10-h27cfd23_0
libuv pkgs/main/linux-64::libuv-1.40.0-h7b6447c_0
libwebp pkgs/main/linux-64::libwebp-1.2.0-h89dd481_0
libwebp-base pkgs/main/linux-64::libwebp-base-1.2.0-h27cfd23_0
lz4-c pkgs/main/linux-64::lz4-c-1.9.3-h295c915_1
mkl pkgs/main/linux-64::mkl-2021.4.0-h06a4308_640
mkl-service pkgs/main/linux-64::mkl-service-2.4.0-py38h7f8727e_0
mkl_fft pkgs/main/linux-64::mkl_fft-1.3.1-py38hd3c417c_0
mkl_random pkgs/main/linux-64::mkl_random-1.2.2-py38h51133e4_0
nettle pkgs/main/linux-64::nettle-3.7.3-hbbd107a_1
numpy pkgs/main/linux-64::numpy-1.21.2-py38h20f2e39_0
numpy-base pkgs/main/linux-64::numpy-base-1.21.2-py38h79a1101_0
olefile pkgs/main/noarch::olefile-0.46-pyhd3eb1b0_0
openh264 pkgs/main/linux-64::openh264-2.1.0-hd408876_0
pillow pkgs/main/linux-64::pillow-8.4.0-py38h5aabda8_0
pytorch pytorch/linux-64::pytorch-1.10.0-py3.8_cuda11.3_cudnn8.2.0_0
pytorch-mutex pytorch/noarch::pytorch-mutex-1.0-cuda
six pkgs/main/noarch::six-1.16.0-pyhd3eb1b0_0
torchaudio pytorch/linux-64::torchaudio-0.10.0-py38_cu113
torchvision pytorch/linux-64::torchvision-0.11.1-py38_cu113
typing_extensions pkgs/main/noarch::typing_extensions-3.10.0.2-pyh06a4308_0
zstd pkgs/main/linux-64::zstd-1.4.9-haebb681_0
Proceed ([y]/n)?
Downloading and Extracting Packages
torchaudio-0.10.0 | 4.5 MB | ################################################################################################################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: - By downloading and using the CUDA Toolkit conda packages, you accept the terms and conditions of the CUDA End User License Agreement (EULA): https://docs.nvidia.com/cuda/eula/index.html
done
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ conda list
# packages in environment at /home/ren2/anaconda3/envs/Captioning_Zoo:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
blas 1.0 mkl
bzip2 1.0.8 h7b6447c_0
ca-certificates 2021.10.26 h06a4308_2
certifi 2021.10.8 py38h06a4308_0
cudatoolkit 11.3.1 h2bc3f7f_2
ffmpeg 4.3 hf484d3e_0 pytorch
freetype 2.11.0 h70c0345_0
giflib 5.2.1 h7b6447c_0
gmp 6.2.1 h2531618_2
gnutls 3.6.15 he1e5248_0
intel-openmp 2021.4.0 h06a4308_3561
jpeg 9d h7f8727e_0
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgomp 9.3.0 h5101ec6_17
libiconv 1.15 h63c8f33_5
libidn2 2.3.2 h7f8727e_0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.3.0 hd4cf53a_17
libtasn1 4.16.0 h27cfd23_0
libtiff 4.2.0 h85742a9_0
libunistring 0.9.10 h27cfd23_0
libuv 1.40.0 h7b6447c_0
libwebp 1.2.0 h89dd481_0
libwebp-base 1.2.0 h27cfd23_0
lz4-c 1.9.3 h295c915_1
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py38h7f8727e_0
mkl_fft 1.3.1 py38hd3c417c_0
mkl_random 1.2.2 py38h51133e4_0
ncurses 6.3 h7f8727e_2
nettle 3.7.3 hbbd107a_1
numpy 1.21.2 py38h20f2e39_0
numpy-base 1.21.2 py38h79a1101_0
olefile 0.46 pyhd3eb1b0_0
openh264 2.1.0 hd408876_0
openssl 1.1.1l h7f8727e_0
pillow 8.4.0 py38h5aabda8_0
pip 21.2.4 py38h06a4308_0
python 3.8.12 h12debd9_0
pytorch 1.10.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
pytorch-mutex 1.0 cuda pytorch
readline 8.1 h27cfd23_0
setuptools 58.0.4 py38h06a4308_0
six 1.16.0 pyhd3eb1b0_0
sqlite 3.36.0 hc218d9a_0
tk 8.6.11 h1ccaba5_0
torchaudio 0.10.0 py38_cu113 pytorch
torchvision 0.11.1 py38_cu113 pytorch
typing_extensions 3.10.0.2 pyh06a4308_0
wheel 0.37.0 pyhd3eb1b0_1
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
zstd 1.4.9 haebb681_0
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Traceback (most recent call last):
File "tools/train.py", line 8, in <module>
from torch.utils.tensorboard import SummaryWriter
File "/home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py", line 1, in <module>
import tensorboard
ModuleNotFoundError: No module named 'tensorboard'
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ pip install tensorboard
Collecting tensorboard
Using cached tensorboard-2.7.0-py3-none-any.whl (5.8 MB)
Collecting markdown>=2.6.8
Using cached Markdown-3.3.6-py3-none-any.whl (97 kB)
Collecting werkzeug>=0.11.15
Using cached Werkzeug-2.0.2-py3-none-any.whl (288 kB)
Collecting absl-py>=0.4
Using cached absl_py-1.0.0-py3-none-any.whl (126 kB)
Requirement already satisfied: setuptools>=41.0.0 in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from tensorboard) (58.0.4)
Collecting tensorboard-data-server<0.7.0,>=0.6.0
Using cached tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl (4.9 MB)
Collecting google-auth<3,>=1.6.3
Using cached google_auth-2.3.3-py2.py3-none-any.whl (155 kB)
Collecting requests<3,>=2.21.0
Using cached requests-2.26.0-py2.py3-none-any.whl (62 kB)
Collecting protobuf>=3.6.0
Using cached protobuf-3.19.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
Collecting google-auth-oauthlib<0.5,>=0.4.1
Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
Collecting tensorboard-plugin-wit>=1.6.0
Using cached tensorboard_plugin_wit-1.8.0-py3-none-any.whl (781 kB)
Requirement already satisfied: numpy>=1.12.0 in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from tensorboard) (1.21.2)
Collecting grpcio>=1.24.3
Downloading grpcio-1.42.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.0 MB)
|████████████████████████████████| 4.0 MB 10.5 MB/s
Requirement already satisfied: wheel>=0.26 in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from tensorboard) (0.37.0)
Requirement already satisfied: six in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from absl-py>=0.4->tensorboard) (1.16.0)
Collecting rsa<5,>=3.1.4
Using cached rsa-4.8-py3-none-any.whl (39 kB)
Collecting pyasn1-modules>=0.2.1
Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting cachetools<5.0,>=2.0.0
Using cached cachetools-4.2.4-py3-none-any.whl (10 kB)
Collecting requests-oauthlib>=0.7.0
Using cached requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
Collecting importlib-metadata>=4.4
Using cached importlib_metadata-4.8.2-py3-none-any.whl (17 kB)
Collecting zipp>=0.5
Using cached zipp-3.6.0-py3-none-any.whl (5.3 kB)
Collecting pyasn1<0.5.0,>=0.4.6
Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Requirement already satisfied: certifi>=2017.4.17 in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard) (2021.10.8)
Collecting charset-normalizer~=2.0.0
Using cached charset_normalizer-2.0.9-py3-none-any.whl (39 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.7-py2.py3-none-any.whl (138 kB)
Collecting idna<4,>=2.5
Using cached idna-3.3-py3-none-any.whl (61 kB)
Collecting oauthlib>=3.0.0
Using cached oauthlib-3.1.1-py2.py3-none-any.whl (146 kB)
Installing collected packages: urllib3, pyasn1, idna, charset-normalizer, zipp, rsa, requests, pyasn1-modules, oauthlib, cachetools, requests-oauthlib, importlib-metadata, google-auth, werkzeug, tensorboard-plugin-wit, tensorboard-data-server, protobuf, markdown, grpcio, google-auth-oauthlib, absl-py, tensorboard
Successfully installed absl-py-1.0.0 cachetools-4.2.4 charset-normalizer-2.0.9 google-auth-2.3.3 google-auth-oauthlib-0.4.6 grpcio-1.42.0 idna-3.3 importlib-metadata-4.8.2 markdown-3.3.6 oauthlib-3.1.1 protobuf-3.19.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.26.0 requests-oauthlib-1.3.0 rsa-4.8 tensorboard-2.7.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.0 urllib3-1.26.7 werkzeug-2.0.2 zipp-3.6.0
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Traceback (most recent call last):
File "tools/train.py", line 18, in <module>
import captioning.utils.opts as opts
ModuleNotFoundError: No module named 'captioning'
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python -m pip install -e .
Obtaining file:///home/ren2/data2/mengya/mengya_code/main_ImageCaptioning.pytorch
Installing collected packages: captioning
Running setup.py develop for captioning
Successfully installed captioning-0.0.1
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Hugginface transformers not installed; please visit https://github.com/huggingface/transformers
meshed-memory-transformer not installed; please run `pip install git+https://github.com/ruotianluo/meshed-memory-transformer.git`
Traceback (most recent call last):
File "tools/train.py", line 20, in <module>
from captioning.data.dataloader import *
File "/home/ren2/data2/mengya/mengya_code/main_ImageCaptioning.pytorch/captioning/data/dataloader.py", line 6, in <module>
import h5py
ModuleNotFoundError: No module named 'h5py'
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ pip install h5py
Collecting h5py
Downloading h5py-3.6.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.5 MB)
|████████████████████████████████| 4.5 MB 11.0 MB/s
Requirement already satisfied: numpy>=1.14.5 in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from h5py) (1.21.2)
Installing collected packages: h5py
Successfully installed h5py-3.6.0
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Hugginface transformers not installed; please visit https://github.com/huggingface/transformers
meshed-memory-transformer not installed; please run `pip install git+https://github.com/ruotianluo/meshed-memory-transformer.git`
Traceback (most recent call last):
File "tools/train.py", line 20, in <module>
from captioning.data.dataloader import *
File "/home/ren2/data2/mengya/mengya_code/main_ImageCaptioning.pytorch/captioning/data/dataloader.py", line 7, in <module>
from lmdbdict import lmdbdict
ModuleNotFoundError: No module named 'lmdbdict'
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ pip install lmdbdict
Collecting lmdbdict
Using cached lmdbdict-0.2.2-py3-none-any.whl (6.0 kB)
Collecting lmdb
Downloading lmdb-1.2.1-cp38-cp38-manylinux2010_x86_64.whl (306 kB)
|████████████████████████████████| 306 kB 11.4 MB/s
Installing collected packages: lmdb, lmdbdict
Successfully installed lmdb-1.2.1 lmdbdict-0.2.2
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Hugginface transformers not installed; please visit https://github.com/huggingface/transformers
meshed-memory-transformer not installed; please run `pip install git+https://github.com/ruotianluo/meshed-memory-transformer.git`
Traceback (most recent call last):
File "tools/train.py", line 21, in <module>
import skimage.io
ModuleNotFoundError: No module named 'skimage'
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ pip install scikit-image
Collecting scikit-image
Downloading scikit_image-0.19.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.7 MB)
|████████████████████████████████| 60.7 MB 10.5 MB/s
Requirement already satisfied: numpy>=1.17.0 in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from scikit-image) (1.21.2)
Requirement already satisfied: pillow!=7.1.0,!=7.1.1,!=8.3.0,>=6.1.0 in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from scikit-image) (8.4.0)
Collecting imageio>=2.4.1
Using cached imageio-2.13.3-py3-none-any.whl (3.3 MB)
Collecting tifffile>=2019.7.26
Downloading tifffile-2021.11.2-py3-none-any.whl (178 kB)
|████████████████████████████████| 178 kB 11.1 MB/s
Collecting networkx>=2.2
Using cached networkx-2.6.3-py3-none-any.whl (1.9 MB)
Collecting scipy>=1.4.1
Downloading scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB)
|████████████████████████████████| 39.3 MB 12.2 MB/s
Collecting packaging>=20.0
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting PyWavelets>=1.1.1
Downloading PyWavelets-1.2.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (6.3 MB)
|████████████████████████████████| 6.3 MB 11.0 MB/s
Collecting pyparsing!=3.0.5,>=2.0.2
Using cached pyparsing-3.0.6-py3-none-any.whl (97 kB)
Installing collected packages: pyparsing, tifffile, scipy, PyWavelets, packaging, networkx, imageio, scikit-image
Successfully installed PyWavelets-1.2.0 imageio-2.13.3 networkx-2.6.3 packaging-21.3 pyparsing-3.0.6 scikit-image-0.19.0 scipy-1.7.3 tifffile-2021.11.2
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Hugginface transformers not installed; please visit https://github.com/huggingface/transformers
meshed-memory-transformer not installed; please run `pip install git+https://github.com/ruotianluo/meshed-memory-transformer.git`
Warning: coco-caption not available
Traceback (most recent call last):
File "tools/train.py", line 295, in <module>
opt = opts.parse_opt()
File "/home/ren2/data2/mengya/mengya_code/main_ImageCaptioning.pytorch/captioning/utils/opts.py", line 232, in parse_opt
from .config import CfgNode
File "/home/ren2/data2/mengya/mengya_code/main_ImageCaptioning.pytorch/captioning/utils/config.py", line 7, in <module>
import yaml
ModuleNotFoundError: No module named 'yaml'
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ pip install pyyaml
Collecting pyyaml
Downloading PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (701 kB)
|████████████████████████████████| 701 kB 11.0 MB/s
Installing collected packages: pyyaml
Successfully installed pyyaml-6.0
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Hugginface transformers not installed; please visit https://github.com/huggingface/transformers
meshed-memory-transformer not installed; please run `pip install git+https://github.com/ruotianluo/meshed-memory-transformer.git`
Warning: coco-caption not available
Traceback (most recent call last):
File "tools/train.py", line 295, in <module>
opt = opts.parse_opt()
File "/home/ren2/data2/mengya/mengya_code/main_ImageCaptioning.pytorch/captioning/utils/opts.py", line 232, in parse_opt
from .config import CfgNode
File "/home/ren2/data2/mengya/mengya_code/main_ImageCaptioning.pytorch/captioning/utils/config.py", line 8, in <module>
from yacs.config import CfgNode as _CfgNode
ModuleNotFoundError: No module named 'yacs'
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ pip install yacs
Collecting yacs
Using cached yacs-0.1.8-py3-none-any.whl (14 kB)
Requirement already satisfied: PyYAML in /home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages (from yacs) (6.0)
Installing collected packages: yacs
Successfully installed yacs-0.1.8
(Captioning_Zoo) ren2@Ren2:~/data2/mengya/mengya_code/main_ImageCaptioning.pytorch$ python tools/train.py --cfg configs/fc.yml --id fc
Hugginface transformers not installed; please visit https://github.com/huggingface/transformers
meshed-memory-transformer not installed; please run `pip install git+https://github.com/ruotianluo/meshed-memory-transformer.git`
Warning: coco-caption not available
DataLoader loading json file: data/cocotalk.json
vocab size is 9487
DataLoader loading h5 file: data/cocotalk_fc data/cocotalk_att data/cocotalk_box data/cocotalk_label.h5
max sequence length in data is 16
read 123287 image features
assigned 113287 images to split train
assigned 5000 images to split val
assigned 5000 images to split test
Read data: 0.00609278678894043
/home/ren2/anaconda3/envs/Captioning_Zoo/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
iter 0 (epoch 0), train_loss = 9.191, time/batch = 5.047
Read data: 0.00029587745666503906
iter 1 (epoch 0), train_loss = 9.060, time/batch = 0.073
Read data: 0.00022864341735839844
iter 2 (epoch 0), train_loss = 8.898, time/batch = 0.075
Read data: 0.00017642974853515625
iter 3 (epoch 0), train_loss = 8.774, time/batch = 0.065
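Training now starts successfully. To avoid repeating the install-run-fail loop above, the missing packages that were discovered one at a time can be installed up front. A sketch (package names taken directly from the errors hit above):

```shell
# Extra runtime deps hit one-by-one during the first runs of tools/train.py
pip install tensorboard h5py lmdbdict scikit-image pyyaml yacs

# Install the repo itself in editable mode so `import captioning` resolves
python -m pip install -e .
```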
=====================================================
Part 2: Missing dependencies for other captioning models
pip install matplotlib
pip install -U gensim
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/ruotianluo/meshed-memory-transformer.git
FileNotFoundError: [Errno 2] No such file or directory: '/coco-caption/pycocoevalcap/wmd/data/GoogleNews-vectors-negative300.bin'
You can download the word2vec model from: https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz
(Solution from https://github.com/UVa-NLP/VMASK/issues/3 and https://stackoverflow.com/questions/46433778/import-googlenews-vectors-negative300-bin)
wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
This downloads the gzip-compressed file, which you can then decompress:
gzip -d GoogleNews-vectors-negative300.bin.gz
You can then load the word vectors in Python:
from gensim import models
w = models.KeyedVectors.load_word2vec_format('../GoogleNews-vectors-negative300.bin', binary=True)
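As an aside, the .bin file uses the standard word2vec binary layout: an ASCII header line "<vocab_size> <vector_size>", then each word followed by a space and its vector as raw little-endian float32s. Below is a minimal stdlib-only sketch that writes and re-reads a toy file in this layout; the words and values are made up purely for illustration:

```python
import struct
import io

def write_w2v_bin(buf, vectors):
    """Write a dict {word: [floats]} in word2vec binary format."""
    dim = len(next(iter(vectors.values())))
    # Header: "<vocab_size> <vector_size>\n" in ASCII
    buf.write(f"{len(vectors)} {dim}\n".encode("utf-8"))
    for word, vec in vectors.items():
        # Each entry: word, a space, then dim little-endian float32s
        buf.write(word.encode("utf-8") + b" ")
        buf.write(struct.pack(f"<{dim}f", *vec))
        buf.write(b"\n")

def read_w2v_bin(buf):
    """Read the same layout back into a dict {word: tuple_of_floats}."""
    vocab_size, dim = map(int, buf.readline().split())
    out = {}
    for _ in range(vocab_size):
        word = bytearray()
        while (c := buf.read(1)) != b" ":
            word.extend(c)
        vec = struct.unpack(f"<{dim}f", buf.read(4 * dim))
        buf.read(1)  # consume trailing newline
        out[word.decode("utf-8")] = vec
    return out

buf = io.BytesIO()
write_w2v_bin(buf, {"king": [0.1, 0.2, 0.3], "queen": [0.4, 0.5, 0.6]})
buf.seek(0)
vecs = read_w2v_bin(buf)
```

This is why `binary=True` must be passed to `load_word2vec_format` for the GoogleNews file: without it, gensim would try to parse the raw float32 bytes as text.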
=======================================================
Part 3: Evaluation metric: SPICE metric issue
Solution:
cd path/coco-caption
./get_stanford_models.sh
Refer to https://github.com/tylin/coco-caption
1) You will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE.
To do this, run: ./get_stanford_models.sh
2) Note: SPICE will try to create a cache of parsed sentences in ./pycocoevalcap/spice/cache/.
This dramatically speeds up repeated evaluations. The cache directory can be moved by setting 'CACHE_DIR' in ./pycocoevalcap/spice. In the same file, caching can be turned off by removing the '-cache' argument to 'spice_cmd'.
ModuleNotFoundError: No module named 'pyemd'
conda install -c conda-forge pyemd
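The SPICE and pyemd fixes above can be consolidated into a short setup sequence (the coco-caption path is a placeholder for your own checkout):

```shell
cd path/to/coco-caption             # placeholder path to your coco-caption checkout
./get_stanford_models.sh            # fetch Stanford CoreNLP 3.6.0 models used by SPICE
conda install -c conda-forge pyemd  # fixes: ModuleNotFoundError: No module named 'pyemd'
```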
Some solutions from other people (though these causes did not apply in my case):
https://blog.csdn.net/sunny0722/article/details/119946804
https://blog.csdn.net/bit_coders/article/details/120840271
https://github.com/jiasenlu/NeuralBabyTalk/issues/9