From edff9a1302fa8afe73ae917f2d067dc83eb1c61d Mon Sep 17 00:00:00 2001
From: majorli
Date: Thu, 1 Dec 2022 09:18:08 +0000
Subject: [PATCH] =?UTF-8?q?fix=20basicVSR++=EF=BC=8CbasicVSR=EF=BC=8CLIIF?=
 =?UTF-8?q?=EF=BC=8CTTSR=EF=BC=8CTTVSR=20readme=20issues?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

link #I63WBL

Signed-off-by: majorli
---
 cv/super_resolution/basicVSR++/pytorch/README.md | 16 +++++++---------
 cv/super_resolution/basicVSR/pytorch/README.md   | 15 ++++++---------
 cv/super_resolution/liif/pytorch/README.md       | 13 +++++--------
 cv/super_resolution/ttsr/pytorch/README.md       | 15 ++++++++++-----
 cv/super_resolution/ttvsr/pytorch/README.md      | 15 ++++++---------
 5 files changed, 34 insertions(+), 40 deletions(-)

diff --git a/cv/super_resolution/basicVSR++/pytorch/README.md b/cv/super_resolution/basicVSR++/pytorch/README.md
index 382b2c315..286d34297 100755
--- a/cv/super_resolution/basicVSR++/pytorch/README.md
+++ b/cv/super_resolution/basicVSR++/pytorch/README.md
@@ -7,30 +7,28 @@ A recurrent structure is a popular framework choice for the task of video super-
 ## Step 1: Installing packages
 
 ```shell
-$ sh build_env.sh
+sh build_env.sh
 ```
 
 ## Step 2: Preparing datasets
 
+Download REDS dataset from [homepage](https://seungjunnah.github.io/Datasets/reds.html)
 ```shell
-$ cd /path/to/modelzoo/official/cv/super_resolution/basicVSR/pytorch
-
-# Download REDS
-$ mkdir -p data/REDS
-# Homepage of REDS: https://seungjunnah.github.io/Datasets/reds.html
+mkdir -p data/
+ln -s ${REDS_DATASET_PATH} data/REDS
 ```
 
 ## Step 3: Training
 
 ### One single GPU
 ```shell
-$ python3 train.py [training args]    # config file can be found in the configs directory
+python3 train.py [training args]    # config file can be found in the configs directory
 ```
 
 ### Mutiple GPUs on one machine
 ```shell
-$ bash train_dist.sh [training args]    # config file can be found in the configs directory
+bash dist_train.sh [training args]    # config file can be found in the configs directory
 ```
 
 ## Reference
-https://github.com/open-mmlab/mmediting
\ No newline at end of file
+https://github.com/open-mmlab/mmediting
diff --git a/cv/super_resolution/basicVSR/pytorch/README.md b/cv/super_resolution/basicVSR/pytorch/README.md
index b8b371196..ad7b88c07 100755
--- a/cv/super_resolution/basicVSR/pytorch/README.md
+++ b/cv/super_resolution/basicVSR/pytorch/README.md
@@ -7,30 +7,27 @@ BasicVSR is a video super-resolution pipeline including optical flow and residua
 ## Step 1: Installing packages
 
 ```shell
-$ sh build_env.sh
+sh build_env.sh
 ```
 
 ## Step 2: Preparing datasets
 
-
+Download REDS dataset from [homepage](https://seungjunnah.github.io/Datasets/reds.html)
 ```shell
-$ cd /path/to/modelzoo/official/cv/super_resolution/basicVSR/pytorch
-
-# Download REDS to data/REDS
-# Homepage of REDS: https://seungjunnah.github.io/Datasets/reds.html
-
+mkdir -p data/
+ln -s ${REDS_DATASET_PATH} data/REDS
 ```
 
 ## Step 3: Training
 
 ### One single GPU
 ```shell
-$ python3 train.py [training args]    # config file can be found in the configs directory
+python3 train.py [training args]    # config file can be found in the configs directory
 ```
 
 ### Mutiple GPUs on one machine
 ```shell
-$ bash train_dist.sh [training args]    # config file can be found in the configs directory
+bash dist_train.sh [training args]    # config file can be found in the configs directory
 ```
 
 ## Reference
diff --git a/cv/super_resolution/liif/pytorch/README.md b/cv/super_resolution/liif/pytorch/README.md
index d12da9ba6..9512dfaa5 100755
--- a/cv/super_resolution/liif/pytorch/README.md
+++ b/cv/super_resolution/liif/pytorch/README.md
@@ -7,21 +7,18 @@ How to represent an image? While the visual world is presented in a continuous m
 ## Step 1: Installing packages
 
 ```shell
-$ pip3 install -r requirements.txt
+pip3 install -r requirements.txt
 ```
 
 ## Step 2: Preparing datasets
 
 ```shell
-$ cd /path/to/modelzoo/cv/super_resolution/liif/pytorch
-
 # Download DIV2K
-$ mkdir -p data/DIV2K
+mkdir -p data/DIV2K
 # Home page: https://data.vision.ee.ethz.ch/cvl/DIV2K/
 
 # Download validation samples
-$ cd ../..
-$ mkdir -p data/test
+mkdir -p data/test
 # Home page of Set5: http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html
 # Home page of Set14: https://github.com/jbhuang0604/SelfExSR
 ```
@@ -30,12 +27,12 @@ $ mkdir -p data/test
 
 ### One single GPU
 ```shell
-$ bash train.sh [training args]    # config file can be found in the configs directory
+python3 train.py [training args]    # config file can be found in the configs directory
 ```
 
 ### Mutiple GPUs on one machine
 ```shell
-$ bash train_dist.sh [training args]    # config file can be found in the configs directory
+bash dist_train.sh [training args]    # config file can be found in the configs directory
 ```
 
 ## Results on BI-V100
diff --git a/cv/super_resolution/ttsr/pytorch/README.md b/cv/super_resolution/ttsr/pytorch/README.md
index 7198e9111..595edec58 100755
--- a/cv/super_resolution/ttsr/pytorch/README.md
+++ b/cv/super_resolution/ttsr/pytorch/README.md
@@ -14,23 +14,28 @@ pip3 install -r requirements.txt
 ## Step 2: Preparing datasets
 
 ```bash
-$ mkdir data && cd data
-# down CUFED here && Home page: https://zzutk.github.io/SRNTT-Project-Page/
+mkdir -p data/
+cd data
+# Download CUFED Dataset from [homepage](https://zzutk.github.io/SRNTT-Project-Page)
+# the folder would be like:
+data/CUFED/
+└── train
+    ├── input
+    └── ref
 ```
-
 
 ## Step 3: Training
 
 ### Multiple GPUs on one machine
 
 ```bash
-$ CUDA_VISIBLE_DEVICES=${gpu_id_1,gpu_id_2,...} bash train.sh ${num_gpus}
+CUDA_VISIBLE_DEVICES=${gpu_id_1,gpu_id_2,...} bash train.sh ${num_gpus}
 ```
 
 For example, GPU 5 and GPU 7 are available for use and you can use these two GPUs as follows:
 
 ```bash
-$ CUDA_VISIBLE_DEVICES=5,7 bash train.sh 2
+CUDA_VISIBLE_DEVICES=5,7 bash train.sh 2
 ```
 
 ## Reference
diff --git a/cv/super_resolution/ttvsr/pytorch/README.md b/cv/super_resolution/ttvsr/pytorch/README.md
index d4bd75dcb..e7cd14ff8 100755
--- a/cv/super_resolution/ttvsr/pytorch/README.md
+++ b/cv/super_resolution/ttvsr/pytorch/README.md
@@ -8,30 +8,27 @@ We proposed an approach named TTVSR to study video super-resolution by leveragin
 ## Step 1: Installing packages
 
 ```shell
-$ pip3 install -r requirements.txt
+pip3 install -r requirements.txt
 ```
 
 ## Step 2: Preparing datasets
 
-
+Download REDS dataset from [homepage](https://seungjunnah.github.io/Datasets/reds.html)
 ```shell
-$ cd /path/to/modelzoo/official/cv/super_resolution/ttvsr/pytorch
-
-# Download REDS
-$ mkdir -p data/REDS
-# Homepage of REDS: https://seungjunnah.github.io/Datasets/reds.html
+mkdir -p data/
+ln -s ${REDS_DATASET_PATH} data/REDS
 ```
 
 ## Step 3: Training
 
 ### One single GPU
 ```shell
-$ python3 train.py [training args]    # config file can be found in the configs directory
+python3 train.py [training args]    # config file can be found in the configs directory
 ```
 
 ### Mutiple GPUs on one machine
 ```shell
-$ bash train_dist.sh [training args]    # config file can be found in the configs directory
+bash dist_train.sh [training args]    # config file can be found in the configs directory
 ```
 
 ## Results on BI-V100
-- 
Gitee
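
The patched READMEs leave `[training args]` as a placeholder and only point at the `configs` directory. As a minimal sketch of what a concrete BasicVSR++ run on REDS could look like: the config filename below follows upstream mmediting naming and the `dist_train.sh <config> <num_gpus>` calling convention mirrors mmediting's `tools/dist_train.sh`; both are assumptions, not taken from this patch.

```shell
# Hypothetical config name borrowed from upstream mmediting naming; substitute a
# file that actually exists under configs/ in this repo.
python3 train.py configs/basicvsr_plusplus_c64n7_8x1_600k_reds4.py

# Assumed dist_train.sh interface (config path followed by GPU count).
bash dist_train.sh configs/basicvsr_plusplus_c64n7_8x1_600k_reds4.py 8
```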