diff --git a/cv/super_resolution/basicVSR++/pytorch/README.md b/cv/super_resolution/basicVSR++/pytorch/README.md
index 382b2c31555c238293096d72c6afb97142f460fd..286d34297b8c19022aace597fd2db6034c9487f9 100755
--- a/cv/super_resolution/basicVSR++/pytorch/README.md
+++ b/cv/super_resolution/basicVSR++/pytorch/README.md
@@ -7,30 +7,28 @@ A recurrent structure is a popular framework choice for the task of video super-
 ## Step 1: Installing packages
 
 ```shell
-$ sh build_env.sh
+sh build_env.sh
 ```
 
 ## Step 2: Preparing datasets
+Download the REDS dataset from its [homepage](https://seungjunnah.github.io/Datasets/reds.html), then link it into `data/`:
 ```shell
-$ cd /path/to/modelzoo/official/cv/super_resolution/basicVSR/pytorch
-
-# Download REDS
-$ mkdir -p data/REDS
-# Homepage of REDS: https://seungjunnah.github.io/Datasets/reds.html
+mkdir -p data/
+ln -s ${REDS_DATASET_PATH} data/REDS
 ```
 
 ## Step 3: Training
 ### One single GPU
 
 ```shell
-$ python3 train.py [training args] # config file can be found in the configs directory
+python3 train.py [training args] # config files can be found in the configs directory
 ```
 
 ### Multiple GPUs on one machine
 
 ```shell
-$ bash train_dist.sh [training args] # config file can be found in the configs directory
+bash dist_train.sh [training args] # config files can be found in the configs directory
 ```
 
 ## Reference
-https://github.com/open-mmlab/mmediting
\ No newline at end of file
+https://github.com/open-mmlab/mmediting
diff --git a/cv/super_resolution/basicVSR/pytorch/README.md b/cv/super_resolution/basicVSR/pytorch/README.md
index b8b371196c79089cec8e80d2bc90d6d2ad9e2f95..ad7b88c0720de3de8f350a4fde39f4333139b38b 100755
--- a/cv/super_resolution/basicVSR/pytorch/README.md
+++ b/cv/super_resolution/basicVSR/pytorch/README.md
@@ -7,30 +7,27 @@ BasicVSR is a video super-resolution pipeline including optical flow and residua
 ## Step 1: Installing packages
 
 ```shell
-$ sh build_env.sh
+sh build_env.sh
 ```
 
 ## Step 2: Preparing datasets
-
+Download the REDS dataset from its [homepage](https://seungjunnah.github.io/Datasets/reds.html), then link it into `data/`:
 ```shell
-$ cd /path/to/modelzoo/official/cv/super_resolution/basicVSR/pytorch
-
-# Download REDS to data/REDS
-# Homepage of REDS: https://seungjunnah.github.io/Datasets/reds.html
-
+mkdir -p data/
+ln -s ${REDS_DATASET_PATH} data/REDS
 ```
 
 ## Step 3: Training
 ### One single GPU
 
 ```shell
-$ python3 train.py [training args] # config file can be found in the configs directory
+python3 train.py [training args] # config files can be found in the configs directory
 ```
 
 ### Multiple GPUs on one machine
 
 ```shell
-$ bash train_dist.sh [training args] # config file can be found in the configs directory
+bash dist_train.sh [training args] # config files can be found in the configs directory
 ```
 
 ## Reference
diff --git a/cv/super_resolution/liif/pytorch/README.md b/cv/super_resolution/liif/pytorch/README.md
index d12da9ba67a016a3a767567438cfbdda90dd9ccb..9512dfaa55ce64db1504c5f11b85167d1199f7ea 100755
--- a/cv/super_resolution/liif/pytorch/README.md
+++ b/cv/super_resolution/liif/pytorch/README.md
@@ -7,21 +7,18 @@ How to represent an image? While the visual world is presented in a continuous m
 ## Step 1: Installing packages
 
 ```shell
-$ pip3 install -r requirements.txt
+pip3 install -r requirements.txt
 ```
 
 ## Step 2: Preparing datasets
 
 ```shell
-$ cd /path/to/modelzoo/cv/super_resolution/liif/pytorch
-
 # Download DIV2K
-$ mkdir -p data/DIV2K
+mkdir -p data/DIV2K
 # Home page: https://data.vision.ee.ethz.ch/cvl/DIV2K/
 
 # Download validation samples
-$ cd ../..
-$ mkdir -p data/test
+mkdir -p data/test
 # Home page of Set5: http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html
 # Home page of Set14: https://github.com/jbhuang0604/SelfExSR
 ```
@@ -30,12 +27,12 @@ $ mkdir -p data/test
 ### One single GPU
 
 ```shell
-$ bash train.sh [training args] # config file can be found in the configs directory
+python3 train.py [training args] # config files can be found in the configs directory
 ```
 
 ### Multiple GPUs on one machine
 ```shell
-$ bash train_dist.sh [training args] # config file can be found in the configs directory
+bash dist_train.sh [training args] # config files can be found in the configs directory
 ```
 
 ## Results on BI-V100
diff --git a/cv/super_resolution/ttsr/pytorch/README.md b/cv/super_resolution/ttsr/pytorch/README.md
index 7198e9111905033091f6dc077fc890279c3cb632..595edec58824bfb8f300207aaac3a93430996f61 100755
--- a/cv/super_resolution/ttsr/pytorch/README.md
+++ b/cv/super_resolution/ttsr/pytorch/README.md
@@ -14,23 +14,28 @@ pip3 install -r requirements.txt
 ## Step 2: Preparing datasets
 
 ```bash
-$ mkdir data && cd data
-# down CUFED here && Home page: https://zzutk.github.io/SRNTT-Project-Page/
+mkdir -p data/
+cd data
+# Download the CUFED dataset from https://zzutk.github.io/SRNTT-Project-Page
+# The folder structure should look like this:
+data/CUFED/
+└── train
+    ├── input
+    └── ref
 ```
-
 
 ## Step 3: Training
 
 ### Multiple GPUs on one machine
 
 ```bash
-$ CUDA_VISIBLE_DEVICES=${gpu_id_1,gpu_id_2,...} bash train.sh ${num_gpus}
+CUDA_VISIBLE_DEVICES=${gpu_id_1,gpu_id_2,...} bash train.sh ${num_gpus}
 ```
 
 For example, GPU 5 and GPU 7 are available for use and you can use these two GPUs as follows:
 
 ```bash
-$ CUDA_VISIBLE_DEVICES=5,7 bash train.sh 2
+CUDA_VISIBLE_DEVICES=5,7 bash train.sh 2
 ```
 
 ## Reference
diff --git a/cv/super_resolution/ttvsr/pytorch/README.md b/cv/super_resolution/ttvsr/pytorch/README.md
index d4bd75dcb04877bd846d0ccef76ecb6a5b2efb9a..e7cd14ff8b38bf00516477405353f8c33c83d2bc 100755
--- a/cv/super_resolution/ttvsr/pytorch/README.md
+++ b/cv/super_resolution/ttvsr/pytorch/README.md
@@ -8,30 +8,27 @@ We proposed an approach named TTVSR to study video super-resolution by leveragin
 ## Step 1: Installing packages
 
 ```shell
-$ pip3 install -r requirements.txt
+pip3 install -r requirements.txt
 ```
 
 ## Step 2: Preparing datasets
-
+Download the REDS dataset from its [homepage](https://seungjunnah.github.io/Datasets/reds.html), then link it into `data/`:
 ```shell
-$ cd /path/to/modelzoo/official/cv/super_resolution/ttvsr/pytorch
-
-# Download REDS
-$ mkdir -p data/REDS
-# Homepage of REDS: https://seungjunnah.github.io/Datasets/reds.html
+mkdir -p data/
+ln -s ${REDS_DATASET_PATH} data/REDS
 ```
 
 ## Step 3: Training
 ### One single GPU
 
 ```shell
-$ python3 train.py [training args] # config file can be found in the configs directory
+python3 train.py [training args] # config files can be found in the configs directory
 ```
 
 ### Multiple GPUs on one machine
 
 ```shell
-$ bash train_dist.sh [training args] # config file can be found in the configs directory
+bash dist_train.sh [training args] # config files can be found in the configs directory
 ```
 
 ## Results on BI-V100
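Note: in all of the patched READMEs, Step 3 leaves `[training args]` unspecified. The sketch below shows what a complete run could look like for one of the REDS-based models; the config filename and the `dist_train.sh` argument order are assumptions to be checked against the actual `configs/` directory and script, not something stated in the READMEs.

```shell
# Hypothetical end-to-end run for BasicVSR++ (paths and arguments are illustrative).
cd cv/super_resolution/basicVSR++/pytorch
sh build_env.sh                         # Step 1: install dependencies
mkdir -p data/
ln -s /path/to/REDS data/REDS           # Step 2: link the downloaded REDS dataset
# Step 3: pick a real config from configs/ and pass the GPU count,
# assuming dist_train.sh follows the usual "<config> <num_gpus>" convention.
bash dist_train.sh configs/<your_config>.py 8
```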