updated training resnet2060 command
anxiangsir committed Jun 23, 2021
1 parent f7007b6 commit e1300d3
Showing 1 changed file with 7 additions and 6 deletions: recognition/arcface_torch/README.md
torch >= 1.6.0

See [eval.md](docs/install.md) in docs for more details.

## Training
### 1. Single node, 8 GPUs:
```shell
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py
```
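
If fewer than 8 GPUs are available, the same launcher can be restricted to a subset of devices. The following is a hedged sketch, not part of the original README: it assumes the standard CUDA_VISIBLE_DEVICES environment variable and keeps --nproc_per_node equal to the number of visible GPUs.
```shell
# Illustrative only: train on 4 GPUs (devices 0-3) of a single node.
# --nproc_per_node must match the number of GPUs made visible below.
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py
```
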
### 2. Multiple nodes, each node 8 GPUs:
Node 0:
```shell
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr="ip1" --master_port=1234 train.py
```
Node 1:
```shell
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr="ip1" --master_port=1234 train.py
```
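
The two-node example above generalizes to any number of machines: every node runs the same command with its own --node_rank, while --master_addr and --master_port point at node 0 on all of them. The sketch below is illustrative and not from the original README; NNODES and NODE_RANK are placeholder shell variables.
```shell
# Illustrative template for one of NNODES machines (run once per node).
NNODES=2       # assumed total number of nodes
NODE_RANK=1    # assumed index of the current node, in 0..NNODES-1
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=$NNODES --node_rank=$NODE_RANK --master_addr="ip1" --master_port=1234 train.py
```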

### 3. Training resnet2060 with 8 GPUs:
```shell
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py --network r2060
```
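
The --network flag is what selects the resnet2060 backbone here; presumably other backbones shipped with the repository can be chosen the same way. In the sketch below, r100 is only an illustrative placeholder and should be replaced by a network name actually defined in the code base.
```shell
# Hedged example: picking a different backbone via --network.
# "r100" is a placeholder; substitute a network name supported by train.py.
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py --network r100
```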

## Speed Benchmark
![ArcFace training speed benchmark](https://github.com/nttstar/insightface-resources/blob/master/images/arcface_speed.png)

