A question about TensorFlow's EarlyStopping

2023.01.03 23:16 · 1,332 views

I'm posting this question because there is something about Early Stopping that I don't understand.


I declared the following callback, passed it to `fit()` via the `callbacks` argument, and trained the model.

early_stopping_cb = keras.callbacks.EarlyStopping(patience=6, verbose=1,
                                                  restore_best_weights=True)

Since I didn't specify `monitor`, `mode`, or `min_delta`, it monitors `val_loss`; `mode` is `auto`, and because the metric is a loss it automatically resolves to `min`. `min_delta` defaults to 0.
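For reference, the `mode='auto'` resolution described above can be sketched as follows. This is a simplified standalone illustration of the idea, not the actual Keras source: metrics whose name contains `acc` are maximized, everything else (losses included) is minimized.

```python
import operator

# Hedged sketch of EarlyStopping's mode resolution (simplified, not the real Keras code).
def resolve_monitor_op(monitor="val_loss", mode="auto"):
    if mode == "min":
        return operator.lt          # improvement means the value got smaller
    if mode == "max":
        return operator.gt          # improvement means the value got larger
    # mode == "auto": infer the direction from the metric name
    return operator.gt if "acc" in monitor else operator.lt

op = resolve_monitor_op()           # defaults: monitor="val_loss", mode="auto"
print(op is operator.lt)            # → True: val_loss is minimized
```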

The relevant part of the Keras implementation on GitHub: https://github.com/keras-team/keras/blob/master/keras/callbacks.py#L2120

Also, the docstring on GitHub says that training stops when there is no improvement over a number of consecutive epochs. However, in the log below I can't find any stretch where `val_loss` fails to decrease six times in a row, so I don't understand why training stopped. Is there some other condition that triggers Early Stopping?


Epoch 1/50
1882/1882 [==============================] - 880s 467ms/step - loss: 1.2263 - accuracy: 0.6585 - val_loss: 0.9816 - val_accuracy: 0.7237 - lr: 0.0010
Epoch 2/50
1882/1882 [==============================] - 874s 465ms/step - loss: 1.0468 - accuracy: 0.7031 - val_loss: 1.0004 - val_accuracy: 0.7237 - lr: 0.0010
Epoch 3/50
1882/1882 [==============================] - 871s 463ms/step - loss: 0.9898 - accuracy: 0.7207 - val_loss: 1.0415 - val_accuracy: 0.6707 - lr: 0.0010
Epoch 4/50
1882/1882 [==============================] - 877s 466ms/step - loss: 0.8964 - accuracy: 0.7392 - val_loss: 0.8319 - val_accuracy: 0.7752 - lr: 0.0010
Epoch 5/50
1882/1882 [==============================] - 881s 468ms/step - loss: 0.8093 - accuracy: 0.7508 - val_loss: 0.8079 - val_accuracy: 0.7399 - lr: 0.0010
Epoch 6/50
1882/1882 [==============================] - 867s 461ms/step - loss: 0.7608 - accuracy: 0.7607 - val_loss: 1.1483 - val_accuracy: 0.5458 - lr: 0.0010
Epoch 7/50
1882/1882 [==============================] - 873s 464ms/step - loss: 0.7202 - accuracy: 0.7666 - val_loss: 1.3388 - val_accuracy: 0.4969 - lr: 0.0010
Epoch 8/50
1882/1882 [==============================] - ETA: 0s - loss: 0.6887 - accuracy: 0.7738
Epoch 8: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
1882/1882 [==============================] - 870s 462ms/step - loss: 0.6887 - accuracy: 0.7738 - val_loss: 0.7496 - val_accuracy: 0.7268 - lr: 0.0010
Epoch 9/50
1882/1882 [==============================] - 850s 452ms/step - loss: 0.6010 - accuracy: 0.7933 - val_loss: 0.7776 - val_accuracy: 0.6899 - lr: 5.0000e-04
Epoch 10/50
1882/1882 [==============================] - 853s 453ms/step - loss: 0.5711 - accuracy: 0.8047 - val_loss: 0.5706 - val_accuracy: 0.7950 - lr: 5.0000e-04
Epoch 11/50
1882/1882 [==============================] - 870s 462ms/step - loss: 0.5723 - accuracy: 0.8082 - val_loss: 0.5402 - val_accuracy: 0.8257 - lr: 5.0000e-04
Epoch 12/50
1882/1882 [==============================] - 857s 455ms/step - loss: 0.5641 - accuracy: 0.8085 - val_loss: 1.2622 - val_accuracy: 0.5801 - lr: 5.0000e-04
Epoch 13/50
1882/1882 [==============================] - 853s 453ms/step - loss: 0.5574 - accuracy: 0.8182 - val_loss: 0.7001 - val_accuracy: 0.7253 - lr: 5.0000e-04
Epoch 14/50
1882/1882 [==============================] - 862s 458ms/step - loss: 0.5521 - accuracy: 0.8200 - val_loss: 0.6680 - val_accuracy: 0.7419 - lr: 5.0000e-04
Epoch 15/50
1882/1882 [==============================] - ETA: 0s - loss: 0.5398 - accuracy: 0.8269
Epoch 15: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
1882/1882 [==============================] - 840s 446ms/step - loss: 0.5398 - accuracy: 0.8269 - val_loss: 0.6545 - val_accuracy: 0.7716 - lr: 5.0000e-04
Epoch 16/50
1882/1882 [==============================] - 845s 449ms/step - loss: 0.4866 - accuracy: 0.8461 - val_loss: 0.4931 - val_accuracy: 0.8278 - lr: 2.5000e-04
Epoch 17/50
1882/1882 [==============================] - 811s 431ms/step - loss: 0.4551 - accuracy: 0.8570 - val_loss: 0.6126 - val_accuracy: 0.7825 - lr: 2.5000e-04
Epoch 18/50
1882/1882 [==============================] - 821s 436ms/step - loss: 0.4466 - accuracy: 0.8624 - val_loss: 0.5223 - val_accuracy: 0.8096 - lr: 2.5000e-04
Epoch 19/50
1882/1882 [==============================] - 816s 434ms/step - loss: 0.4344 - accuracy: 0.8661 - val_loss: 0.4747 - val_accuracy: 0.8403 - lr: 2.5000e-04
Epoch 20/50
1882/1882 [==============================] - 818s 435ms/step - loss: 0.4307 - accuracy: 0.8688 - val_loss: 0.6935 - val_accuracy: 0.7513 - lr: 2.5000e-04
Epoch 21/50
1882/1882 [==============================] - 818s 434ms/step - loss: 0.4252 - accuracy: 0.8751 - val_loss: 0.5897 - val_accuracy: 0.7924 - lr: 2.5000e-04
Epoch 22/50
1882/1882 [==============================] - 826s 439ms/step - loss: 0.4207 - accuracy: 0.8767 - val_loss: 0.4825 - val_accuracy: 0.8439 - lr: 2.5000e-04
Epoch 23/50
1882/1882 [==============================] - 822s 437ms/step - loss: 0.4204 - accuracy: 0.8779 - val_loss: 0.7381 - val_accuracy: 0.7393 - lr: 2.5000e-04
Epoch 24/50
1882/1882 [==============================] - 824s 438ms/step - loss: 0.4115 - accuracy: 0.8836 - val_loss: 0.6477 - val_accuracy: 0.7784 - lr: 2.5000e-04
Epoch 25/50
1882/1882 [==============================] - ETA: 0s - loss: 0.4154 - accuracy: 0.8838Restoring model weights from the end of the best epoch: 19.
1882/1882 [==============================] - 828s 440ms/step - loss: 0.4154 - accuracy: 0.8838 - val_loss: 0.4904 - val_accuracy: 0.8444 - lr: 2.5000e-04
Epoch 25: early stopping
Delkin
2023.01.04 09:05

As far as I know, it doesn't check for six consecutive decreases. EarlyStopping fires when there has been no improvement over the best value so far (minimum loss, or maximum accuracy) for `patience` epochs.
In your log, `val_loss` hit its minimum (0.4747) at epoch 19, and during the following `patience` window (6 epochs) it never dropped below that minimum, which is why training stopped.
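The mechanism described above can be sketched in a few lines. This is a minimal standalone illustration of the patience logic (not the actual Keras implementation): track the best monitored value seen so far, and stop once `patience` epochs pass without a new best. Feeding it the `val_loss` values from the log reproduces the stop at epoch 25 with best epoch 19.

```python
# Minimal sketch of EarlyStopping's patience logic (simplified, not the real Keras code).
def early_stop_epoch(val_losses, patience=6, min_delta=0.0):
    best = float("inf")
    best_epoch = 0
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best - min_delta:     # compared against the best so far, NOT the previous epoch
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:        # patience exhausted: stop here
                return epoch, best_epoch
    return None, best_epoch             # ran to completion without stopping

# val_loss values from the training log above, epochs 1..25
log = [0.9816, 1.0004, 1.0415, 0.8319, 0.8079, 1.1483, 1.3388,
       0.7496, 0.7776, 0.5706, 0.5402, 1.2622, 0.7001, 0.6680,
       0.6545, 0.4931, 0.6126, 0.5223, 0.4747, 0.6935, 0.5897,
       0.4825, 0.7381, 0.6477, 0.4904]
print(early_stop_epoch(log))  # → (25, 19): stopped at epoch 25, best epoch 19
```

Epochs 20 through 25 each fail to beat 0.4747, so the wait counter reaches 6 at epoch 25 and training stops, matching the "Restoring model weights from the end of the best epoch: 19" line in the log.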

Dobycode
2023.01.04 19:30

Thank you! I assumed EarlyStopping fired when epoch N repeatedly failed to improve on epoch N-1, but now I understand it compares each epoch against the best value so far. Thanks again!

지운지운
2023.01.31 13:23

I understand that early stopping is used to prevent overfitting, but if you intended to run 50 epochs and training stops after only 25, doesn't that hurt the final prediction performance? Early stopping does guard against overfitting, but with a small `patience` it seems like it could cut training short and degrade the final result... Is there something I'm misunderstanding?