Training loss is decreasing but validation loss is constant. How to avoid overfitting?


I want to retrain Google's MediaPipe hand landmark model to detect more keypoints, but the model is only available in TFLite format, which cannot be retrained.

I created a model identical to the MediaPipe hand model and trained it with my custom data, but I am facing an overfitting issue.

I am using the following setup (a minimal code sketch follows the list):

RMSprop as the optimizer

MSE (Mean Squared Error) as the loss function

batch size = 32

initial_learning_rate=1e-3

decay_steps=1000

decay_rate=0.9
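A minimal Keras sketch of this setup, assuming a generic `model` and placeholder training/validation arrays (`x_train`, `y_train`, `x_val`, `y_val`); the epoch count is illustrative, not from the question:

```python
import tensorflow as tf

# Exponential learning-rate decay with the values listed above.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=1000,
    decay_rate=0.9,
)

# `model` is a placeholder for the custom hand-landmark network.
model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=lr_schedule),
    loss="mse",  # Mean Squared Error on the landmark coordinates
)

# x_train/y_train and x_val/y_val are placeholders for the custom dataset;
# epochs=100 is an illustrative value.
history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    batch_size=32,
    epochs=100,
)
```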

[Colab screenshot of the loss curves]

The training loss has decreased to 4.3819e-04, but the validation loss is still 0.00134.

I have also tried

the Adam optimizer with

the Huber loss function

The validation loss dropped to 0.00083, but I still face the overfitting issue.
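For reference, that variant is a one-line change to the compile step in the sketch above; the learning rate and Huber delta are left at the Keras defaults, since the question does not specify them:

```python
import tensorflow as tf

# Adam + Huber variant; `model` is the same placeholder as before.
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.Huber(),  # default delta = 1.0
)
```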

I would suggest using Adam as the optimizer; it is a strong default choice. Moreover, to limit overfitting, use EarlyStopping from tensorflow.keras.callbacks and pass the validation loss as the monitored metric.

You can learn more here: Keras EarlyStopping
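A minimal sketch of that callback, reusing the placeholder `model` and data arrays from the question's setup; the patience value is illustrative:

```python
import tensorflow as tf

# Stop training when validation loss stops improving, and roll back
# to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=10,  # epochs without improvement before stopping
    restore_best_weights=True,
)

model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    batch_size=32,
    epochs=100,
    callbacks=[early_stop],
)
```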

Also experiment with the hyperparameters: try increasing the batch size, reducing the number of epochs, and so on. If possible, include more data for training as well.