Training a classification model with convolutional networks

Contributions

I changed the batch size to 64, the learning rate to 8e-5, and the dropout of some layers, obtaining an accuracy of 80.46% on the test set.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras import regularizers
import matplotlib.pyplot as plt

in_shape = x_train.shape[1:]
f_size = [32, 32, 64, 64, 128, 128]         # filters in each Conv2D layer
k_size = [3, 3, 3, 3, 3, 3]                 # kernel size per layer
r_lr = 8e-5                                 # used below as the L2 regularization factor
r_dropout = [0.0, 0.0, 0.2, 0.2, 0.2, 0.2]  # dropout per block

def def_model(in_shape, n_clases, f_size, k_size, r_lr, r_dropout):
    model = Sequential()

    # conv1
    model.add(Conv2D(f_size[0], k_size[0], padding='same', kernel_regularizer=regularizers.l2(r_lr), input_shape=in_shape))
    model.add(Activation('relu'))

    # conv2
    model.add(Conv2D(f_size[1], k_size[1], padding='same', kernel_regularizer=regularizers.l2(r_lr)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(r_dropout[1]))

    # conv3 (no pooling in this block)
    model.add(Conv2D(f_size[2], k_size[2], padding='same', kernel_regularizer=regularizers.l2(r_lr)))
    model.add(Activation('relu'))
    model.add(Dropout(r_dropout[2]))

    # conv4
    model.add(Conv2D(f_size[3], k_size[3], padding='same', kernel_regularizer=regularizers.l2(r_lr)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(r_dropout[3]))

    # conv5 (no pooling or dropout in this block)
    model.add(Conv2D(f_size[4], k_size[4], padding='same', kernel_regularizer=regularizers.l2(r_lr)))
    model.add(Activation('relu'))

    # conv6
    model.add(Conv2D(f_size[5], k_size[5], padding='same', kernel_regularizer=regularizers.l2(r_lr)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(r_dropout[5]))

    # Classification head: flatten + softmax
    model.add(Flatten())
    model.add(Dense(n_clases, activation='softmax'))
    model.summary()

    return model


model = def_model(in_shape, n_clases, f_size, k_size, r_lr, r_dropout)
print(y_train[0])  # sanity check: labels should already be one-hot encoded
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
hist = model.fit(x_train, y_train, batch_size=64, epochs=60,
                 validation_data=(x_val, y_val),
                 verbose=2, shuffle=True)
plt.plot(hist.history['accuracy'], label='Train')
plt.plot(hist.history['val_accuracy'], label='Val')
plt.legend()
plt.show()
model.evaluate(x_test, y_test)


I achieved an accuracy of 0.7799 with these changes (a sketch of them follows the list):

  • Increased the dropouts: where they were 0.2 I raised them to 0.3, and the 0.3s to 0.4, with the intention of reducing overfitting.
  • Added a Dense layer with ReLU activation before the classification layer, with num_clases*2 neurons, to give more capacity to the classification head and rely a bit less on feature extraction in the convolutional layers.
  • Changed the optimizer to Adam out of curiosity.
  • Used only 50 epochs so the experiments would run faster.
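
A minimal sketch of those changes, assuming the same Sequential model and num_clases variable from the class notebook (the extra layer size and optimizer come from the post above; everything else is an illustrative reconstruction, not the poster's full code):

# Extra Dense layer with ReLU before the softmax classifier, as described above
model.add(Flatten())
model.add(Dense(num_clases * 2, activation='relu'))
model.add(Dense(num_clases, activation='softmax'))

# Adam optimizer and 50 epochs (batch size not stated in the post; 32 assumed here)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
hist = model.fit(x_train, y_train, batch_size=32, epochs=50,
                 validation_data=(x_val, y_val), verbose=2, shuffle=True)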

I achieved an accuracy of **82.3%**; what I changed was mainly the optimizer, to Adam.

# base_filtros, w_regularizer, num_clases and x_training are defined earlier in the poster's notebook
model = Sequential()
# conv1
model.add(Conv2D(base_filtros, (3, 3), padding='same', kernel_regularizer=regularizers.l2(w_regularizer), input_shape=x_training.shape[1:]))
model.add(Activation('relu'))

# conv2
model.add(Conv2D(base_filtros, (3, 3), padding='same', kernel_regularizer=regularizers.l2(w_regularizer)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

# conv3
model.add(Conv2D(2 * base_filtros, (3, 3), padding='same', kernel_regularizer=regularizers.l2(w_regularizer)))
model.add(Activation('relu'))
model.add(Dropout(0.2))

# conv4
model.add(Conv2D(2 * base_filtros, (3, 3), padding='same', kernel_regularizer=regularizers.l2(w_regularizer)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))

# conv5
model.add(Conv2D(4 * base_filtros, (3, 3), padding='same', kernel_regularizer=regularizers.l2(w_regularizer)))
model.add(Activation('relu'))

# conv6
model.add(Conv2D(4 * base_filtros, (3, 3), padding='same', kernel_regularizer=regularizers.l2(w_regularizer)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.4))

# Classification - Flatten
model.add(Flatten())
model.add(Dense(num_clases, activation='softmax'))  # softmax because this is a multi-class classification problem

model.summary()
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])

In my experience, batch_size controls how the dataset is split into mini-batches: with batch_size=32, each training step processes a batch of 32 images, so one epoch consists of total_datos/32 batches (see the small calculation below).
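
A quick check of that arithmetic, assuming the CIFAR-10 split used in this class (45,000 training images after the validation split); the result matches the 1407 steps per epoch shown in the training log further down:

import math

n_train = 45000                                    # assumed size of the training split
batch_size = 32
steps_per_epoch = math.ceil(n_train / batch_size)
print(steps_per_epoch)                             # 1407 batches of 32 images per epoch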

I changed my optimizer to: optimizer=keras.optimizers.Adam(learning_rate=0.0001)
Yes, I added another 0.
base_filtros set to 64.
Accuracy on the test set comes out at 85.94%.
On the training set itself it is 88.79%.
On the validation set it is 86.40%.

Of course it can still be improved. I would try AveragePooling instead of Max; a sketch of both tweaks is below.
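
A minimal sketch of those two tweaks, assuming the same Sequential model from the post above (this shows where the changes would go, not the poster's exact notebook):

from tensorflow import keras
from tensorflow.keras.layers import AveragePooling2D

# Adam with an explicit, smaller learning rate instead of the plain 'Adam' string
model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(learning_rate=0.0001),
              metrics=['accuracy'])

# Idea to try next: swap MaxPooling2D for AveragePooling2D inside the conv blocks, e.g.
# model.add(AveragePooling2D(pool_size=(2, 2)))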

As a side comment, I ran the exercise on Colab and the first epoch took 8 minutes, while on Kaggle it took 28 seconds. I fell in love with Kaggle 😃

I reached 81.42 on val.
I changed the learning rate to 0.5e-4 and started with 64 filters; here is my notebook:
https://colab.research.google.com/drive/1Mj7flDtjC3wc3ds1vl_ApviwA1-rzX42?usp=sharing

I reached 80.54% accuracy on test.
My code:

base_filters = 32
w_regularizer = lambda: regularizers.l2(1e-4)

model = Sequential()
# Conv 1
model.add(Conv2D(base_filters, kernel_size=(3, 3), padding='same', kernel_regularizer=w_regularizer(), activation='relu', input_shape=x_train.shape[1:]))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Conv 2
model.add(Conv2D(base_filters, kernel_size=(3, 3), padding='same', kernel_regularizer=w_regularizer(), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# Conv 3
model.add(Conv2D(2 * base_filters, kernel_size=(3, 3), padding='same', kernel_regularizer=w_regularizer(), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# Conv 4
model.add(Conv2D(3 * base_filters, kernel_size=(3, 3), padding='same', kernel_regularizer=w_regularizer(), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# Conv 5
model.add(Conv2D(4 * base_filters, kernel_size=(3, 3), padding='same', kernel_regularizer=w_regularizer(), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.4))
# Classification - flatten
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
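
The post only shows the model definition; a minimal compile-and-train sketch to go with it, assuming the same CIFAR-10 variables (x_train, y_train, x_val, y_val, x_test, y_test) from the class notebook (optimizer, epochs and batch size here are illustrative, not the poster's exact settings):

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
hist = model.fit(x_train, y_train,
                 batch_size=32, epochs=60,            # illustrative values
                 validation_data=(x_val, y_val),
                 verbose=2, shuffle=True)
test_loss, test_acc = model.evaluate(x_test, y_test)  # the poster reports ~80.54% accuracy here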

The Kaggle notebook linked in the resources doesn't fully match the classes 😃
The differences are:
-> The notebook includes the normalization lines (In [8] and In [9]).
-> When the model is created, the notebook from the link additionally has "model.add(BatchNormalization())", which also wasn't explained in the classes.

These differences mean the code doesn't run correctly as-is; a note on BatchNormalization follows below.
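
For reference, BatchNormalization is a standard Keras layer; a minimal sketch of how it is typically placed inside one of the conv blocks from this class (an illustration of the usual placement, not the exact code from the linked notebook):

from tensorflow.keras.layers import BatchNormalization

# One conv block with batch normalization between the convolution and its activation
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())   # normalizes the previous layer's outputs per mini-batch
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))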

If you get a compatibility error between NumPy and SciPy when importing the libraries, just run this in the first cell:

!pip install --upgrade numpy
!pip install --upgrade scipy

After several attempts without satisfactory results, I decided to leave it exactly as the professor implemented it, but I changed the number of filters from 32 to 64; the result was quite acceptable.

That's right, increasing the number of filters from 32 to 64 helps improve the model.

Epoch 1/100
I0000 00:00:1710848272.152055     110 device_compiler.h:186] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
W0000 00:00:1710848272.171905     110 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
W0000 00:00:1710848284.666369     108 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
W0000 00:00:1710848285.899400     107 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
1407/1407 - 26s - 19ms/step - accuracy: 0.2482 - loss: 2.1493 - val_accuracy: 0.2264 - val_loss: 2.2124
Epoch 2/100
W0000 00:00:1710848286.956247     109 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
1407/1407 - 5s - 4ms/step - accuracy: 0.3102 - loss: 1.9185 - val_accuracy: 0.3794 - val_loss: 1.7755
Epoch 3/100
1407/1407 - 5s - 4ms/step - accuracy: 0.3714 - loss: 1.7993 - val_accuracy: 0.4160 - val_loss: 1.7351
Epoch 4/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4023 - loss: 1.7282 - val_accuracy: 0.4456 - val_loss: 1.6481
Epoch 5/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4239 - loss: 1.6863 - val_accuracy: 0.4766 - val_loss: 1.5702
Epoch 6/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4380 - loss: 1.6588 - val_accuracy: 0.3896 - val_loss: 1.8632
Epoch 7/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4504 - loss: 1.6375 - val_accuracy: 0.4698 - val_loss: 1.5869
Epoch 8/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4569 - loss: 1.6228 - val_accuracy: 0.4164 - val_loss: 1.8364
Epoch 9/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4604 - loss: 1.6150 - val_accuracy: 0.4874 - val_loss: 1.5515
Epoch 10/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4684 - loss: 1.6014 - val_accuracy: 0.4938 - val_loss: 1.5301
Epoch 11/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4697 - loss: 1.5933 - val_accuracy: 0.4450 - val_loss: 1.7061
Epoch 12/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4735 - loss: 1.5800 - val_accuracy: 0.5088 - val_loss: 1.4983
Epoch 13/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4781 - loss: 1.5770 - val_accuracy: 0.4542 - val_loss: 1.7027
Epoch 14/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4821 - loss: 1.5720 - val_accuracy: 0.4986 - val_loss: 1.5362
Epoch 15/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4846 - loss: 1.5666 - val_accuracy: 0.5208 - val_loss: 1.4637
Epoch 16/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4854 - loss: 1.5575 - val_accuracy: 0.4450 - val_loss: 1.7448
Epoch 17/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4901 - loss: 1.5546 - val_accuracy: 0.5214 - val_loss: 1.4636
Epoch 18/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4881 - loss: 1.5510 - val_accuracy: 0.4800 - val_loss: 1.5907
Epoch 19/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4899 - loss: 1.5502 - val_accuracy: 0.5176 - val_loss: 1.4785
Epoch 20/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4920 - loss: 1.5488 - val_accuracy: 0.4818 - val_loss: 1.6018
Epoch 21/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4913 - loss: 1.5451 - val_accuracy: 0.4386 - val_loss: 1.7932
Epoch 22/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4978 - loss: 1.5363 - val_accuracy: 0.5406 - val_loss: 1.4315
Epoch 23/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5008 - loss: 1.5336 - val_accuracy: 0.4828 - val_loss: 1.6409
Epoch 24/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4974 - loss: 1.5360 - val_accuracy: 0.4582 - val_loss: 1.7210
Epoch 25/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4982 - loss: 1.5370 - val_accuracy: 0.5206 - val_loss: 1.4842
Epoch 26/100
1407/1407 - 5s - 4ms/step - accuracy: 0.4981 - loss: 1.5311 - val_accuracy: 0.4332 - val_loss: 1.7971
Epoch 27/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5012 - loss: 1.5295 - val_accuracy: 0.5262 - val_loss: 1.4974
Epoch 28/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5022 - loss: 1.5287 - val_accuracy: 0.4774 - val_loss: 1.6548
Epoch 29/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5035 - loss: 1.5218 - val_accuracy: 0.5318 - val_loss: 1.4795
Epoch 30/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5063 - loss: 1.5217 - val_accuracy: 0.5270 - val_loss: 1.4761
Epoch 31/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5068 - loss: 1.5230 - val_accuracy: 0.5444 - val_loss: 1.4335
Epoch 32/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5025 - loss: 1.5221 - val_accuracy: 0.4852 - val_loss: 1.6647
Epoch 33/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5093 - loss: 1.5165 - val_accuracy: 0.5344 - val_loss: 1.4169
Epoch 34/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5093 - loss: 1.5190 - val_accuracy: 0.5376 - val_loss: 1.4532
Epoch 35/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5091 - loss: 1.5119 - val_accuracy: 0.5532 - val_loss: 1.3917
Epoch 36/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5054 - loss: 1.5154 - val_accuracy: 0.5398 - val_loss: 1.4190
Epoch 37/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5099 - loss: 1.5056 - val_accuracy: 0.5346 - val_loss: 1.4488
Epoch 38/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5083 - loss: 1.5105 - val_accuracy: 0.5322 - val_loss: 1.4568
Epoch 39/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5082 - loss: 1.5119 - val_accuracy: 0.5436 - val_loss: 1.4121
Epoch 40/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5094 - loss: 1.5115 - val_accuracy: 0.5368 - val_loss: 1.4649
Epoch 41/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5081 - loss: 1.5090 - val_accuracy: 0.5044 - val_loss: 1.4983
Epoch 42/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5096 - loss: 1.5099 - val_accuracy: 0.5380 - val_loss: 1.4268
Epoch 43/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5129 - loss: 1.5043 - val_accuracy: 0.5048 - val_loss: 1.5062
Epoch 44/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5120 - loss: 1.5050 - val_accuracy: 0.4542 - val_loss: 1.7664
Epoch 45/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5133 - loss: 1.5039 - val_accuracy: 0.5268 - val_loss: 1.4855
Epoch 46/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5106 - loss: 1.5019 - val_accuracy: 0.4148 - val_loss: 1.8604
Epoch 47/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5127 - loss: 1.4985 - val_accuracy: 0.5444 - val_loss: 1.4056
Epoch 48/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5125 - loss: 1.4966 - val_accuracy: 0.5094 - val_loss: 1.5373
Epoch 49/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5143 - loss: 1.4949 - val_accuracy: 0.5362 - val_loss: 1.4208
Epoch 50/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5143 - loss: 1.4979 - val_accuracy: 0.4796 - val_loss: 1.6288
Epoch 51/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5155 - loss: 1.4957 - val_accuracy: 0.3612 - val_loss: 2.0287
Epoch 52/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5194 - loss: 1.4905 - val_accuracy: 0.5240 - val_loss: 1.4488
Epoch 53/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5161 - loss: 1.4924 - val_accuracy: 0.4716 - val_loss: 1.6363
Epoch 54/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5167 - loss: 1.4908 - val_accuracy: 0.4650 - val_loss: 1.6495
Epoch 55/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5183 - loss: 1.4824 - val_accuracy: 0.5384 - val_loss: 1.4051
Epoch 56/100
1407/1407 - 6s - 4ms/step - accuracy: 0.5167 - loss: 1.4888 - val_accuracy: 0.5540 - val_loss: 1.3830
Epoch 57/100
1407/1407 - 6s - 4ms/step - accuracy: 0.5184 - loss: 1.4877 - val_accuracy: 0.5354 - val_loss: 1.4180
Epoch 58/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5171 - loss: 1.4887 - val_accuracy: 0.5374 - val_loss: 1.4534
Epoch 59/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5182 - loss: 1.4875 - val_accuracy: 0.5454 - val_loss: 1.4055
Epoch 60/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5167 - loss: 1.4844 - val_accuracy: 0.5474 - val_loss: 1.4222
Epoch 61/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5199 - loss: 1.4856 - val_accuracy: 0.5118 - val_loss: 1.5086
Epoch 62/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5180 - loss: 1.4885 - val_accuracy: 0.4494 - val_loss: 1.7498
Epoch 63/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5176 - loss: 1.4872 - val_accuracy: 0.5372 - val_loss: 1.4138
Epoch 64/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5198 - loss: 1.4820 - val_accuracy: 0.4934 - val_loss: 1.5140
Epoch 65/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5161 - loss: 1.4851 - val_accuracy: 0.5130 - val_loss: 1.4862
Epoch 66/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5198 - loss: 1.4810 - val_accuracy: 0.5338 - val_loss: 1.4204
Epoch 67/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5194 - loss: 1.4792 - val_accuracy: 0.4700 - val_loss: 1.6575
Epoch 68/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5160 - loss: 1.4837 - val_accuracy: 0.5084 - val_loss: 1.5050
Epoch 69/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5192 - loss: 1.4807 - val_accuracy: 0.5208 - val_loss: 1.4308
Epoch 70/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5207 - loss: 1.4807 - val_accuracy: 0.5618 - val_loss: 1.3696
Epoch 71/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5194 - loss: 1.4813 - val_accuracy: 0.5502 - val_loss: 1.4032
Epoch 72/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5198 - loss: 1.4815 - val_accuracy: 0.5394 - val_loss: 1.4091
Epoch 73/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5184 - loss: 1.4832 - val_accuracy: 0.5680 - val_loss: 1.3713
Epoch 74/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5216 - loss: 1.4796 - val_accuracy: 0.5120 - val_loss: 1.4678
Epoch 75/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5201 - loss: 1.4804 - val_accuracy: 0.5260 - val_loss: 1.4587
Epoch 76/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5187 - loss: 1.4830 - val_accuracy: 0.5396 - val_loss: 1.4391
Epoch 77/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5205 - loss: 1.4768 - val_accuracy: 0.4786 - val_loss: 1.6172
Epoch 78/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5225 - loss: 1.4762 - val_accuracy: 0.4668 - val_loss: 1.6742
Epoch 79/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5231 - loss: 1.4794 - val_accuracy: 0.5404 - val_loss: 1.4194
Epoch 80/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5201 - loss: 1.4784 - val_accuracy: 0.5312 - val_loss: 1.4833
Epoch 81/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5260 - loss: 1.4753 - val_accuracy: 0.5474 - val_loss: 1.3953
Epoch 82/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5219 - loss: 1.4781 - val_accuracy: 0.5266 - val_loss: 1.5027
Epoch 83/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5218 - loss: 1.4758 - val_accuracy: 0.4086 - val_loss: 1.7569
Epoch 84/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5234 - loss: 1.4764 - val_accuracy: 0.5572 - val_loss: 1.3855
Epoch 85/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5235 - loss: 1.4732 - val_accuracy: 0.4470 - val_loss: 1.7147
Epoch 86/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5249 - loss: 1.4753 - val_accuracy: 0.5244 - val_loss: 1.4951
Epoch 87/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5244 - loss: 1.4735 - val_accuracy: 0.5156 - val_loss: 1.4672
Epoch 88/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5220 - loss: 1.4754 - val_accuracy: 0.5372 - val_loss: 1.4185
Epoch 89/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5213 - loss: 1.4695 - val_accuracy: 0.5052 - val_loss: 1.5248
Epoch 90/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5232 - loss: 1.4744 - val_accuracy: 0.5506 - val_loss: 1.4055
Epoch 91/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5229 - loss: 1.4714 - val_accuracy: 0.5144 - val_loss: 1.5199
Epoch 92/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5244 - loss: 1.4735 - val_accuracy: 0.5388 - val_loss: 1.4067
Epoch 93/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5228 - loss: 1.4743 - val_accuracy: 0.4720 - val_loss: 1.6866
Epoch 94/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5232 - loss: 1.4692 - val_accuracy: 0.4412 - val_loss: 1.7639
Epoch 95/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5230 - loss: 1.4752 - val_accuracy: 0.5200 - val_loss: 1.4558
Epoch 96/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5280 - loss: 1.4664 - val_accuracy: 0.5402 - val_loss: 1.4011
Epoch 97/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5252 - loss: 1.4694 - val_accuracy: 0.5694 - val_loss: 1.3483
Epoch 98/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5222 - loss: 1.4704 - val_accuracy: 0.5556 - val_loss: 1.3751
Epoch 99/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5265 - loss: 1.4664 - val_accuracy: 0.4704 - val_loss: 1.5755
Epoch 100/100
1407/1407 - 5s - 4ms/step - accuracy: 0.5221 - loss: 1.4705 - val_accuracy: 0.5398 - val_loss: 1.3948
313/313 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - accuracy: 0.5346 - loss: 1.4227
[1.4282184839248657, 0.5346999764442444]

Excellent. I reached 85.22% val_accuracy 😄

Any ideas for improving the val curve?