
I cannot get your result #17

Open
gaotihonglucky123 opened this issue May 8, 2019 · 0 comments

Comments

gaotihonglucky123 commented May 8, 2019

I get 10% accuracy if I normalize x_train this way:

mean = [125.307, 122.95, 113.865]
std = [62.9932, 62.0087, 66.7048]
for i in range(3):
    x_train[:, :, i] = (x_train[:, :, i] - mean[i]) / std[i]
    x_test[:, :, i] = (x_test[:, :, i] - mean[i]) / std[i]

but I get 52% accuracy if I instead scale x_train this way:

x_train /= 255
x_test /= 255

I don't know why I cannot get the same result as you. Please help, thanks.
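For comparison: per-channel normalization of an (N, 32, 32, 3) CIFAR-10 batch acts on the last (channel) axis, while the loop above indexes axis 2. A minimal NumPy sketch of last-axis normalization, reusing the same mean/std constants (synthetic data stands in for CIFAR-10; this is an illustration, not code from the repository):

```python
import numpy as np

# Same per-channel statistics as above.
mean = np.array([125.307, 122.95, 113.865], dtype=np.float32)
std = np.array([62.9932, 62.0087, 66.7048], dtype=np.float32)

def normalize_nhwc(x):
    """Normalize each channel of an (N, H, W, C) batch; broadcasting
    aligns the trailing C axis with the 3-element mean/std vectors."""
    return (x.astype(np.float32) - mean) / std

# Synthetic stand-in for x_train with the CIFAR-10 shape.
batch = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype(np.float32)
normalized = normalize_nhwc(batch)
print(normalized.shape)  # (4, 32, 32, 3)
```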

My code is:
import keras
from keras import optimizers
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D,Dense, Flatten, MaxPooling2D
from keras.callbacks import LearningRateScheduler, TensorBoard

batch_size = 128
epochs = 10
iteration = 391
num_classes = 10
log_filepath = './lenet'

##kernel_initializer:?????
def build_model():
    model = Sequential()
    model.add(Conv2D(6, (5, 5), padding='valid', activation='relu', kernel_initializer='he_normal', input_shape=(32, 32, 3)))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Conv2D(16, (5, 5), padding='valid', activation='relu', kernel_initializer='he_normal'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(120, activation='relu', kernel_initializer='he_normal'))
    model.add(Dense(84, activation='relu', kernel_initializer='he_normal'))
    model.add(Dense(num_classes, activation='softmax', kernel_initializer='he_normal'))

    sgd = optimizers.SGD(lr=0.1, momentum=0.9, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

    return model

def scheduler(epoch):
    learning_rate_init = 0.02
    if epoch >= 80:
        learning_rate_init = 0.01
    if epoch >= 150:
        learning_rate_init = 0.004
    return learning_rate_init

if __name__ == '__main__':
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()  ## values ???
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    # x_train /= 255
    # x_test /= 255
    mean = [125.307, 122.95, 113.865]
    std = [62.9932, 62.0087, 66.7048]
    for i in range(3):
        x_train[:, :, i] = (x_train[:, :, i] - mean[i]) / std[i]
        x_test[:, :, i] = (x_test[:, :, i] - mean[i]) / std[i]

    model = build_model()
    print(model.summary())

    tb_cb = TensorBoard(log_dir=log_filepath, histogram_freq=0)
    change_lr = LearningRateScheduler(scheduler)
    cbks = [tb_cb, change_lr]

    model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, callbacks=cbks, validation_data=(x_test, y_test), shuffle=True)

    model.save('lenet.h5')
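One detail worth noting: with epochs = 10 the schedule above never reaches its epoch-80 and epoch-150 breakpoints, so training runs at 0.02 throughout (the LearningRateScheduler callback replaces the lr=0.1 passed to SGD at the start of each epoch). The piecewise schedule can be checked in isolation; a small sketch restating the same function:

```python
# Same piecewise-constant schedule as in the code above.
def scheduler(epoch):
    learning_rate_init = 0.02
    if epoch >= 80:
        learning_rate_init = 0.01
    if epoch >= 150:
        learning_rate_init = 0.004
    return learning_rate_init

# With epochs = 10, only the first branch is ever used.
rates = [scheduler(e) for e in (0, 9, 80, 150)]
print(rates)  # [0.02, 0.02, 0.01, 0.004]
```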