Coding Assignment 4: Implementing a Convolutional Neural Network for CIFAR-10 using Keras
July 28, 2024
1 Overview
In this assignment, you will implement a Convolutional Neural Network (CNN) to classify images from the CIFAR-10 dataset using the Keras library. Follow the instructions step-by-step to build, compile, train, and evaluate your model.
2 Instructions
2.1 Step 1: Import Required Libraries
Import the necessary libraries: numpy, matplotlib.pyplot, and the required modules from keras. Refer to the Keras documentation for more details.
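For reference, an import block consistent with the steps below might look like the following (exact module paths can differ by Keras/TensorFlow version; on some installations these live under tensorflow.keras instead):

```python
import numpy as np
import matplotlib.pyplot as plt

# On some installations these modules live under tensorflow.keras instead.
from keras.datasets import cifar10
from keras.utils import to_categorical
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import (Conv2D, BatchNormalization, MaxPooling2D,
                          Dropout, Flatten, Dense)
from keras.callbacks import (LearningRateScheduler, ReduceLROnPlateau,
                             EarlyStopping)
```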
2.2 Step 2: Load and Preprocess the Dataset
• Load the CIFAR-10 dataset using keras.datasets.cifar10.load_data().
• Normalize the images by converting pixel values to the range [0, 1]. This can be done by casting the image data to float32 and dividing by 255.0.
• One-hot encode the labels using the appropriate function from keras.utils.
• Refer to the Keras dataset documentation for more details.
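A sketch of this step is shown below. The preprocess helper is an illustrative name, not part of the Keras API; note that load_data() downloads the dataset (about 170 MB) on first use:

```python
import numpy as np
from keras.datasets import cifar10
from keras.utils import to_categorical

def preprocess(images, labels, num_classes=10):
    """Scale uint8 pixels to [0, 1] and one-hot encode integer labels."""
    images = images.astype("float32") / 255.0
    labels = to_categorical(labels, num_classes)
    return images, labels

# load_data() downloads CIFAR-10 on first use (~170 MB).
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images, train_labels = preprocess(train_images, train_labels)
test_images, test_labels = preprocess(test_images, test_labels)
```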
2.3 Step 3: Data Augmentation
• Implement data augmentation using ImageDataGenerator from keras.preprocessing.image.
• Set parameters for the data augmentation:
– rotation_range=15: Specify the degree range for random rotations.
– width_shift_range=0.1: Specify the fraction of total width for horizontal shift.
– height_shift_range=0.1: Specify the fraction of total height for vertical shift.
– horizontal_flip=True: Set to True to randomly flip inputs horizontally.
• Fit the data generator on the training images using .fit(train_images).
• Refer to the Keras ImageDataGenerator documentation for more details.
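The augmentation setup above can be sketched as follows. A small random array stands in for the normalized training set from Step 2, purely so the snippet is self-contained:

```python
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,       # random rotations up to +/-15 degrees
    width_shift_range=0.1,   # horizontal shift up to 10% of image width
    height_shift_range=0.1,  # vertical shift up to 10% of image height
    horizontal_flip=True,    # random horizontal flips
)

# Stand-in for the normalized training images from Step 2 (illustrative only).
train_images = np.random.rand(8, 32, 32, 3).astype("float32")
datagen.fit(train_images)
```

.fit() is only strictly required for feature-wise statistics, but calling it as the assignment asks is harmless.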
2.4 Step 4: Define the Learning Rate Scheduler
• Implement a learning rate scheduler using LearningRateScheduler from keras.callbacks.
• Define a function lr_schedule that adjusts the learning rate at specific epochs. The function should reduce the learning rate by half after epoch 3 and after epoch 6.
• Pass the schedule function to LearningRateScheduler(lr_schedule).
• Refer to the Keras LearningRateScheduler documentation for more details.
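One possible reading of this step is sketched below. Note that Keras numbers epochs from 0 in callbacks, so check that your interpretation of "after epoch 3 and epoch 6" matches what the grader expects:

```python
from keras.callbacks import LearningRateScheduler

def lr_schedule(epoch, lr):
    # One reading of "reduce by half after epoch 3 and epoch 6":
    # halve the incoming rate when epoch 3 or epoch 6 begins.
    # Keras passes 0-based epoch indices to this function.
    if epoch in (3, 6):
        return lr * 0.5
    return lr

lr_scheduler = LearningRateScheduler(lr_schedule, verbose=1)
```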
2.5 Step 5: Implement ReduceLROnPlateau Callback
• Implement ReduceLROnPlateau callback to reduce the learning rate when a metric has stopped improving.
• Use the following parameters:
– monitor='val_loss': Monitor the validation loss.
– factor=0.5: Reduce the learning rate by half.
– patience=2: Number of epochs with no improvement after which the learning rate will be reduced.
– min_lr=1e-6: Lower bound on the learning rate.
– verbose=1: Enable verbosity.
• Refer to the Keras ReduceLROnPlateau documentation for more details.
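Wired up directly from the parameters listed above:

```python
from keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(
    monitor="val_loss",  # watch validation loss
    factor=0.5,          # halve the learning rate on plateau
    patience=2,          # after 2 epochs without improvement
    min_lr=1e-6,         # never reduce below this rate
    verbose=1,           # log each reduction
)
```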
2.6 Step 6: Build the CNN Model
• Build the CNN model using Sequential from keras.models.
• Add the following layers in sequence:
– Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)): Add a 2D convolution layer with 32 filters, a 3x3 kernel, ReLU activation, and same padding.
– BatchNormalization(): Add a batch normalization layer.
– Conv2D(32, (3, 3), activation='relu', padding='same'): Add another 2D convolution layer with the same specifications as above.
– BatchNormalization(): Add another batch normalization layer.
– MaxPooling2D((2, 2)): Add a max pooling layer with a pool size of 2x2.
– Dropout(0.2): Add a dropout layer with a rate of 0.2.
– Conv2D(64, (3, 3), activation='relu', padding='same'): Add a 2D convolution layer with 64 filters, a 3x3 kernel, ReLU activation, and same padding.
– BatchNormalization(): Add a batch normalization layer.
– Conv2D(64, (3, 3), activation='relu', padding='same'): Add another 2D convolution layer with the same specifications as above.
– BatchNormalization(): Add another batch normalization layer.
– MaxPooling2D((2, 2)): Add a max pooling layer with a pool size of 2x2.
– Dropout(0.3): Add a dropout layer with a rate of 0.3.
– Conv2D(128, (3, 3), activation='relu', padding='same'): Add a 2D convolution layer with 128 filters, a 3x3 kernel, ReLU activation, and same padding.
– BatchNormalization(): Add a batch normalization layer.
– Conv2D(128, (3, 3), activation='relu', padding='same'): Add another 2D convolution layer with the same specifications as above.
– BatchNormalization(): Add another batch normalization layer.
– MaxPooling2D((2, 2)): Add a max pooling layer with a pool size of 2x2.
– Dropout(0.4): Add a dropout layer with a rate of 0.4.
– Flatten(): Add a flatten layer to convert the 2D feature maps into a 1D vector.
– Dense(256, activation='relu'): Add a dense layer with 256 units and ReLU activation.
– BatchNormalization(): Add a batch normalization layer.
– Dropout(0.4): Add a dropout layer with a rate of 0.4.
– Dense(10, activation='softmax'): Add a final dense layer with 10 units and softmax activation for classification.
• Refer to the Keras Sequential model documentation for more details.
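Assembled exactly as listed above (newer Keras versions may warn about input_shape on the first layer and prefer an explicit Input layer, but the keyword is still accepted):

```python
from keras.models import Sequential
from keras.layers import (Conv2D, BatchNormalization, MaxPooling2D,
                          Dropout, Flatten, Dense)

model = Sequential([
    # Block 1: 32 filters
    Conv2D(32, (3, 3), activation="relu", padding="same",
           input_shape=(32, 32, 3)),
    BatchNormalization(),
    Conv2D(32, (3, 3), activation="relu", padding="same"),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Dropout(0.2),
    # Block 2: 64 filters
    Conv2D(64, (3, 3), activation="relu", padding="same"),
    BatchNormalization(),
    Conv2D(64, (3, 3), activation="relu", padding="same"),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Dropout(0.3),
    # Block 3: 128 filters
    Conv2D(128, (3, 3), activation="relu", padding="same"),
    BatchNormalization(),
    Conv2D(128, (3, 3), activation="relu", padding="same"),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Dropout(0.4),
    # Classifier head
    Flatten(),
    Dense(256, activation="relu"),
    BatchNormalization(),
    Dropout(0.4),
    Dense(10, activation="softmax"),
])
```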
2.7 Step 7: Implement EarlyStopping Callback
• Implement EarlyStopping callback to stop training when a monitored metric has stopped improving.
• Use the following parameters:
– monitor='val_loss': Monitor the validation loss.
– patience=5: Number of epochs with no improvement after which training will be stopped.
– restore_best_weights=True: Restore the model weights from the epoch with the best value of the monitored quantity.
• Refer to the Keras EarlyStopping documentation for more details.
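Again, directly from the parameters above:

```python
from keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(
    monitor="val_loss",         # watch validation loss
    patience=5,                 # stop after 5 epochs without improvement
    restore_best_weights=True,  # roll back to the best epoch's weights
)
```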
2.8 Step 8: Compile the Model
• Compile the model using the compile method.
• Use the following parameters:
– optimizer='adam': Set the optimizer to Adam.
– loss='categorical_crossentropy': Set the loss function to categorical crossentropy.
– metrics=['accuracy']: Set the metric to accuracy.
• Refer to the Keras model compile documentation for more details.
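For example (a tiny stand-in model is used here so the snippet is self-contained; in the assignment you compile the CNN from Step 6):

```python
from keras.models import Sequential
from keras.layers import Flatten, Dense

# Stand-in model; in the assignment, compile the CNN built in Step 6.
model = Sequential([Flatten(input_shape=(32, 32, 3)),
                    Dense(10, activation="softmax")])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```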
2.9 Step 9: Train the Model
• Train the model using the fit method with the data generator.
• Use the following parameters:
– batch_size=128: Set the batch size to 128.
– epochs=10: Set the number of epochs to 10.
– validation_data=(test_images, test_labels): Pass the test images and labels for validation.
– callbacks=[lr_scheduler, reduce_lr, early_stopping]: Pass the learning rate scheduler, reduce-LR-on-plateau, and early stopping callbacks.
• Refer to the Keras model fit documentation for more details.
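A runnable sketch of the training call is below. Small random arrays and a tiny model stand in for the real data, the Step 6 CNN, and the callbacks so the snippet finishes quickly; note that when feeding a generator, the batch size is set on .flow() rather than on fit():

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import to_categorical

# Tiny stand-ins; in the assignment, use the real CIFAR-10 arrays,
# the CNN from Step 6, and the callbacks from Steps 4, 5, and 7.
train_images = np.random.rand(64, 32, 32, 3).astype("float32")
train_labels = to_categorical(np.random.randint(0, 10, size=(64, 1)), 10)
test_images = np.random.rand(16, 32, 32, 3).astype("float32")
test_labels = to_categorical(np.random.randint(0, 10, size=(16, 1)), 10)

model = Sequential([Flatten(input_shape=(32, 32, 3)),
                    Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

datagen = ImageDataGenerator(horizontal_flip=True)
datagen.fit(train_images)

history = model.fit(
    datagen.flow(train_images, train_labels, batch_size=128),
    epochs=1,  # the assignment uses epochs=10
    validation_data=(test_images, test_labels),
    # callbacks=[lr_scheduler, reduce_lr, early_stopping],  # Steps 4, 5, 7
    verbose=0,
)
```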
2.10 Step 10: Evaluate the Model
• Evaluate the model using the evaluate method on the test dataset.
• Print the test accuracy.
• Refer to the Keras model evaluate documentation for more details.
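For example (again with self-contained stand-ins; in the assignment you evaluate the trained CNN on the real test set):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.utils import to_categorical

# Stand-ins; in the assignment, use the trained model and real test data.
model = Sequential([Flatten(input_shape=(32, 32, 3)),
                    Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
test_images = np.random.rand(16, 32, 32, 3).astype("float32")
test_labels = to_categorical(np.random.randint(0, 10, size=(16, 1)), 10)

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=0)
print(f"Test accuracy: {test_acc:.4f}")
```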
2.11 Step 11: Plot the Training and Validation Metrics
• Plot the training and validation accuracy and loss using matplotlib.
• Create two subplots: one for accuracy and one for loss.
• Use the plt.subplot, plt.plot, plt.legend, and plt.title functions to create the plots.
• Refer to the Matplotlib documentation for more details.
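The two-subplot layout can be sketched as follows. A hard-coded history dict stands in for history.history from model.fit; in a notebook, drop the Agg backend line and use plt.show() instead of savefig:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; omit this in a notebook
import matplotlib.pyplot as plt

# Stand-in for history.history returned by model.fit (illustrative values).
hist = {"accuracy": [0.40, 0.55, 0.62], "val_accuracy": [0.42, 0.53, 0.58],
        "loss": [1.70, 1.20, 1.00], "val_loss": [1.60, 1.30, 1.10]}

plt.figure(figsize=(10, 4))

plt.subplot(1, 2, 1)                          # left panel: accuracy
plt.plot(hist["accuracy"], label="train")
plt.plot(hist["val_accuracy"], label="validation")
plt.title("Accuracy")
plt.legend()

plt.subplot(1, 2, 2)                          # right panel: loss
plt.plot(hist["loss"], label="train")
plt.plot(hist["val_loss"], label="validation")
plt.title("Loss")
plt.legend()

plt.savefig("training_curves.png")            # use plt.show() in a notebook
```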
Submission
Submit your Jupyter notebook on Canvas. Ensure that your code is well-documented and includes explanations of each step. The total points for this assignment are 50. The deadline for submission is August 5th at 11:59 pm.