
Is it possible to use transfer learning to create a CNN model that classifies Alzheimer's MRI images (4 classes) using a pre-trained model built for pneumonia X-rays (2 classes)?

Hello, I have been trying to create two TensorFlow models to experiment with transfer learning. I have trained a CNN model on lung X-ray images for pneumonia (2 classes) using the Kaggle chest X-ray dataset.

Here is my code:

import tensorflow as tf
import numpy as np
from tensorflow import keras
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
import matplotlib.pyplot as plt

gen = ImageDataGenerator(rescale=1./255)

train_data = gen.flow_from_directory("/Users/saibalaji/Downloads/chest_xray/train", target_size=(500, 500), batch_size=32, class_mode='binary')
test_data = gen.flow_from_directory("/Users/saibalaji/Downloads/chest_xray/test", target_size=(500, 500), batch_size=32, class_mode='binary')

model = keras.Sequential()

# Convolutional layer and maxpool layer 1
model.add(keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(500, 500, 3)))
model.add(keras.layers.MaxPool2D(2, 2))

# Convolutional layer and maxpool layer 2
model.add(keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(keras.layers.MaxPool2D(2, 2))

# Convolutional layer and maxpool layer 3
model.add(keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(keras.layers.MaxPool2D(2, 2))

# Convolutional layer and maxpool layer 4
model.add(keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(keras.layers.MaxPool2D(2, 2))

# Flatten the resulting feature maps to a 1D array
model.add(keras.layers.Flatten())

# Hidden layer with 512 neurons and ReLU activation
model.add(keras.layers.Dense(512, activation='relu'))

# Output layer with a single neuron for the two classes (normal vs. pneumonia);
# sigmoid activation keeps the output between 0 and 1
model.add(keras.layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

hist = model.fit(train_data,
                 steps_per_epoch=163,
                 epochs=4,
                 validation_data=test_data)

I have saved the model in .h5 format.
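
(The save step itself is just the standard Keras call, writing the pn.h5 file that the second notebook loads:)

# Save the trained pneumonia model to disk in HDF5 format
model.save('pn.h5')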

Then I created a new notebook, loaded the Alzheimer's disease dataset from Kaggle, and loaded my saved pneumonia model. I copied its layers to a new model, except the last layer, then froze all the layers in the new model as non-trainable. I then added an output dense layer with 4 neurons for the 4 classes and trained only that last layer for 5 epochs. The problem is that the validation accuracy stays constant at 35%. How can I improve it?

Here is my code for the Alzheimer's model:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
import matplotlib.pyplot as plt
import numpy as np

gen = ImageDataGenerator(rescale=1./255)

traindata = gen.flow_from_directory('/Users/saibalaji/Documents/TensorFlowProjects/ad/train', target_size=(500, 500), batch_size=32)
testdata = gen.flow_from_directory('/Users/saibalaji/Documents/TensorFlowProjects/ad/test', target_size=(500, 500), batch_size=32)

# Load the saved pneumonia model
model = keras.models.load_model('pn.h5')

nmodel = keras.models.Sequential()

# Add all layers except the last one
for layer in model.layers[0:-1]:
    nmodel.add(layer)

# Freeze the copied layers
for layer in nmodel.layers:
    layer.trainable = False

# New output layer with 4 neurons for the 4 Alzheimer's classes
nmodel.add(keras.layers.Dense(units=4, name='dense_last'))

nmodel.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.002),
               loss='categorical_crossentropy',
               metrics=['accuracy'])

hist = nmodel.fit(x=traindata, validation_data=testdata, epochs=5, steps_per_epoch=160)
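
(A quick way to confirm the freeze worked is to print the model summary after compiling; Keras reports trainable vs. non-trainable parameter counts, so only the new 4-unit head should contribute to the trainable total:)

# Sanity check: only the weights of the new dense_last layer
# should appear under "Trainable params" in the summary
nmodel.summary()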

submitted by /u/kudoshinichi-8211
