Hey everyone!
So I've been fighting with this error for quite a few days now and I'm getting desperate. I'm currently working on a CNN with 70 images, and the size of X_train is 774164.
But whenever I try to run this -> history = cnn.fit(X_train, y_train, batch_size=batch_size, epochs=epoch, validation_split=0.2) it exhausts my GPU memory and I get the following error:
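For context, from what I've read the _EagerConst op in the error means the whole X_train array is being copied to the GPU as one constant. The usual suggestion is to feed the data in batches (e.g. via tf.data.Dataset) instead of passing the full array. Here's a minimal plain-NumPy sketch of the batching idea (the array shapes here are made-up placeholders, not my real data):

```python
import numpy as np

# Placeholder arrays standing in for the real data; shapes are assumptions.
X_train = np.zeros((70, 256, 256, 3), dtype=np.float32)
y_train = np.zeros((70,), dtype=np.int64)

batch_size = 8

def batches(X, y, batch_size):
    # Yield small (X, y) chunks instead of one giant array -- the same
    # idea that tf.data.Dataset.from_tensor_slices(...).batch(batch_size)
    # implements on the TensorFlow side.
    for i in range(0, len(X), batch_size):
        yield X[i:i + batch_size], y[i:i + batch_size]

n_batches = sum(1 for _ in batches(X_train, y_train, batch_size))
print(n_batches)  # 70 samples in batches of 8 -> 9 batches
```

With tf.data, cnn.fit(dataset) would then see one batch at a time rather than trying to materialize everything on the GPU at once.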
W tensorflow/core/common_runtime/bfc_allocator.cc:457] Allocator (GPU_0_bfc) ran out of memory trying to allocate 5.04GiB (rounded to 5417907712)requested by op _EagerConst
If the cause is memory fragmentation maybe the environment variable 'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation.
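I did try the variable the warning suggests. As far as I understand it has to be set before TensorFlow initializes the GPU, so at the very top of the script (this is just how I set it, in case it matters):

```python
import os

# Choose the async CUDA allocator the warning mentions. This must be set
# before `import tensorflow`, since the allocator is picked when TensorFlow
# initializes the GPU.
os.environ["TF_GPU_ALLOCATOR"] = "cuda_malloc_async"

# import tensorflow as tf  # imported only after the variable is set
```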
Thank you in advance!
submitted by /u/BarriJulen