I have a memory leak with TensorFlow 1.14. I referred to various GitHub issues and Memory leak with TensorFlow to address my issue, and I followed the advice of the answer, which seemed to solve the problem for others. However, it does not work in my case. I even ported the code to TensorFlow 2.1 and 2.3, but still could not solve the problem.
Whenever I load the model, memory usage grows. I tried clearing the session after the model is loaded and also used the garbage-collection API, but the leak still persists.
To reproduce the memory leak, I have created a simple example. I use the function below to check the memory used by the Python process.
import os
import psutil

def memory_usage_func():
    # Report the resident set size (RSS) of the current process in MiB
    process = psutil.Process(os.getpid())
    mem_used = process.memory_info().rss >> 20
    print("Memory used (MiB):", mem_used)
    return mem_used
Below is the loop that loads the model and checks memory usage:
from tensorflow.keras.models import load_model

for i in range(100):
    # Load the model, drop the reference, then report memory usage
    model = load_model('./model_example.h5', compile=False)
    del model
    memory_usage_func()
With the above code, the memory leak persists: memory usage keeps growing on every iteration. I also tried running prediction. For that, I created a session, loaded the model, and ran predict(). The same memory leak appears there. I used tf.keras.backend.clear_session() and gc.collect() after the model is loaded, but they do not clear the session or free the memory.
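For completeness, here is a minimal sketch of that prediction attempt (written in the TF 2.x style without an explicit session; the dummy input shape is only a placeholder assumption, and the model path and helper are the same as above):

import gc
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

for i in range(100):
    model = load_model('./model_example.h5', compile=False)
    # Placeholder input; the real model expects a different input shape
    dummy_input = np.random.rand(1, 224, 224, 3).astype(np.float32)
    model.predict(dummy_input)
    del model
    # Try to release graph/session state and force garbage collection
    tf.keras.backend.clear_session()
    gc.collect()
    memory_usage_func()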