Dec. 31, 2023, 8:19 a.m. | /u/shakibahm

Machine Learning www.reddit.com

I have been comparing Colab's runtimes. I found that, for a vanilla Keras CNN, the TPU consistently lags behind the A100, V100, and T4 GPUs. Increasing the batch size didn't really help. Is there a specific configuration I should be investigating?

[Code](https://github.com/magurmach/TensorFlowLearning/blob/main/benchmarking/colab/CNN_on_fmnist_datase_Keras_Performance_evaluation.ipynb). [Blog post with details](https://medium.com/@008.shakib/comparing-keras-cnn-performance-in-colab-pro-runtimes-e7402499b61d).
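For reference, a common source of TPU slowdowns with small Keras models is per-step host-to-device dispatch overhead rather than the batch size itself. The sketch below is not the author's notebook; it is a minimal, hedged example of the usual Colab TPU setup (`TPUClusterResolver` + `TPUStrategy`), with `steps_per_execution` in `compile()` to amortize dispatch cost, a global batch scaled by replica count, and static batch shapes for XLA. Synthetic Fashion-MNIST-shaped data stands in for the real dataset so the snippet is self-contained; it falls back to the default strategy when no TPU is attached.

```python
import numpy as np
import tensorflow as tf

# Connect to the Colab TPU if one is attached; otherwise fall back to the
# default (GPU/CPU) strategy so the same script runs on any runtime.
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except Exception:  # no TPU available in this runtime
    strategy = tf.distribute.get_strategy()

# Scale the global batch with the replica count (a Colab TPU has 8 cores),
# so each core still sees a reasonably large per-replica batch.
per_replica_batch = 128
global_batch = per_replica_batch * strategy.num_replicas_in_sync

# Synthetic data in the Fashion-MNIST shape (28x28x1, 10 classes).
x = np.random.rand(2048, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=(2048,))

ds = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .batch(global_batch, drop_remainder=True)  # static shapes suit XLA/TPU
    .prefetch(tf.data.AUTOTUNE)
)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        # Run many training steps per host->device dispatch; for small
        # models this often matters far more on TPU than batch size does.
        steps_per_execution=16,
    )

history = model.fit(ds, epochs=1, verbose=0)
```

If the TPU still trails the GPUs after this, the model may simply be too small to saturate the TPU's matrix units, in which case per-step overhead dominates and a GPU runtime is the better fit.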

