Jan. 19, 2022, 11:57 a.m. | /u/Embarrassed-Raisin-1

Deep Learning www.reddit.com

I need help understanding what exactly the following piece of code does (it's from PointNet). I don't have TensorFlow installed, so I can't run it to test.

weights = tf.get_variable('weights', [256, 9], initializer=tf.constant_initializer(0.0), dtype=tf.float32)
biases = tf.get_variable('biases', [9], initializer=tf.constant_initializer(0.0), dtype=tf.float32)
biases += tf.constant([1, 0, 0, 0, 1, 0, 0, 0, 1], dtype=tf.float32)
transform = tf.matmul(net, weights)
transform = tf.nn.bias_add(transform, biases)

From my understanding, after the first forward pass the transform will be equal to the biases, i.e. [1,0,0,0,1,0,0,0,1] (the flattened 3×3 identity matrix), because the weights are initially zero. During the training, the weights and …
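That reading is easy to check numerically. Below is a minimal NumPy stand-in for the TF ops above (shapes and variable names follow the snippet; the `net` values are made-up placeholder features): with zero-initialized weights, the matmul contributes nothing, so every row of the output is just the bias vector, i.e. the flattened identity.

```python
import numpy as np

batch_size, feat_dim = 4, 256

# Hypothetical input features standing in for `net` in the snippet.
net = np.random.randn(batch_size, feat_dim).astype(np.float32)

# Mirror the TF initializers: weights and biases start at zero,
# then the flattened 3x3 identity is added to the biases.
weights = np.zeros((feat_dim, 9), dtype=np.float32)
biases = np.zeros(9, dtype=np.float32)
biases += np.array([1, 0, 0, 0, 1, 0, 0, 0, 1], dtype=np.float32)

# tf.matmul + tf.nn.bias_add: with zero weights, the matmul term
# vanishes and each row of `transform` equals `biases` exactly.
transform = net @ weights + biases
transform = transform.reshape(batch_size, 3, 3)

print(transform[0])
```

Once training updates `weights` and `biases`, the output drifts away from the identity; starting at the identity just means the learned 3×3 transform initially leaves the input points unchanged.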

