Oct. 30, 2023, 12:52 p.m. | /u/LimboJimbodingo


The PyTorch and TensorFlow implementations look identical, yet they report different total parameter counts.

PyTorch Implementation

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
import torch.nn.init as init

def init_weights(m):
    if isinstance(m, nn.Conv1d):
        nn.init.xavier_uniform_(m.weight.data, gain=nn.init.calculate_gain('relu'))
        # xavier_uniform_ needs a tensor with at least 2 dims; calling it on a
        # 1-D bias raises a ValueError, so zero the bias instead
        nn.init.zeros_(m.bias)


class EncoderBlock(nn.Module):
    def __init__(self, input, n_filters, dropout_prob=0.3, max_pooling=True, padding="same"):
        super(EncoderBlock, self).__init__()
        self.dropout_prob = dropout_prob
        self.conv1 = nn.Conv1d(input, n_filters, kernel_size=9, padding=padding)
        self.conv2 = nn.Conv1d(n_filters, n_filters, kernel_size=9, padding=padding)
        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        self.dropout = nn.Dropout(dropout_prob)
        self.max_pooling = max_pooling
        self.max_pool = nn.MaxPool1d(2)
        self.batch_norm = nn.BatchNorm1d(n_filters, affine=True)
        # the post is truncated at this point; nonlinearity='relu' is an assumed completion
        init.kaiming_normal_(self.conv1.weight, mode='fan_out', nonlinearity='relu')
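
One common source of such a mismatch: PyTorch's model.parameters() yields only trainable tensors, and BatchNorm's running mean/variance live in buffers, while Keras's model.summary() counts the equivalent moving_mean/moving_variance as non-trainable params. So the "total params" figures can differ by 2 × n_filters per BatchNorm layer even when the architectures match. Below is a minimal sketch for comparing the counts, assuming the EncoderBlock above; the 1 input channel and 16 filters are hypothetical values chosen just for illustration:

block = EncoderBlock(1, 16)   # hypothetical channel/filter sizes
block.apply(init_weights)     # applying the Xavier hook above (note: this re-initializes conv1 too)

# per-tensor breakdown of the trainable parameters
for name, p in block.named_parameters():
    print(f"{name:25s} {tuple(p.shape)}  {p.numel()}")

# trainable parameters: what sum(p.numel() for p in model.parameters()) counts
trainable = sum(p.numel() for p in block.parameters())

# buffers: BatchNorm's running_mean, running_var, and num_batches_tracked;
# Keras reports the first two under "Non-trainable params" in model.summary()
running_stats = sum(b.numel() for b in block.buffers())

print("trainable:", trainable)
print("buffers (running stats):", running_stats)

If the Keras total includes the non-trainable running statistics and the PyTorch count does not, that alone accounts for a per-BatchNorm difference without any real architectural discrepancy.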

