June 6, 2023, 6:03 a.m. | /u/ithacasnowman

Machine Learning www.reddit.com

Background: Math/CS. Took a course on ML in 2005. Re-reading [Mitchell](https://www.cin.ufpe.br/~cavmj/Machine%20-%20Learning%20-%20Tom%20Mitchell.pdf).

I'm confused about whether a network of linear activation perceptrons can model non-linear functions.

According to the book I'm reading (linked above), *every* Boolean function can be represented by some network of interconnected units based on the [perceptron](https://en.wikipedia.org/wiki/Perceptron#Definition). This means that XOR, which is non-linear, can be represented by such a network. I found an [example](https://medium.com/@stanleydukor/neural-representation-of-and-or-not-xor-and-xnor-logic-gates-perceptron-algorithm-b0275375fea1) on the web that models XOR using a network of perceptrons (1/0 activations after a …
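For concreteness, here is a minimal sketch of that kind of construction: XOR built from three step-activation perceptrons as AND(OR, NAND). The specific weights and biases are my own illustrative choices, not taken from the linked article:

```python
def perceptron(weights, bias, inputs):
    """Classic perceptron unit: weighted sum followed by a hard threshold.

    The step (threshold) activation outputs 1 if the weighted sum plus
    bias is positive, else 0 -- this thresholding is itself non-linear.
    """
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor(x1, x2):
    # Hidden layer: an OR unit and a NAND unit over the raw inputs.
    or_out = perceptron([1, 1], -0.5, [x1, x2])     # OR gate
    nand_out = perceptron([-1, -1], 1.5, [x1, x2])  # NAND gate
    # Output layer: AND of the two hidden activations gives XOR.
    return perceptron([1, 1], -1.5, [or_out, nand_out])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Note that this works because each unit applies a hard threshold after its weighted sum; if the units passed the weighted sum through unchanged (a truly linear activation), stacking layers would still compose to a single linear map.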
