June 29, 2022, 1:57 p.m. | /u/gabegabe6

Machine Learning www.reddit.com

I saw a paper called *EvilModel* on how to hide malicious code in a neural network, exploiting the fact that a model has thousands or millions of parameters we can alter.

The basic technique modifies `float32` parameter values (it can be adapted to `float16`), overwriting the fraction bits, or part of the fraction, with payload data.

- [Post/Tutorial on the process](https://www.gaborvecsei.com/Neural-Network-Steganography/)
- [GitHub repo for the project](https://github.com/gaborvecsei/Neural-Network-Steganography)
- [EvilModel paper](https://arxiv.org/abs/2107.08590)
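The fraction-bit trick above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: it assumes we overwrite the lowest 8 fraction bits of each `float32` with one payload byte, which perturbs each value by less than about 1e-5 in relative terms.

```python
import struct

def embed(params, payload):
    """Hide one payload byte in the lowest 8 fraction bits of each float32.

    params: list of floats (the model weights, flattened)
    payload: bytes to hide (must be <= len(params))
    """
    out = []
    for value, byte in zip(params, payload):
        bits = struct.unpack("<I", struct.pack("<f", value))[0]
        bits = (bits & ~0xFF) | byte  # replace the 8 least significant fraction bits
        out.append(struct.unpack("<f", struct.pack("<I", bits))[0])
    # remaining parameters are left untouched
    return out + list(params[len(payload):])

def extract(params, n):
    """Recover n hidden bytes from the lowest 8 fraction bits."""
    data = bytearray()
    for value in params[:n]:
        bits = struct.unpack("<I", struct.pack("<f", value))[0]
        data.append(bits & 0xFF)
    return bytes(data)
```

A real embedding (as in the paper) packs more bytes per parameter and spreads the payload across selected layers, but the round-trip idea is the same: reinterpret the float as a 32-bit integer, mask, write, and reinterpret back.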

As I saw in my experiments, we could easily …

