April 25, 2024, 7:43 p.m. | Elijah Pelofske, Vincent Urias, Lorie M. Liebrock

cs.LG updates on arXiv.org

arXiv:2404.15681v1 Announce Type: cross
Abstract: Generative pre-trained transformers (GPTs) are a type of large language model that is unusually adept at producing novel, coherent natural language. In this study, the ability of GPT models to generate novel and correct versions, and notably very insecure versions, of implementations of the cryptographic hash function SHA-1 is examined. The GPT models Llama-2-70b-chat-h, Mistral-7B-Instruct-v0.1, and zephyr-7b-alpha are used. The GPT models are prompted to rewrite each function using a modified version …
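
Only the abstract is reproduced above, so the following is a minimal sketch, not the authors' evaluation harness, of how a model-generated SHA-1 rewrite could be checked for correctness against Python's hashlib reference. The function name candidate_sha1 and the test vectors are hypothetical placeholders standing in for the GPT-produced code.

```python
import hashlib

def candidate_sha1(data: bytes) -> str:
    # Hypothetical stand-in for a GPT-rewritten SHA-1 implementation.
    # In the study, the body would be the model's generated code, not this call.
    return hashlib.sha1(data).hexdigest()

def matches_reference(impl, messages) -> bool:
    """Return True if impl agrees with hashlib's SHA-1 on every test message."""
    return all(impl(m) == hashlib.sha1(m).hexdigest() for m in messages)

# Simple test inputs; a correct rewrite must match the reference on all of them.
vectors = [b"", b"abc", b"The quick brown fox jumps over the lazy dog"]
print(matches_reference(candidate_sha1, vectors))
```

Agreement with the reference digest is only one axis; the abstract also distinguishes variants that are insecure, so a check like this would serve only as a first correctness filter.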

