Jan. 31, 2024, 3:46 p.m. | Mikihiro Kasahara Taiki Oka Vincent Taschereau-Dumouchel Mitsuo Kawato Hiroki Takakura Aurelio Cortese

cs.LG updates on arXiv.org

While generative AI is now widespread and useful in society, it carries potential risks of misuse, e.g., unconsciously influencing cognitive processes or decision-making. Although this poses a security problem in the cognitive domain, there has been no research on the neural and computational mechanisms that could counteract the impact of malicious generative AI on humans. We propose DecNefGAN, a novel framework that combines a generative adversarial system and a neural reinforcement model. More specifically, DecNefGAN bridges human and generative AI in a closed-loop …
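The excerpt only describes the framework at a high level, so the following is a minimal toy sketch of what a "generative adversarial system plus neural reinforcement model" closed loop could look like; the targets, the decoder stand-in, the reward shaping, and the update rule are all illustrative assumptions and not the authors' DecNefGAN implementation:

```python
# Toy closed loop: a generative AI tries to push a decoded mental state
# toward its own target, while a neurofeedback-style reinforcement signal
# rewards the participant for staying near their target state.
# Everything here is a stand-in (random noise instead of fMRI decoding).
import numpy as np

rng = np.random.default_rng(0)

# Illustrative targets (assumptions, not from the paper).
PARTICIPANT_TARGET = np.array([1.0, 0.0])
ADVERSARIAL_TARGET = np.array([-1.0, 0.0])

def generate_stimulus(gen_state):
    """Generative AI proposes a stimulus derived from its internal state."""
    return gen_state + 0.1 * rng.normal(size=gen_state.shape)

def decode_neural_state(stimulus):
    """Stand-in for decoded neurofeedback: a noisy readout of the induced state."""
    return np.tanh(stimulus) + 0.05 * rng.normal(size=stimulus.shape)

def neurofeedback_reward(decoded):
    """Reinforcement signal to the participant: higher when the decoded state
    stays near the participant's target despite the adversarial stimulus."""
    return float(-np.linalg.norm(decoded - PARTICIPANT_TARGET))

gen_state = np.zeros(2)
for trial in range(5):
    stimulus = generate_stimulus(gen_state)
    decoded = decode_neural_state(stimulus)
    reward = neurofeedback_reward(decoded)
    # Adversarial update: nudge the generator's state toward whatever would
    # pull the decoded state closer to its own (opposing) target.
    gen_state += 0.2 * (ADVERSARIAL_TARGET - decoded)
    print(f"trial {trial}: participant reward = {reward:.3f}")
```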

cs.AI cs.CR cs.HC cs.LG
