March 26, 2024, 4:44 a.m. | YanMing Hu, TianChi Liao, JiaLong Chen, Jing Bian, ZiBin Zheng, Chuan Chen

cs.LG updates on arXiv.org

arXiv:2306.04212v2 Announce Type: replace
Abstract: Graph Neural Networks (GNNs) have been applied in many scenarios owing to their superior performance on graph learning. However, fairness is often ignored when designing GNNs. As a consequence, biased information in the training data can easily mislead vanilla GNNs into producing results biased toward particular demographic groups (divided by sensitive attributes, such as race and age). There have been efforts to address this fairness issue; however, existing fair techniques generally divide the demographic groups by raw …
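The excerpt cuts off before the paper's method, but the fairness notion it invokes, biased predictions toward demographic groups split by a sensitive attribute, is commonly quantified with statistical parity. Below is a minimal, hypothetical sketch (not the paper's technique) that measures the statistical parity difference of a model's binary node predictions; the `preds` and `sensitive` tensors are assumed placeholders.

```python
import torch

def statistical_parity_difference(preds: torch.Tensor,
                                  sensitive: torch.Tensor) -> float:
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)| for binary predictions.

    Values near 0 mean the positive-prediction rate is similar across
    the two demographic groups; larger values indicate group bias.
    """
    rate_g0 = preds[sensitive == 0].float().mean()
    rate_g1 = preds[sensitive == 1].float().mean()
    return (rate_g0 - rate_g1).abs().item()

# Toy, made-up node-level predictions and a binary sensitive
# attribute (e.g., an age bucket); the numbers are illustrative only.
preds = torch.tensor([1, 0, 1, 1, 0, 1, 0, 0])
sensitive = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(preds, sensitive))  # 0.5 -> strongly biased
```

In practice `preds` would come from a trained GNN's node classifications and `sensitive` from node metadata; a gap like the 0.5 above is the kind of group-level disparity that fairness-aware GNN designs aim to reduce.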

