Aug. 2, 2022, 2:11 a.m. | Hou Shengren, Edgar Mauricio Salazar, Pedro P. Vergara, Peter Palensky

cs.LG updates on arXiv.org arxiv.org

Thanks to their data-driven and model-free features, Deep Reinforcement Learning (DRL) algorithms have the potential to cope with the increasing level of uncertainty introduced by renewable-based generation. To handle the energy system's operational cost and technical constraints (e.g., the generation-demand power balance) simultaneously, DRL algorithms must consider a trade-off when designing the reward function. This trade-off introduces extra hyperparameters that affect the DRL algorithms' performance and their ability to provide feasible solutions. In this paper, a performance …

algorithms arxiv comparison deep rl energy performance rl scheduling systems
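
The reward trade-off described in the abstract can be illustrated with a minimal, hypothetical sketch (not taken from the paper): the operational cost is combined with a constraint-violation penalty whose weight is exactly the kind of extra hyperparameter the abstract refers to. The function name and the numbers below are assumptions for illustration only.

# Illustrative sketch only (not from the paper): a penalized reward of the
# form r_t = -(cost_t + w * violation_t), where the penalty weight `w` is
# the kind of extra hyperparameter the abstract mentions.

def scheduling_reward(operational_cost: float,
                      generation: float,
                      demand: float,
                      penalty_weight: float = 100.0) -> float:
    """Reward trading off operational cost against the generation-demand balance."""
    imbalance = abs(generation - demand)  # magnitude of the constraint violation
    return -(operational_cost + penalty_weight * imbalance)

# The same dispatch decision scores very differently depending on the penalty
# weight, which is the tuning burden the abstract highlights.
for w in (1.0, 10.0, 100.0):
    print(w, scheduling_reward(operational_cost=50.0,
                               generation=95.0, demand=100.0,
                               penalty_weight=w))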
