In this paper, sample-aware policy entropy regularization is proposed to enhance conventional policy entropy regularization for better exploration. Exploiting the sample distribution obtainable from the replay buffer, the proposed sample-aware entropy regularization maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer, yielding sample-efficient exploration. A practical algorithm named diversity actor-critic (DAC) is developed by applying policy iteration to the objective function with the proposed sample-aware entropy regularization. Numerical results show that DAC significantly outperforms recent existing reinforcement learning algorithms.
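The core quantity described above, the entropy of a weighted mixture of the policy's action distribution and the replay buffer's sample action distribution, can be sketched for a discrete action space as follows. This is an illustrative sketch, not the paper's implementation: the function name `mixture_entropy` and the mixture weight `alpha` are our own labels for the idea.

```python
import numpy as np

def mixture_entropy(pi, q, alpha=0.5):
    """Entropy of the weighted mixture alpha*pi + (1-alpha)*q.

    pi    : policy action distribution (nonnegative, sums to 1)
    q     : sample action distribution estimated from the replay buffer
    alpha : mixture weight (illustrative hyperparameter, not from the paper)
    """
    mix = alpha * np.asarray(pi, dtype=float) + (1.0 - alpha) * np.asarray(q, dtype=float)
    mix = mix[mix > 0]  # drop zero-probability actions to avoid log(0)
    return -np.sum(mix * np.log(mix))

# Intuition: if the policy concentrates on actions that are already
# frequent in the buffer, the mixture entropy is low, so maximizing it
# steers the policy toward under-sampled actions.
pi = np.array([0.7, 0.2, 0.1])  # current policy
q  = np.array([0.6, 0.3, 0.1])  # buffer sample distribution
print(mixture_entropy(pi, q))
```

In the actual algorithm this regularizer is added to the return objective and optimized by policy iteration; the sketch above only shows the regularization term itself.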
![Seungyul Han and Youngchul Sung, "Diversity actor-critic: Sample-aware entropy regularization for sample-efficient exploration," to be presented at International Conference on Machine Learning (ICML) 2021, Jul. 2021](/wp-content/uploads/drupal/성영철교수님1.png)
![Figure 2](/wp-content/uploads/drupal/성영철교수님2.png)
![Figure 3](/wp-content/uploads/drupal/성영철교수님3.png)