Research on Interpretable Machine Learning Portfolio Based on Multi-factor Clustering
Keywords:
Interpretability, Multi-factor Model, Stock Clustering, Random Forest, Portfolio
Abstract
The 'black box' nature and limited interpretability of machine learning and deep learning models present significant obstacles to their use in portfolio management. In addition, standard machine learning interpretability metrics often fail to explain model features effectively in portfolio decision contexts. This research addresses these challenges by introducing a methodology for generating easily interpretable portfolios. The approach applies Random Forest feature importance analysis within a multi-factor model, followed by clustering of stocks on the selected factors. Portfolios are then generated with the Mean-CVaR model, and the effectiveness of the proposed explainable portfolios is evaluated through comparative analysis against two machine learning interpretability tools: SHAP and permutation importance.
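The abstract outlines a three-step pipeline: Random Forest feature importance over a multi-factor model, clustering of stocks on the retained factors, and Mean-CVaR portfolio construction. The sketch below is an illustrative reading of that pipeline, not the authors' code; the factor names, synthetic data, cluster count, and the Rockafellar-Uryasev linear-programming formulation of Mean-CVaR are all assumptions made for demonstration.

```python
"""Hypothetical sketch of the abstract's pipeline: RF factor importance ->
stock clustering -> Mean-CVaR portfolio. Synthetic data and parameters are
placeholders, not values from the paper."""
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.cluster import KMeans
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Synthetic stock-factor panel (stand-in for a real multi-factor dataset)
n_stocks, n_days = 60, 250
factors = pd.DataFrame(
    rng.normal(size=(n_stocks, 5)),
    columns=["value", "size", "momentum", "volatility", "quality"],  # hypothetical factors
)
returns = pd.DataFrame(rng.normal(0.0005, 0.01, size=(n_days, n_stocks)))
mean_ret = returns.mean().to_numpy()

# Step 1: Random Forest feature importance over the factor exposures
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(factors, mean_ret)
importance = pd.Series(rf.feature_importances_, index=factors.columns).sort_values(ascending=False)
top_factors = importance.index[:3]  # keep the most informative factors

# Step 2: Cluster stocks on the retained factors; pick one representative per cluster
k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(factors[top_factors])
reps = [int(np.argmax(np.where(labels == c, mean_ret, -np.inf))) for c in range(k)]

# Step 3: Mean-CVaR portfolio via the Rockafellar-Uryasev linear program
def mean_cvar_weights(scenarios, target, beta=0.95):
    S, n = scenarios.shape
    # decision variables: [w (n portfolio weights), alpha (VaR level), u (S slack terms)]
    c = np.r_[np.zeros(n), 1.0, np.full(S, 1.0 / ((1.0 - beta) * S))]
    # u_s >= -r_s.w - alpha  <=>  -r_s.w - alpha - u_s <= 0
    A_ub = np.hstack([-scenarios, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # expected portfolio return must reach the target
    A_ub = np.vstack([A_ub, np.r_[-scenarios.mean(axis=0), 0.0, np.zeros(S)]])
    b_ub = np.r_[b_ub, -target]
    A_eq = np.r_[np.ones(n), 0.0, np.zeros(S)].reshape(1, -1)  # fully invested, long-only
    bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n]

scenarios = returns.iloc[:, reps].to_numpy()
weights = mean_cvar_weights(scenarios, target=scenarios.mean(axis=0).mean())
print(importance.round(3))
print(dict(zip(reps, weights.round(3))))
```

In this reading, interpretability comes from the fact that each portfolio position maps to a factor-defined cluster and the retained factors are ranked by an explicit importance score, which is then compared against SHAP and permutation importance in the paper's evaluation.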
Published: 2024-01-21
Section: Research Articles
How to Cite
Shi, J., & Zhang, W. (2024). Research on Interpretable Machine Learning Portfolio Based on Multi-factor Clustering. Journal of Advances in Information Science and Technology, 2(1), 1-10. https://yvsou.com/journal/index.php/jaist/article/view/11