Artificial Intelligence (AI) technology has the potential to deliver opportunities for investors and investment firms. For investors, AI may expand access to higher-quality products and services, bring greater participation in markets, lower costs, improve the user experience, enhance decision making, and ultimately provide better outcomes. For firms, AI may bring greater efficiency and productivity, better resource allocation and customer service, and enhanced risk management and regulatory compliance.
However, the use of AI also carries potential risks, including “AI washing,” unsound retail investor-facing products and services, “black box” risk, model and data risk, lack of clear disclosures of AI-associated risks, bias and conflicts of interest, privacy concerns, inadequate due diligence and monitoring of third-party service providers, systemic risk, and enabling bad actors’ malicious practices. These risks will escalate if firms adopt a “move fast and break things” approach to developing and deploying AI-based products and services. To the extent AI is deployed at massive speed and scale, potential harms could affect a substantial number of people very quickly and ripple throughout the economy.
If complexity, opacity, unreliability, bias, conflicts of interest, or data insecurity infect AI applications, investors could receive suboptimal products and services, harming their financial security and eroding their trust and confidence in AI-based tools and investment markets more broadly. However, if firms and regulators take proactive approaches to addressing these risks, AI’s potential could be fulfilled.