Over the past year I have become increasingly interested in artificial intelligence (AI), in part because I try to stay abreast of fintech and in part due to my curiosity about how the general public welcomes or shuns such a radical technology shift. While I have not yet adopted a Google Home device or Amazon's Alexa (these would terrify my dogs... it's not for lack of want), I think it is safe to say that I spend a fair amount of time researching and reading about AI and where it is headed. I will save my hypothesis, and my excitement, about AI serving as the genesis of a new renaissance for another post.
What I am most intrigued by, in the near term, is the potential for AI to be both transformative and disruptive in the financial services industry. Some of this might be old hat to people who have been following AI over the past four or five years. Financial institutions have started pilot projects using AI for lending decisions in recent years, with much publicity. Since AI is driven by data and patterns, institutions have been using it to review an applicant's spending and borrowing history and predict creditworthiness more accurately. Yet while the science and programming behind this type of machine learning are sophisticated and built in accordance with established credit and lending models, actual trust in, and implementation of, such machine learning is still quite immature. Why is that? Why do we take great pains to build a tool, architect it to behave more intelligently and consistently than we would ourselves, and then not trust it?
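To make that pattern-based scoring concrete, here is a deliberately toy sketch in Python. The features, weights, and approval cutoff are all invented for illustration; a real lending model would be trained on historical data and validated against fair-lending requirements, not hand-written like this:

```python
import math

# Hypothetical feature weights: positive values lower predicted default risk,
# negative values raise it. These numbers are illustrative only.
WEIGHTS = {
    "on_time_payment_rate": 3.0,   # share of payments made on time (0 to 1)
    "utilization": -2.5,           # share of available credit in use (0 to 1)
    "overdrafts_last_year": -0.4,  # count of overdraft events
}
BIAS = -0.5

def default_probability(applicant: dict) -> float:
    """Estimate probability of default with a logistic model over the features."""
    score = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(score))  # higher score -> lower default risk

def decide(applicant: dict, cutoff: float = 0.2) -> str:
    """Approve low-risk applicants; route everything else to a human."""
    if default_probability(applicant) < cutoff:
        return "approve"
    return "refer to human review"
```

Note that even this sketch routes borderline cases to a person rather than declining them outright, which is exactly the human-oversight question I come back to below.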
One of the major factors bankers need to consider with AI is its lack of context in analysis. Lending and cash-flow analysis, however, are areas where missing context would rarely trip up an AI's decisions. In theory, this would be a wonderful way to remove the human error involved in lending decisions, and the machine learning would keep getting smarter with the increased access to information that cloud computing provides. (I am, of course, assuming the use of a cloud computing platform that can scale storage capacity to match the appetite of machine learning.)
Another area of banking that may not require context is compliance and compliance checklists. Could the adoption of AI be a perfect fit for compliance adherence? By executing scripts that automate manual compliance tasks, organizations could aim for a new level of efficiency while lowering their error rate.
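A minimal sketch of what that scripted checklist might look like, again in Python. The rule names and thresholds below are placeholders I made up for illustration, not actual regulatory requirements:

```python
from typing import Callable

# Hypothetical checklist: each entry pairs a rule name with a predicate that
# returns True when an account record passes the check.
RULES: list[tuple[str, Callable[[dict], bool]]] = [
    ("customer identity verified",
     lambda acct: acct.get("kyc_verified", False)),
    ("large cash deposits reported",
     lambda acct: all(d <= 10_000 or acct.get("report_filed", False)
                      for d in acct.get("cash_deposits", []))),
    ("dormant account reviewed",
     lambda acct: acct.get("days_inactive", 0) < 365
                  or acct.get("dormancy_reviewed", False)),
]

def run_checklist(account: dict) -> list[str]:
    """Return the name of every rule the account fails, for human follow-up."""
    return [name for name, check in RULES if not check(account)]
```

The point of the design is that the script never makes a judgment call; it only surfaces a consistent, exhaustive list of failures, so the human reviewer's time goes to the exceptions instead of the rote checking.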
But let’s revisit that trust thing I brought up previously. Since we are talking about the business of people’s money, one thing we can count on is that there will likely never be a lack of human oversight. Your customers want to know that you’ve taken the context of their particular situation into consideration and that you’ve reviewed any analysis a computer has done. Here is where my excitement comes in. This is an opportunity for employees in financial services to upskill on AI technology, and for the industry to forge a path for AI adoption in one of the tightest regulatory environments there is. Financial institutions are not typically known for being trailblazers in the technology arena. Could AI be the right opportunity to become tech-forward?
Stripping away my optimism and hope for AI in financial services for a moment, it is worth mentioning that regulatory and compliance bodies have not yet figured out how to govern AI in financial institutions, or even what the heck to do with AI for the industry. Could AI be the catalyst that enables financial institutions and regulatory bodies to partner more closely? Perhaps I haven’t given up on optimism just yet.