Artificial Intelligence and Financial Surveillance

There is no disputing that the global technological wave of Artificial Intelligence (AI) has entered, and is slowly taking over, the Indian market. While AI has altered the dynamics of a multitude of sectors, its impact and influence on the finance industry is particularly interesting to analyze. This article examines the confluence of AI and Financial Technology (FinTech), and the ethical and technical challenges it raises, through the lens of one of India’s most successful and innovative FinTech startups, Rubique.

Founded by Manavjeet Singh, Rubique has a customer base of over 200,000 people and ‘phygital’ operations in 190 cities. Rubique uses AI to make the process of applying for and receiving a loan far easier and more efficient. As Singh, CEO and co-founder, puts it, “Rubique leverage[s] machine learning and AI with big data analytics to build a system that matches borrowers with the right lenders.”

Going further, the credit policies of the banks on the platform are linked to the Rubique system. The system’s self-learning abilities convert these policies into an evaluation matrix that checks a customer’s eligibility for a loan and automatically sends borrowers matching loan offers. The system then learns from disbursement and monitoring data to improve the accuracy of its matches.
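Rubique’s actual models are proprietary, but the general idea of an evaluation matrix can be sketched as a set of per-lender policy thresholds that each applicant is checked against. The sketch below is purely illustrative: the lender names, policy fields and thresholds are hypothetical, and in a real system they would be learned and updated from disbursement and monitoring data rather than hard-coded.

```python
# A minimal, hypothetical sketch of policy-based borrower-lender matching.
# Lender names, fields and thresholds are illustrative only and do not
# reflect Rubique's actual system or any real bank's credit policy.

from dataclasses import dataclass

@dataclass
class Borrower:
    monthly_income: float   # in INR
    credit_score: int       # e.g. a CIBIL-style score
    loan_amount: float      # requested amount in INR

# Each "credit policy" is a row in the evaluation matrix: a set of
# minimum requirements a borrower must satisfy to be eligible.
CREDIT_POLICIES = {
    "Lender A": {"min_income": 25_000, "min_score": 700, "max_amount": 500_000},
    "Lender B": {"min_income": 40_000, "min_score": 650, "max_amount": 1_000_000},
    "Lender C": {"min_income": 20_000, "min_score": 750, "max_amount": 300_000},
}

def eligible_lenders(borrower: Borrower) -> list[str]:
    """Return the lenders whose policy thresholds the borrower clears."""
    matches = []
    for lender, policy in CREDIT_POLICIES.items():
        if (borrower.monthly_income >= policy["min_income"]
                and borrower.credit_score >= policy["min_score"]
                and borrower.loan_amount <= policy["max_amount"]):
            matches.append(lender)
    return matches

if __name__ == "__main__":
    applicant = Borrower(monthly_income=35_000, credit_score=720, loan_amount=400_000)
    print(eligible_lenders(applicant))  # ['Lender A']
```

In practice, the “self-learning” element described above would tune these thresholds, or replace them with a statistical model, as repayment outcomes flow back into the system.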

While this process makes the evaluation and disbursement of loans far easier than before, it raises the question of what legislation governs the technology being used. When we talk about AI in finance and banking, it is very important that we correctly establish the extent of transparency and accountability that goes with it. Services like those offered by Rubique automate certain processes and involve a massive exchange of data from the client. This automation gives rise to multiple questions. Does the AI system have the autonomy to process the data at its own discretion? Who will be held responsible if the financial service fails to perform? The contention boils down to who owns the data that the AI system depends on: does liability lie with the party that designed the system or with the company that is using it?

Everyone has the right to protect their data, and we consent to providing our data to various digital platforms when we agree to the long documents outlining terms and conditions. But with AI technology developing at a fast pace, legislation finds it hard to keep up. The resulting lack of transparency has made, and will continue to make, it tough for us to understand how and where our data will be used.

Yet it seems that the speed, ease and relatively frictionless processes that artificially intelligent systems bring with them overshadow the customer’s awareness of how their data is treated. For example, if one comes across a recommendation for a diversified portfolio of securities, one does not ponder the possible implications of that portfolio. While in awe of the polished and fast interface, we tend to skip the important details in the fine print.

Companies can exploit this oversight to find a grey area in which they can compromise on certain ethical principles, bringing us back to the issue of transparency with regard to AI. Further, companies do not allow public scrutiny of their AI programmes for proprietary reasons. This means that their clients have no visibility into what data the AI algorithms are using, or to what extent.

Additionally, another possible flaw in the algorithms that AI uses is bias. These biases can arise either from a faulty data set used to train the algorithm or from human bias programmed into the data itself. AI is expected to be rational, fair and dispassionate, and we expect it to deliver a world of equal opportunity. We look towards a future where algorithms can make fair judgements based purely on statistics, free from human bias. Yet AI does not seem to be fully equipped to deliver this.
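The first source of bias is easy to demonstrate. The minimal, entirely synthetic sketch below (using scikit-learn; no real lender or data set is implied) trains a model on historical approval decisions that were themselves skewed against one group, and the model then reproduces that skew for two otherwise identical applicants.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# learns to reproduce that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

income = rng.normal(50, 15, n)        # applicant income, in thousands of INR
group = rng.integers(0, 2, n)         # 0 / 1: some demographic group

# Historical approvals depended on income, but officers also (unfairly)
# rejected many group-1 applicants — this bias is baked into the labels.
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two applicants identical in every respect except group membership:
applicants = np.array([[50, 0], [50, 1]])
print(model.predict_proba(applicants)[:, 1])  # group 1 gets a lower approval probability
```

Because the group attribute (or any proxy for it, such as a postcode) helps predict the biased historical labels, the model inherits the unfairness even though nobody explicitly programmed it to discriminate.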

Hence, it is clear that AI is spreading rapidly and urgently needs proper legislation and regulation. The ownership of data in particular is a highly contentious topic that our digital governance frameworks must accommodate and regulate. In addition, it is our job as end users to understand how to use this new technology responsibly. While the Indian government has responded with two national policy documents, there are still massive gaps to be filled. AI is an incredibly useful technology, and with appropriate regulation it is sure to make many aspects of our lives easier and more efficient than ever before.


Vivaan is a first-year undergraduate majoring in Economics and Finance. He is interested in financial technology and investment analysis, and hopes to combine different aspects of the liberal arts with business to innovate and come up with new ideas.
