*2.2. Challenges of AI*

Although AI is promising and is doing much to fuel digital financial inclusion, there are challenges associated with reaping the benefits of intelligent algorithms (Deloitte 2018b). Some of these challenges relate to data quality and to the responsibility requirements for rolling out AI technology (Sundblad 2018). The predictive power of AI depends chiefly on the availability of quality data; limited availability of data of the right quality and quantity can therefore act as an obstacle to that power (Harkut and Kasat 2019). Moreover, biases can remain hidden even in seemingly high-quality data (Sundblad 2018). In the financial sector, some reference data are often affected by quality issues, and the use of AI is premised on having a data-quality program in place (Sundblad 2018).

The use of intelligent machines also raises a challenge concerning liability (Harkut and Kasat 2019). The question that remains unanswered is who, or what, is to be held responsible when something goes wrong. Financial institutions are sometimes reluctant to give machines full autonomy, since machine behavior is not fully foreseeable (Deloitte 2018b; Sundblad 2018). In many cases, they keep a human supervisor in place to validate critical machine activities and decisions, such as blocking or releasing payments (Sundblad 2018). This partially defeats the purpose of using machines in the first place (Sundblad 2018). In some instances, relatively strict compliance and operational security standards, an insufficient understanding of AI's inherent risks, firm culture, and regulation can all act as barriers to the widespread adoption of AI in financial services firms (Harkut and Kasat 2019).
