Exploring Automated-Decision Making

Updated: May 17

by Thloni Mhango


On 16 April 2024, ORSSA, in partnership with the NRF and Norton Rose Fulbright South Africa, held an industry engagement on Automated Decision Making ("ADM"). In recent years there has been exponential growth in Artificial Intelligence ("AI") systems and in their diverse use cases across industry. There is also a growing need to distil these concepts and to ensure that people understand both how to leverage these tools and where their limitations lie.


The legal industry in particular has started to engage with provisions for the governance of, and litigation around, the unethical use of AI, bias, and misunderstood applications of these systems in various industries, including financial services, healthcare and public services.

 

The session kicked off with Michelle David (Director and Chairman of Norton Rose Fulbright) opening the discussion, sharing her personal views on how ADMs have become part of modern-day life: from ChatGPT assisting with ideas for dinner at the start of the day, to self-driving cars, co-pilot suggestions in emails, and personalised adverts in browsers based on text, speech and search history. The world of data science has fast expanded beyond its traditional spaces, and it was fitting for the first presentation to locate the home and birthplace of data science, machine learning and optimisation: OR in practice. David Clark, our deputy president of ORSSA, presented on the definition of OR, on ORSSA as a community, and on the role OR practitioners have played for years in the business of better, automated decision making and support.

 

Gilad Katzav (Associate at Norton Rose Fulbright) shared insights from the perspective of a legal practitioner, with case studies from industry where the use of ADMs had resulted in unfair outcomes for clients: historical training data sets carried bias into credit scorecard processing performed by a third-party vendor on behalf of a lending institution. This sparked engagement during the Q&A session: legal systems are holding institutions accountable for decisions that may carry inherent or perceived bias, whether deliberate, automated or outsourced. The discussion highlighted the need for accountability in the use of data and its outputs, as well as for overlaying decisions with discretion, something AI hasn't quite figured out yet.
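To make the bias concern concrete, below is a minimal sketch (illustrative only, not drawn from the case studies discussed) of one common audit on scorecard decisions: the disparate-impact ratio of approval rates across groups. The column names and data are hypothetical.

```python
# A minimal sketch of one common fairness check on scorecard outputs:
# the "disparate impact" ratio of approval rates between groups.
# Column names ("group", "approved") and the data are hypothetical.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Example: decisions produced by an automated scorecard.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})
print(f"Disparate impact ratio: {disparate_impact(decisions, 'group', 'approved'):.2f}")
```

A ratio close to 1.0 means similar approval rates across groups; values well below it flag outcomes worth interrogating, whether the model was built in-house or by a third-party vendor.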

 

Dr Arno Both (Head of Credit Research and Development at FirstRand) presented on the traditional credit process in banking, the definition of trust, and the role of banks in allocating capital throughout the economy. He walked through the life-cycle of the credit process: identifying which clients banks would like to lend to, ways to market credit to them, tools for assessing their affordability, scorecard modelling, and the determination of future repayments and the likelihood of expected default through behavioural modelling. In each component of the credit life-cycle, large financial institutions leverage automated decision tools differently to optimise their business: targeted marketing based on personal banking trends, projections of future income and affordability profiles, and triggers that determine recovery action when credit goes bad.
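As an illustration only (the actual FirstRand models were of course not shared), the sketch below shows the core idea behind scorecard-style default modelling: a logistic regression mapping hypothetical borrower features to a probability of default, fitted on synthetic data.

```python
# A minimal, assumed sketch of scorecard-style default modelling:
# logistic regression mapping borrower features to a probability of
# default. Features, coefficients and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: monthly income (R'000s) and credit utilisation.
X = np.column_stack([
    rng.normal(30, 10, 500),   # monthly income
    rng.uniform(0, 1, 500),    # utilisation of existing credit
])
# Synthetic labels: higher utilisation and lower income raise default odds.
logits = -2.0 + 3.0 * X[:, 1] - 0.05 * X[:, 0]
y = rng.random(500) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

applicant = np.array([[25.0, 0.8]])  # income 25k, 80% utilisation
pd_estimate = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of default: {pd_estimate:.1%}")
```

In practice, separate models of this flavour sit at different points of the life-cycle: application scorecards at origination, behavioural models on repayment history thereafter.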

 

Jess Reese, our resident OR practitioner and lead data scientist at the Discovery Science Lab, presented on use cases for Large Language Models ("LLMs") in insurance call centres. Typically, people's experience of call centres ranges from a great operator who is knowledgeable about the relevant product but has a terrible bedside manner, to the friendliest operator who perhaps started on the job yesterday and has limited knowledge of the company's products and offerings. LLM-based systems trained on the company's internal information and policy data can be used as an in-house search engine, allowing operators to focus on the human element of engaging with the customer while looking up specific (and hopefully non-hallucinatory) information as the client requests it. The bot is not exposed to the customer, operators are trained on efficient prompts, and there is the wonderful benefit of learning for everyone involved (the bots included).
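A minimal sketch of the retrieval idea behind such an in-house search engine is shown below, using TF-IDF similarity over hypothetical policy snippets. Production systems would pair an embedding model with an LLM that phrases the answer, but the lookup principle is the same; the documents and query here are invented for illustration.

```python
# A minimal sketch of the retrieval step behind an in-house "search
# engine": rank internal policy snippets against an operator's query.
# TF-IDF keeps the example self-contained; real deployments typically
# use embedding models plus an LLM. Snippets below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Hospital cover excludes cosmetic procedures unless medically required.",
    "Chronic medication is covered once registered on the chronic programme.",
    "Gap cover claims must be submitted within four months of treatment.",
]

vectoriser = TfidfVectorizer()
doc_vectors = vectoriser.fit_transform(documents)

query = "how long do I have to submit a gap claim?"
scores = cosine_similarity(vectoriser.transform([query]), doc_vectors)[0]

best = scores.argmax()
print(f"Most relevant snippet for the operator:\n  {documents[best]}")
```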

 

David opened the panel discussion, which included the presenters as well as Dr Martin Bekker (computational social scientist and AI ethics researcher) and Thloni Mhango (Head of Financial Risk at FirstRand and OR practitioner). The audience had plenty of questions on whether smaller organisations could contend with the AI tools and in-house capabilities typically afforded by larger institutions, to which Jess pointed to the number of accessible open-source tools available. However, as with all things, caution is required to truly understand the training data behind open-source systems, and to overlay decision outputs with discretion. Dr Bekker challenged the audience to reflect on the ethical components of AI tools and systems, including how to leverage information systems and design ADMs that give reliable results. There were many thoughts on the role people play in the application of ADMs, and Thloni charged the audience to always remember that AI and its uses require discernment: in ensuring unbiased training, in testing the outcomes of ADM models, and ultimately in ensuring that decisions result in positive outcomes for humanity.

 

In closing, there was consensus that the extensive use of ADMs and their applications is exciting, freeing humans to spend more time on creativity and critical-thinking tasks. The take-home, however, was that the lifecycle of these models requires iteration, accountability and human intervention wherever the decisions impact society. Given the growth of open-source AI, the opportunities for scale extend beyond large organisations: with the right third-party due diligence, smaller organisations can also tap into these tools and transform their operations.
