Creating Explainable AI With Rules

There’s a fascinating dichotomy in artificial intelligence between statistics and rules: machine learning on one side, expert systems on the other. Newcomers to artificial intelligence (AI) often regard machine learning as innately superior to brittle rules-based systems, yet the history of the field reveals that both rules and probabilistic learning are integral components of AI. This fact is perhaps nowhere truer than in establishing explainable AI, which is central to the long-term business value of AI in front-office use cases.

Granted, simple machine learning can automate backend processes. However, applying deep learning and complex neural networks, which are far more accurate than basic machine learning, to mission-critical decision-making and action requires explainability.

Using rules (and rules-based systems) to explicate machine learning results creates explainable AI. Many of the far-reaching enterprise applications of AI, such as combating financial crimes or predicting an individual’s immediate and long-term future in health care, require explainable AI that’s fair, transparent and compliant with regulations. Rules can explain machine learning results for these purposes and others.

Rules-Based Explanations

The learning capability of statistical AI excels at sophisticated pattern recognition to detect signals presaging events. In health care, machine learning is helpful for combining innumerable factors to determine the likelihood of a patient needing intubation within a defined time period or being at risk of heart failure. In finance, analysis of a markedly different, yet equally broad, range of data can reveal complicated money laundering networks spanning continents, or identify the most suitable person to offer a specific mortgage type.

Explainability issues arise because machine learning outputs are numerical; deep neural networks are so opaque that users don’t necessarily know which factor contributed to what aspect of the resulting score. There are several emergent techniques for increasing the explainability and interpretability of machine learning results. Once organizations gain insight into the black box of intricate machine learning models, the best way to explain those results to customers, regulators and legal entities is to translate them into rules that, by their very definition, offer full transparency for explainable AI. Rules can also highlight points of bias in models.
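One way to picture this translation step is a surrogate rule that approximates an opaque score with a human-readable condition. The sketch below is a toy illustration: both the "black box" scoring function and the rule's thresholds are invented for this example, not taken from any real model.

```python
def black_box_risk_score(amount, n_transfers):
    """Stand-in for an opaque model's numeric output (thresholds invented)."""
    return 0.9 if (amount > 9_500 and n_transfers > 5) else 0.1

def surrogate_rule(amount, n_transfers):
    """Transparent rule extracted to approximate the model:
    flag when amount exceeds 10,000 AND there are more than 5 transfers."""
    return amount > 10_000 and n_transfers > 5

# Fidelity: how often the readable rule agrees with the opaque score.
cases = [(500, 1), (20_000, 8), (15_000, 2), (12_000, 10), (9_000, 9), (9_800, 7)]
agreements = sum(
    (black_box_risk_score(a, n) > 0.5) == surrogate_rule(a, n) for a, n in cases
)
fidelity = agreements / len(cases)
print(f"rule fidelity: {fidelity:.0%}")  # the rule misses one borderline case
```

The fidelity measure makes the trade-off explicit: a rule a regulator can read may not match the model on every borderline case, and that gap is itself useful information.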

Symbolic Reasoning

Contemporary explainability techniques, such as adjusting the weights and measures of model inputs to determine their effects on outputs, are merely the first step toward explainable AI. Once the factors responsible for a model’s results are identified, enumerating them with rules makes even the densest deep learning models transparent. The knowledge-based side of AI was so influential during the field’s formative years precisely because it is predicated on rules, and those same rules may prove just as valuable for explaining machine learning results.

Rules are repeatable, consistently producing the same output from the same input; statistical machine learning outputs are tenuous at best. Incorporating knowledge-based AI systems involving Prolog or other forms of symbolic reasoning can codify intricate explanations of machine learning results into sustainable rules that are well suited to satisfying regulators and consumers in health care, finance and other verticals. Unlike machine learning, the semantic inferencing on the knowledge-based side of AI understands the meaning of its processing results, which is valuable for deriving rules from machine learning outputs.
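To make the symbolic-reasoning idea concrete, here is a minimal forward-chaining engine in the spirit of Prolog-style inference. It is illustrative only; a production system would use Prolog, Datalog or a dedicated rules engine, and the facts and rules below are hypothetical.

```python
# Known facts about a (hypothetical) transaction under review.
facts = {"high_amount", "many_transfers"}

# Each rule: (set of premises, conclusion). These example rules are invented.
rules = [
    ({"high_amount", "many_transfers"}, "suspicious_pattern"),
    ({"suspicious_pattern", "offshore_account"}, "file_report"),
    ({"suspicious_pattern"}, "escalate_review"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain(facts, rules)
print(sorted(result))
```

Because every derived conclusion traces back through explicit premises, the chain of reasoning is fully auditable, which is precisely the transparency property the article attributes to rules.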

Ongoing Feedback

The final component of using rules for explainable AI involves additional machine learning. It’s necessary to run machine learning on the results of actions taken from machine learning models to see whether the outcomes were actually beneficial, or whether there’s some way they could be improved. For example, when micro-segmenting which members of a suburban population with a given level of education and income are most appropriate for a particular annuity offering, it’s important to run machine learning on the results to see how that offering affected both the financial institution and its customers, and whether the customers really gained from it. Results of this feedback loop can refine the rules for explaining the results of the original machine learning models, or indicate necessary changes to the model and its input data.
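The feedback loop above can be sketched in miniature: score the outcomes of actions the model recommended and decide whether the explaining rule needs revisiting. The outcome records and the 60% threshold below are illustrative assumptions, not figures from the article.

```python
# Outcomes of customers targeted by the (hypothetical) annuity offering.
outcomes = [
    {"offered": True, "benefited": True},
    {"offered": True, "benefited": False},
    {"offered": True, "benefited": True},
    {"offered": True, "benefited": True},
]

offered = [o for o in outcomes if o["offered"]]
benefit_rate = sum(o["benefited"] for o in offered) / len(offered)

# If most targeted customers benefited, keep the rule; otherwise flag it so
# the explanation rules (and the model's inputs) can be revised.
keep_rule = benefit_rate >= 0.6
print(f"benefit rate {benefit_rate:.0%}, keep rule: {keep_rule}")
```

In practice this evaluation would itself be a machine learning task over many more records, but the loop structure is the same: measure, compare against a criterion, and feed the verdict back into the rules.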

The Need For Explainable AI

The fundamental necessity for explainable AI spans regulatory compliance, fairness, transparency, ethics and the avoidance of bias, though this is not a complete list. For example, the effectiveness of counteracting financial crimes and increasing revenues from advanced predictions in financial services could be greatly enhanced by deploying more accurate deep learning models, but their results would be arduous to explain to regulators. Translating those results into explainable rules is the basis for more widespread AI deployments that produce a more meaningful impact on society.
