
Introduction

The insurance industry offers a very wide range of products, such as individual, group and vehicle insurance, and many more. Each of these requires compliance with different procedures and legal requirements, as well as the analysis of various types of information, both when selling policies and when paying out claims.

This generates many processes that are time-consuming to handle manually, hence the industry's growing interest in artificial intelligence algorithms.

Even partial automation of these processes can significantly reduce the time spent working on a single insurance offer. It also allows for optimizing policy parameters and reducing the costs resulting from possible claims.

In this article, I would like to present a few areas in which artificial intelligence algorithms can be implemented, and the example results we were able to achieve in some of them.

Using artificial intelligence algorithms in insurance

In the case of vehicle damage, determining the amount paid under the insurance requires an analysis of the degree of damage to each part of the car. Often the appraiser will make an assessment based on the photos of the individual parts of the car. Then, on this basis, the amount to be paid out is estimated.
The use of image analysis algorithms and artificial intelligence can assist the appraiser in the process of assessing the degree of damage to specific parts of the car.


The Proof of Concept application that we created with our team lets you identify and mark specific parts of the car in the photos and assign an estimated degree of damage to each of them. Additional features available directly in the application could include finding the nearest service point, calling a tow truck or finding a replacement car.

[Car photo used: https://www.pxfuel.com/en/free-photo-qfmwd]

Creating insurance offers for new clients requires the analysis of many factors. For group insurance, these include, for example, the structure of the company's workforce, its business profile and the region in which it is based.

In addition, the seller must follow a set of rules when constructing such offers so that they are both beneficial to the company and attractive to the customer.

In this case, artificial intelligence can be used as a recommendation system that will help the seller choose the right options for a given customer. Such a solution would allow for a significant acceleration and facilitation of work required in preparing the offer.

Below are examples of recommendations made by the system when we know the age and gender of the customer (51-year-old man, 37-year-old woman, 39-year-old man). In this solution, the seller selects several options (row: masked), and on this basis, the system proposes others that should be included in the policy (row: recommendation).

The first row in each group represents the original policy that was accepted by the customer.

Additionally, based on historical data showing which policies were accepted by customers and which were rejected, the system could make even better choices in the proposed offers, thus increasing the sales effectiveness of new policies.

Changes to the terms of an insurance policy are most often proposed when the end date of the current agreement is approaching or when something concerning is happening with the policy (an unexpected surge of events generating greater-than-predicted compensation payments). In such cases, the most common suggestions are:

The role of artificial intelligence algorithms in this process could be to support the construction of a new offer by indicating which elements of a given policy would be best changed in order to minimize the risk for the insurance company while keeping the offer as favorable as possible from the customer's point of view.

Summary

In this article I presented example use cases of artificial intelligence algorithms in the insurance industry. It is worth noting that, apart from the cases discussed, there are also others, such as: a chatbot serving customers, risk estimation, or fraud and scam detection.

These three examples can be applied not only in the insurance industry but also in many others, such as the financial industry, as described in more detail in the article The use of AI in the financial industry.

If the topics discussed here interest you, please feel free to reach out through my LinkedIn profile, as well as follow our entries on the Isolution blog and our company LinkedIn and Facebook profiles.

Katarzyna Roszczewska

When beginning a journey with artificial intelligence algorithms, it is worth asking yourself two fundamental questions:

  1. What is the issue we are trying to solve?
  2. How to build a model that will assist in solving this issue?

To answer the first question, the only limiting factors are our imagination and the availability of data we can use to train the model. For the second question, the choice of model will largely depend on the problem we are solving, the available resources and the time frame we have at our disposal.

This is best shown with an example. Let's assume that we are trying to create an algorithm that helps segregate waste properly based on a photo. The choice of the algorithm itself will depend on how the problem is posed. We could use random decision forests or support vector machines after extracting the appropriate features from the image. We could also use convolutional neural networks, which operate directly on the image data. We can design the architecture of such a network ourselves; however, it is worth considering a pre-trained model, which can save a lot of time. To solve this problem, I decided to adapt pre-trained artificial neural networks for image classification. How does this look in practice? Let's go through the process step by step.

Data analysis and preparation

The data set that I decided to use can be found here: Garbage Classification. It contains photos of waste divided into 6 categories: cardboard, glass, metal, paper, plastic and mixed waste. The last category contains photos of items that could largely be assigned to the other 5 groups, so we exclude it from further analysis. Below is a graph showing the number of photos available for each class.

A very important stage in preparing the data set is to divide it into at least two subsets: training and validation. An even better practice is to create three separate sets: training, validation and test. In that case, the results obtained on the test set are representative and show the real effectiveness of the system on new, previously unseen photos. In my case, 60% of the photos were used for training, 20% formed the validation set, and the remaining 20% went to the test set.
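
Such a split could be produced, for example, like this (a sketch; the per-class folder layout is an assumption):

# A possible 60/20/20 split of the photos into training, validation and test sets
# (sketch; the 'garbage_data/<class>/' folder layout is an assumption)
import glob
import os
from sklearn.model_selection import train_test_split

classes = ['cardboard', 'glass', 'metal', 'paper', 'plastic']

for cls in classes:
    files = glob.glob(os.path.join('garbage_data', cls, '*.jpg'))
    # 60% for training, then the remaining 40% split evenly into validation and test
    train_files, rest_files = train_test_split(files, test_size=0.4, random_state=42)
    valid_files, test_files = train_test_split(rest_files, test_size=0.5, random_state=42)
    print(cls, len(train_files), len(valid_files), len(test_files))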

Below are sample photos for each class. Each photo is 512 x 384 pixels. When using a ready-made neural network, it is very important to adjust the size of the images to the input size accepted by the network. For the Xception network, the input layer expects 299 x 299 images, while for VGG16 it is 224 x 224. Therefore, before training the model, we need to rescale our images.
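
In this solution the rescaling happens on the fly via the target_size argument of the data generators shown later, but a single photo can also be rescaled explicitly, for example with Pillow (a sketch; the file paths are assumptions):

# Rescaling a single photo to the 299 x 299 input expected by Xception
# (sketch; the file paths are assumptions)
from PIL import Image

img = Image.open('garbage_data/glass/glass1.jpg')
img.resize((299, 299)).save('glass1_299x299.jpg')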

Preparation and training of models

To solve the given problem, I used two popular network architectures: VGG16 and Xception. Both models were pre-trained on the ImageNet collection, which contains pictures of objects belonging to 1000 classes, so the output layer responsible for classifying the input image has 1000 outputs. For the problem we are analyzing, the size of the output layer should be 5. Below is the code that adapts the pre-trained model to our data set.

# model adaptation
# (load Xception pre-trained on ImageNet and replace its 1000-class output
#  with a 5-class softmax layer)
import tensorflow as tf
from tensorflow.keras import layers, models

base_net = tf.keras.applications.xception.Xception(weights='imagenet')
base_net_input  = base_net.get_layer(index=0).input
base_net_output = base_net.get_layer(index=-2).output   # 2048-dim features before the classifier
base_net_model  = models.Model(inputs=base_net_input, outputs=base_net_output)

# freeze the pre-trained weights so that only the new classifier is trained
for layer in base_net_model.layers:
    layer.trainable = False

new_xception_model = models.Sequential()
new_xception_model.add(base_net_model)
new_xception_model.add(layers.Dense(5, activation='softmax', input_dim=2048))
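
The VGG16 network can be adapted in an analogous way; a possible sketch is below (the 4096-unit size of its penultimate, fully connected layer follows from the standard VGG16 architecture):

# analogous adaptation of the pre-trained VGG16 network (sketch)
vgg_base = tf.keras.applications.vgg16.VGG16(weights='imagenet')
vgg_base_model = models.Model(inputs=vgg_base.get_layer(index=0).input,
                              outputs=vgg_base.get_layer(index=-2).output)  # fc2 layer, 4096 units

for layer in vgg_base_model.layers:
    layer.trainable = False  # keep the ImageNet weights frozen

new_vgg16_model = models.Sequential()
new_vgg16_model.add(vgg_base_model)
new_vgg16_model.add(layers.Dense(5, activation='softmax', input_dim=4096))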

Due to the limited data set used for training the model, I decided to expand it using data augmentation. Below is a fragment of the code responsible for training the selected model.

# hyperparameters
# (img_dim matches the Xception input size; batch_size and epochs are example values)
from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

img_dim = (299, 299)
batch_size = 32
epochs = 50
base_learning_rate = 0.0001
opt = optimizers.SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)

new_xception_model.compile(optimizer=opt,
                           loss='sparse_categorical_crossentropy',
                           metrics=['accuracy'])

# data preparation and augmentation
# (random shears, zooms and horizontal flips expand the limited training set)
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# the validation data is only rescaled, never augmented
valid_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    directory='training_data',
    target_size=img_dim,
    color_mode="rgb",
    batch_size=batch_size,
    class_mode="sparse",
    shuffle=True,
    seed=42
)

valid_generator = valid_datagen.flow_from_directory(
    directory='validation_data',
    target_size=img_dim,
    color_mode="rgb",
    batch_size=batch_size,
    class_mode="sparse",
    shuffle=True,
    seed=42
)

# model training
# (fit_generator returns the training history; the callbacks are defined further below)
garbage_recognition_model = new_xception_model.fit_generator(
    generator=train_generator,
    validation_data=valid_generator,
    epochs=epochs,
    callbacks=[checkpoint, tensorboard_callback, earlyStop]
)

During the training of the selected models I used early stopping: if the recognition accuracy on the validation set does not improve over a certain number of epochs, training is interrupted. This approach reduces the risk of the model overfitting to the data. Below are the learning curves for the training and validation sets. It is evident that in this case the Xception network did much better, achieving more than 80% accuracy on the validation set.
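
The checkpoint, tensorboard_callback and earlyStop objects referenced in the training call above could be defined roughly as follows (a sketch; the monitored metric, patience and file paths are assumptions):

# possible definitions of the callbacks passed to fit_generator (sketch)
from tensorflow.keras import callbacks

earlyStop = callbacks.EarlyStopping(monitor='val_accuracy', patience=10,
                                    restore_best_weights=True)
checkpoint = callbacks.ModelCheckpoint('best_xception_model.h5',
                                       monitor='val_accuracy',
                                       save_best_only=True)
tensorboard_callback = callbacks.TensorBoard(log_dir='logs')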

Effectiveness of the created solutions

As I mentioned before, it is best to determine the actual effectiveness of our models on a new data set that did not participate in their training. Therefore, below I present the results achieved by the models on the test set, which contained 480 photos. The results confirm that the model based on the pre-trained Xception network did much better: it achieved an accuracy of 83%, almost 10 percentage points more than the model based on the VGG16 architecture.

VGG16: ACC = 74%

Xception: ACC = 83%
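
These accuracies come from evaluating each model on the held-out test set. A minimal sketch of such an evaluation (the directory name is an assumption):

# evaluating the adapted model on the held-out test set (sketch)
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
    directory='test_data',       # assumed folder with the remaining 20% of photos
    target_size=img_dim,
    color_mode="rgb",
    batch_size=batch_size,
    class_mode="sparse",
    shuffle=False
)

test_loss, test_acc = new_xception_model.evaluate(test_generator)
print(f"Test accuracy: {test_acc:.2%}")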

Summary

The article shows how to take pre-trained network architectures and apply them to a specific problem we want to solve. It briefly discusses the process of data analysis and preparation, the adaptation of pre-trained models to one's needs, and methods of assessing the effectiveness of the created solution.

This is the first in a series of articles intended for people who want to start their adventure with artificial intelligence algorithms. I invite you to follow our entries on the Isolution blog and our company profiles on LinkedIn and Facebook, as well as to subscribe to our newsletter.


According to a report on the use of artificial intelligence in the BFSI sector[1] (banking, financial services, insurance), the value of this market area was estimated at $2.5 billion in 2017. Forecasts show that by 2024 it may reach up to $25 billion. Companies in the area of broadly defined financial services invest in solutions using artificial intelligence that allow them to increase their profits by building an attractive and competitive offer, as well as to optimize their processes. In this article I would like to talk about why artificial intelligence has become so popular and demonstrate in which areas it can be used in the financial industry.

AI, why now?

The term “artificial intelligence” was first coined in 1956 by John McCarthy, who defined it as “the science and engineering of making intelligent machines”. A little over 40 years passed from the definition of artificial intelligence until we began to actively use it to solve puzzles and, over time, tackle business problems. In 1997, multi-time world chess champion Garry Kasparov was beaten by a computer. This moment sparked many discussions about artificial intelligence, and its possible applications began to appear. A little over 20 years later, in 2019, the DeepCubeA neural network[2] solved a Rubik's Cube in 1.2 s, performing this task three times faster than the best people in this field. This was proof of how much artificial intelligence algorithms have developed over the past years and an example of the possibilities that AI offers in terms of process optimization.

Nowadays, artificial intelligence algorithms are used to solve an increasing number of problems in various areas of our lives, from image recognition to autonomous driving. In recent years, we have observed growing interest in this topic from companies in many industries. However, since the term has been known for almost 70 years, why are we only now seeing such huge progress in this area? There are several reasons:

The use of artificial intelligence algorithms in banking

The financial industry is characterized by access to huge data sets. The use of artificial intelligence algorithms makes it possible to create solutions that will determine a company's competitiveness on the market, as well as optimize its processes.

What solutions exactly? Here are some ideas:

a. recommendation system

Recommending the right products to current or potential customers based on an analysis of their behavior is a very important element of sales and up-selling for many companies, including banks. In addition to obvious financial gains, it also entails the potential satisfaction of customers who do not feel overwhelmed with random product advertisements, but are treated on an individual basis.

A great example of how personalization plays a big role in gaining customers and their trust is Netflix. Netflix's machine learning algorithms, based on the available information, not only offer us titles that we will potentially want to watch, but also select the trailers for movies and series that we see on the main page immediately after logging in. If our profile shows that we like movies with Henry Cavill, the system will probably suggest the series he stars in, and on the main page we will see its trailer[3].

Similarly, in the banking industry, proposing the right products can be based on a range of information, such as the consumer behavioral profile or information about the products we have on offer.

b. personal assistant

Another example of applying artificial intelligence algorithms in solutions that improve the bank's work, and also reinforce a positive opinion among customers, is a chatbot or, in a more extensive version, a personal assistant. The advantage of this type of system is its 24-hour availability and immediate answers to questions frequently asked by clients. This gives a number of benefits, of which the most important are:

In addition, such a solution makes it possible to present the customer with the offers best suited to their preferences and habits, as well as to help them manage their home budget, e.g. by finding and sending information about subscriptions they may have forgotten about.

c. support of stock market operations

Another example of the use of algorithms in the financial sector is support for stock market decision making. Before making an investment decision, every investor faces a number of factors that influence share prices and future profits.

Artificial intelligence algorithms make it possible to shorten the time spent monitoring these factors, which sometimes number in the hundreds. They also point out the information relevant to the potential actions we want to take.

Properly prepared solutions enable recognition of patterns present on the market, as well as forecasting prices of shares that interest us.

d. risk estimation

In the financial industry, one of the most important factors directly translating into the amount of capital owned is the appropriate assessment of various types of risk, such as:

In this area, it is also possible to utilize artificial intelligence algorithms. We can estimate the potential risk of entering into a financial cooperation or granting a loan based on customer data such as financial history or creditworthiness. It is worth adding that various types of alternative data can also be used here, such as regularity in making payments that are not included in the financial history, or publicly available data from social media, which build the credibility of a potential client or partner. In this case, it is also important that our decisions are transparent, so it is necessary to use interpretable models. Using such models when assessing things like creditworthiness helps build customer confidence in the company, because the decisions are fair, do not depend on human interpretation and are fully verifiable.
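
As a toy illustration of what an interpretable model can look like (a sketch with made-up features and labels, not a production credit-scoring model):

# toy sketch of an interpretable credit-risk model (hypothetical features and labels)
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical customer features: [monthly_income, years_of_history, missed_payments]
X = np.array([[5200, 7, 0],
              [1800, 1, 3],
              [3100, 4, 1],
              [2500, 2, 4],
              [6100, 9, 0],
              [1500, 1, 5]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = defaulted, 0 = repaid (made-up labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# the sign and magnitude of each coefficient show how a feature influences
# the predicted risk, which keeps the decision verifiable
for name, coef in zip(['monthly_income', 'years_of_history', 'missed_payments'],
                      model.coef_[0]):
    print(f'{name}: {coef:+.4f}')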

e. detecting fraud and scam attempts

The detection of potential fraud or financial scams is another key area in which the use of artificial intelligence algorithms should be considered. In this case, as with risk estimation, any wrong decision generates large financial losses and erodes customer confidence in the brand. Here, it is also very important to create a profile of typical client behavior based on their financial history. On this basis, it is possible to distinguish bank operations performed by customers from those generated by bots, or to detect suspicious activity on a customer's account based on their personal profile and habits.

Summary

The examples presented above are just a brief overview. They do not exhaust the list of possible uses of artificial intelligence in the financial industry, but they give a good reference point. The topic is very broad, and one article is not enough to show the full range of possibilities that artificial intelligence algorithms offer in the financial industry. For this reason, we plan to publish another article in which my teammate will talk about specific cases and projects that we had the opportunity to implement for clients in this area. Everyone interested is invited to follow our entries on the Isolution blog and our company profiles on LinkedIn and Facebook.

[1] https://www.gminsights.com/industry-analysis/artificial-intelligence-ai-in-bfsi-market

[2] https://www.theregister.co.uk/2019/07/16/ai_rubiks_cube/

[3] https://netflixtechblog.com/artwork-personalization-c589f074ad76

In my previous post we went over the topic of autoencoders. It is certainly knowledge worth putting into practice. Let's now imagine a system where communication between servers is done using Kafka. During the lifetime of the system, it turned out that some events are quite harmful. We must detect them and transfer them to a separate process where they will be thoroughly examined.

Let’s start with a few assumptions:

Event processing with Kafka Streams

Kafka Streams is a library that allows for processing data between topics. The first step is to plug into [all_events] and build a processing topology.

By filtering on the event key, events are directed to dedicated topics where they will be handled appropriately.
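
The topology itself is built with Kafka Streams in Java; purely as an illustration of the same key-based routing idea, a rough Python sketch with confluent-kafka could look like this (topic names and the key convention are assumptions):

# illustrative key-based routing between topics with confluent-kafka (Python);
# the actual solution uses Kafka Streams in Java, and the topic names are assumptions
from confluent_kafka import Consumer, Producer

consumer = Consumer({'bootstrap.servers': 'localhost:9092',
                     'group.id': 'event-router',
                     'auto.offset.reset': 'earliest'})
producer = Producer({'bootstrap.servers': 'localhost:9092'})
consumer.subscribe(['all_events'])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # route by key: anomalous events go to their own topic for further analysis
    target_topic = 'anomalous_events' if msg.key() == b'ANOMALOUS' else 'normal_events'
    producer.produce(target_topic, key=msg.key(), value=msg.value())
    producer.flush()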

Autoencoder with DL4J

The library allows for importing models built in Keras. This is a good solution when the AI team works in TF/Keras and is responsible for building and customizing the models. In this case, however, we will take a different route: we will create and train an autoencoder in Java.

The events have the same structure and values as in the example previously discussed. They are divided into two CSV files, [normal_raw_events.csv] and [anomalous_raw_events.csv].

Incoming data is not standardized. We build a dedicated NormalizerMinMaxScaler which scales values to the range of [0.0-1.0].

The trained normalizer will serve as a pre-processor for a dedicated iterator that navigates the [normal_raw_events.csv] file.

The autoencoder will have the same structure as the previously mentioned example in Keras.

The model and the normalizer are saved; ultimately, they should end up on a dedicated resource from which the running application can download them and build its configuration.

The trained model shows an average reconstruction error of 0.0188 for typical events, while for anomalies it is 0.0834. By plotting the MSE for 100 events from the two groups on a chart, we can set the cut-off threshold at [threshold = 0.045].

Kafka Stream Autoencoder Transformer

To map the model into the processing topology, I will use the ValueTransformer interface, implemented in the AnomalyDetection class. In the constructor, we create the model with its normalizer, along with the classes that will help calculate the reconstruction error.

The transform method receives the events collected in the time window. They must be mapped to the [INDArray] format understood by the model. For each event a reconstruction error is calculated, and those that exceed the threshold receive the ANOMALOUS key.

Summary

Further readings

Mateusz Frączek, R&D Division Leader

Anomalies in systems occur rarely. Validation layers stand guard over correctness by catching them and eliminating them from the process. A cash withdrawal request in a place unusual for the card owner, or a sensor reading that exceeds the norms, can be verified based on profiles or historical data. However, what happens when an event does not differ from the norm at first glance?

Multidimensional nature of events

Anomalies are not easy to detect. It is often the case that the values of the adopted characteristics subtly deviate from the correct distribution, or the deviation from the norm is only noticeable after taking into account a series of events and their time characteristics. In such cases, the standard approach is to analyze the characteristics in terms of, for example, their mutual correlation. Mirek Mamczur elaborated on this very well in his post.

For our needs, we will generate an artificial data set in which one of the classes will be considered anomalous. The events will have 15 characteristics, and they will be clustered fairly close together with a standard deviation of 1.75.

To make the model work harder, we can force the clusters to be generated closer together by reducing [center_box].
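
Such a data set can be generated, for example, with scikit-learn's make_blobs (a sketch; n_samples, center_box and which class is treated as the anomaly are assumptions, while the 15 features and the 1.75 standard deviation follow the text):

# generating the artificial data set (sketch)
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=10000, n_features=15, centers=2,
                  cluster_std=1.75, center_box=(-10.0, 10.0), random_state=42)

normal_events = X[y == 0]      # treated as correct events
anomalous_events = X[y == 1]   # treated as anomalies
print(normal_events.shape, anomalous_events.shape)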

Autoencoder (undercomplete)

An interesting feature of this architecture is its ability to encode data into a representation containing fewer features (a latent representation). During training, the measure of how faithfully the input data is reproduced is the reconstruction error.

Having an n-dimensional x, the encoder f(x) = h compresses x to its m-dimensional form h, where m < n. The decoder then tries to restore the data to the original number of dimensions: g(h) = ~x. The measure of faithfulness of the reproduction is the value of the loss function L(x, g(f(x))); the smaller it is, the closer our ~x is to x.

Following this lead, if for model-training purposes we define the correct events in our system, the model should extract features from them and, based on those features, reproduce each event with some approximation. When an abnormal event is fed to the model, the reconstruction error should be noticeably greater due to the different characteristics of the data.

Keras

Using Pandas we build a DataFrame containing test data. After scaling, 20% of it will be used for validation.

For some models and the problems they solve (e.g. CNN classification), increasing the depth can help extract more information from the data. Autoencoders that are too deep, however, will learn to copy X into Y without building the compressed representation we care about. In our example, it is worth checking how the MSE changes when we increase the depth and reduce the number of neurons.
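
A minimal version of such an undercomplete autoencoder in Keras might look like the sketch below (the layer sizes are illustrative; only the 15-feature input follows the text):

# minimal undercomplete autoencoder (sketch; layer sizes are illustrative)
from tensorflow.keras import layers, models

n_features = 15
autoencoder = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(10, activation='relu'),            # encoder
    layers.Dense(6, activation='relu'),             # latent representation (m < n)
    layers.Dense(10, activation='relu'),            # decoder
    layers.Dense(n_features, activation='sigmoid')  # reconstruction of the scaled input
])
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(X_train_scaled, X_train_scaled, epochs=100,
#                 batch_size=64, validation_split=0.2)  # X_train_scaled: scaled training events (assumed)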

The model was trained for 100 epochs. Going beyond this value did not show a tendency for improvement, and the result stabilized at 0.017–0.018 MSE.

We will plot the cut-off threshold [threshold = 0.035], above which events will be classified as suspicious.

After some modifications, this block of code can be plugged into a system and serve as a validator. Everything with MSE ≥ threshold will be sent to a dedicated queue of suspicious events requiring analysis; the rest will be handled by the standard flow.
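
In code, that check could be as simple as the sketch below (the threshold value follows the text; the autoencoder and events_scaled variables are assumed to hold the trained model and the new, scaled events):

# flagging suspicious events by reconstruction error (sketch)
import numpy as np

threshold = 0.035
# 'autoencoder' is the trained model (e.g. from the sketch above);
# 'events_scaled' is assumed to hold new events after the same scaling
reconstructions = autoencoder.predict(events_scaled)
mse = np.mean(np.square(events_scaled - reconstructions), axis=1)

suspicious = events_scaled[mse >= threshold]   # sent to the dedicated queue for analysis
normal = events_scaled[mse < threshold]        # handled by the standard flow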

Summary

In the examples, I used an artificially generated data set intended for clustering. The data for the two classes had features with noticeable dispersion between them, which helped the model detect the differences during training. In real situations, only some of the features will deviate from the norm. Autoencoders are one of the available options for evaluating an event and should be used alongside other algorithms.

Further readings

Mateusz Frączek, R&D Division Leader