CPN News Recommender Engine: how our personalisation solution works

Instead of using a single algorithm, the CPN platform takes advantage of a variety of techniques to produce personalised recommendations. The platform’s A/B testing module allows publishers to test and optimise the configurations they use.

The News Recommender we have developed for CPN is a hybrid engine: it does not rely on a single technique or algorithm to feed recommendations to users, but combines several techniques in order to overcome the weaknesses of each individual approach and to offer recommendations based on different “points of view”.

In the CPN platform, publishers can create, via API calls, different kinds of Recommenders (see the sketch after the list below):

  1. Content-based recommendations: based on unsupervised keyword extraction, named entity extraction, semantic uplifting techniques, etc.

  2. Collaborative filtering techniques: assessing similarities in users’ consumption histories

  3. Most Popular recommendations: recommendations based on trending items

  4. Random recommendations: some random items to add variety to the recommendations

  5. Composite recommender: quota-based combinations of the above techniques

  6. Sentiment-based recommender: a recommender that takes into account automatic sentiment analysis of the news content (is the news uplifting or depressing?)
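
As an indication of how such recommenders might be created through the API, the sketch below shows a possible client-side workflow; the endpoint paths, payload fields and parameters are illustrative assumptions, not the documented CPN API.

```python
# Minimal sketch, not the actual CPN API: endpoint paths, field names
# and the authentication header are assumptions for illustration only.
import requests

CPN_API = "https://cpn.example.org/api"          # hypothetical base URL
HEADERS = {"Authorization": "Bearer <publisher-token>"}

def create_recommender(name: str, rec_type: str, params: dict) -> str:
    """Create a recommender of the given type and return its ID."""
    payload = {"name": name, "type": rec_type, "parameters": params}
    resp = requests.post(f"{CPN_API}/recommenders", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["id"]

# One recommender per technique, for example:
content_based = create_recommender("cb-news", "content-based", {"min_similarity": 0.3})
most_popular  = create_recommender("trending", "most-popular", {"window_hours": 24})
```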

Figure 1: CPN Recommender Architecture

The architecture of the recommender is depicted in Figure 1. News items are continuously ingested from publisher feeds (RSS, XML, etc.) and processed to extract useful information (entity extraction, unsupervised topic extraction, etc.). The enriched items are stored in a NoSQL store on the CPN platform. The recommenders then compute the most interesting articles for a specific user according to a user profile that is constantly refined by collecting user-generated events, such as clicks, reading time, and explicit likes/dislikes.
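
As a rough illustration of the data involved, an enriched item and a user event might look like the following once stored; the field names here are assumptions rather than the actual CPN schema.

```python
# Assumed (illustrative) document shapes; the real CPN schema may differ.
enriched_item = {
    "id": "article-42",
    "source_feed": "https://publisher.example.org/rss",
    "title": "Example headline",
    "entities": ["European Commission", "Brussels"],  # named entity extraction
    "keywords": ["digital policy", "media"],          # unsupervised keyword extraction
    "topics": ["politics"],                           # unsupervised topic extraction
    "sentiment": 0.6,                                 # automatic sentiment analysis
    "published_at": "2020-03-01T09:00:00Z",
}

user_event = {
    "user_id": "user-7",
    "item_id": "article-42",
    "type": "click",              # clicks, reading time, explicit likes/dislikes, ...
    "reading_time_s": 85,
    "timestamp": "2020-03-01T09:12:30Z",
}
```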

The advantage of the system is that it is very flexible: a large number of configurations and customisations are possible. It is unlikely that every publisher will use the same combination of techniques, and the configurations will probably need to be fine-tuned over time. For example, one publisher may privilege content-based recommendations over the other techniques, according to its specific business needs or the measured impact on users.
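
For instance, a quota-based composite recommender that privileges content-based suggestions could be configured along the following lines; only the quota idea comes from the description above, while the field names and values are illustrative assumptions.

```python
# Sketch of a quota-based composite configuration (field names assumed).
composite_config = {
    "name": "homepage-mix",
    "type": "composite",
    "quotas": {
        "content-based": 0.5,            # privileged technique: half of the slots
        "collaborative-filtering": 0.3,
        "most-popular": 0.15,
        "random": 0.05,                  # a little variety
    },
}

# The quotas should cover the whole recommendation list.
assert abs(sum(composite_config["quotas"].values()) - 1.0) < 1e-9
```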

To help publishers find the configuration best suited to their needs, we have developed the Recommender A/B testing module.

Figure 2: Recommender A/B testing

Figure 2 presents a high-level overview of the module: users are partitioned into groups, and every user is associated with a specific recommendation technique (or a specific hybrid combination). Additionally, the publisher can define which subset of news items may be fed to a specific group (for example, deciding what kind of content is delivered to users in different age groups).
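
A group definition combining these two ideas, a recommendation technique plus a filter on the items the group may receive, might look roughly like this sketch; the field names are assumptions, not the actual CPN configuration format.

```python
# Assumed shape of a group definition: a recommender plus an item filter.
group_config = {
    "name": "under-18",
    "recommender": "content-based",
    "item_filter": {                      # only items matching the filter are served
        "exclude_topics": ["crime", "gambling"],
        "min_sentiment": 0.0,             # skip strongly negative items
    },
}
```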

The module is tightly integrated with the CPN Recommender module and exposes an API that allows every publisher to create recommenders and user groups. It is a “configuration” module: recommenders and groups can be created and modified at any moment by the publisher or by platform administrators. User groups are created by specifying a name for the group; users are then added to a group by calling a dedicated API and specifying their user IDs.
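
The workflow might look roughly like the following sketch; again, the endpoint paths and payload fields are assumptions for illustration rather than the documented CPN API.

```python
# Hedged sketch of the group-management workflow: create a group, add
# users by ID, associate a recommender.  Endpoints and fields assumed.
import requests

CPN_API = "https://cpn.example.org/api"          # hypothetical base URL
HEADERS = {"Authorization": "Bearer <publisher-token>"}

# 1. Create a user group by specifying a name.
group = requests.post(f"{CPN_API}/groups",
                      json={"name": "variant-b"}, headers=HEADERS).json()

# 2. Add users to the group by specifying their user IDs.
requests.post(f"{CPN_API}/groups/{group['id']}/users",
              json={"user_ids": ["user-7", "user-8", "user-9"]},
              headers=HEADERS)

# 3. Associate a recommender (created earlier) with the group.
requests.put(f"{CPN_API}/groups/{group['id']}/recommender",
             json={"recommender_id": "cb-news"}, headers=HEADERS)
```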

Using this module, a publisher can test different versions of a recommender on different user groups at the same time, in order to compare the behaviour of different approaches to content personalisation. Every publisher can create an unlimited number of groups and recommenders and associate a specific recommender with a group.

A/B testing can produce concrete evidence of what actually works in personalisation. The module can be used for continuously testing new techniques and approaches in order to optimise conversion rates and gain a better understanding of customers.

Furthermore, the number of groups that can be tested is not restricted to only two (A/B): the module allows an unlimited number of recommenders to be tested concurrently in large-scale experimentation campaigns.
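
Because users are added to groups by ID, a publisher also needs a policy for splitting its audience into N variants. One common, deterministic option is a hash-based split, sketched below; this client-side logic is an assumption and not part of the CPN module itself.

```python
# Deterministic hash-based split of user IDs into N variants (client side).
import hashlib

def assign_variant(user_id: str, variants: list) -> str:
    """Map a user ID to one of the variants, stably across runs."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variants = ["content-based", "collaborative", "composite", "most-popular"]
print(assign_variant("user-7", variants))   # same variant for user-7 every time
```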

Recommender A/B Testing: usage

The module (via a REST API) allows every publisher to define:

  • An unlimited number of recommenders

  • An unlimited number of groups

  • The association of a specific recommender with a specific group

The structure of the module makes it especially suitable for production environments. It makes it possible to continuously test new techniques and approaches in order to optimise the desired metrics, or to test configuration changes to an existing recommender and apply them only to a restricted number of users before releasing them to the whole user base.
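
As a final, hedged illustration, the outcome of such an experiment could be compared with a simple metric such as click-through rate per group, computed from the kind of events the platform already collects; the event format used here is an assumption.

```python
# Click-through rate per group from (user_id, event_type) pairs (format assumed).
from collections import Counter

def ctr_per_group(events, group_of_user):
    """events: iterable of (user_id, event_type); returns CTR by group."""
    shown, clicked = Counter(), Counter()
    for user_id, event_type in events:
        group = group_of_user.get(user_id, "unassigned")
        if event_type == "impression":
            shown[group] += 1
        elif event_type == "click":
            clicked[group] += 1
    return {g: clicked[g] / shown[g] for g in shown}

events = [("user-7", "impression"), ("user-7", "click"), ("user-8", "impression")]
groups = {"user-7": "variant-b", "user-8": "variant-a"}
print(ctr_per_group(events, groups))   # {'variant-b': 1.0, 'variant-a': 0.0}
```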