
Correlated Cross-Occurrence (CCO): How to make data behave



Cross-occurrence allows us to ask the question: are two events correlated?

To use the ecommerce example: a purchase is the conversion, or primary, action. A detail page view might be related, but we must test each cross-occurrence to make sure. I know for a fact that with many ecom datasets it is impossible to treat these events as the same thing and get anything but a drop in recommendation quality (I’ve tested this). People who use the ALS recommender in Spark’s MLlib sometimes tell you to weight the view less than the purchase, but this is nonsense (again, I’ve tested this). What is true is that *some* views lead to purchases and others do not, so treating them all with the same weight is pure garbage.

What CCO does is find the views that seem to lead to purchase. It can also find category preferences that lead to certain purchases, as well as location preferences (a purchase triggered when logged in from some location), and so on. Just about anything you know about users, or can phrase as a possible indicator of user taste, can be used to get a lift in recommendation quality.

So in the example below, purchase history is the conversion action, while likes and downloads are secondary actions looked at as cross-occurrences. Note that we don’t need to have the same IDs for all actions.
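To make the shape of the data concrete, here is a minimal sketch with made-up matrices (the toy counts and the numpy usage are my illustration, not from the original post). Each action gets its own user-by-item matrix, and the secondary action can use a completely different ID space. The cross-occurrence counts are just a matrix product; each count is then tested with LLR (described below) so only the correlated pairs survive.

```python
import numpy as np

# Hypothetical toy data: rows are users, columns are items.
# P holds the primary action (purchases); L holds a secondary action (likes).
# Downloads would get a third matrix handled the same way. Note that the
# secondary action can have its own ID space and its own number of columns.
P = np.array([[1, 0, 1],        # purchases of 3 products
              [0, 1, 0],
              [1, 1, 0]])
L = np.array([[1, 1, 0, 0],     # likes of 4 (possibly unrelated) things
              [0, 0, 1, 0],
              [1, 0, 0, 1]])

# Cross-occurrence counts: entry (i, j) is the number of users who purchased
# product i AND liked thing j. These raw counts are then tested with LLR
# (see below) so that only the correlated (product, like) pairs are kept as
# indicators.
cross_counts = P.T @ L
print(cross_counts)
```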

BTW, to illustrate how powerful this idea is, I have a client that sells one item a year, on average, to a customer. It’s a very big item and has a lifetime of one year. So using ALS you could only train on the purchase, and if you were gathering a year of data there would be precious little training data. Also, when a user has no purchase it is impossible to recommend; ALS fails on all users with no purchase history. With CCO, however, the whole user journey and any data about the user you can gather along the way can be used to recommend something to purchase. So this client would be able to recommend to only 20% of their returning shoppers with ALS, and those recs would be of low quality, based on only one event far in the past. CCO, using all the clickstream (or the important parts of it), can do quite well.

This may seem like an edge case, but it is one only in degree: every ecom app has data it is throwing away, and CCO addresses this.


What we need is a way to compare two events at the individual level. The comparison is called the Log-Likelihood Ratio, or LLR. It allows us to look across all page views and see which correlate with which purchases. Not all page views are created equal, but now we have a way to find the important ones!
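As a sketch of how the test works, here is the entropy-based LLR formulation that Ted Dunning described and Apache Mahout implements, applied to a 2x2 contingency table for one (page view, purchase) pair. The specific counts below are made up for illustration.

```python
import math

def x_log_x(x):
    # 0 * log(0) is taken as 0
    return 0.0 if x == 0 else x * math.log(x)

def entropy(*counts):
    # Unnormalized Shannon entropy of a list of counts
    total = sum(counts)
    return x_log_x(total) - sum(x_log_x(c) for c in counts)

def llr(k11, k12, k21, k22):
    # 2x2 contingency table for one (view, purchase) pair:
    #   k11 = users who did both, k12 = viewed but did not purchase,
    #   k21 = purchased without the view, k22 = did neither
    row_entropy = entropy(k11 + k12, k21 + k22)
    col_entropy = entropy(k11 + k21, k12 + k22)
    matrix_entropy = entropy(k11, k12, k21, k22)
    if row_entropy + col_entropy < matrix_entropy:
        return 0.0   # guard against floating-point round-off
    return 2.0 * (row_entropy + col_entropy - matrix_entropy)

# A view that co-occurs with a purchase far more often than chance
print(llr(100, 10, 10, 10000))   # large score -> keep it as an indicator
# A view that co-occurs about as often as chance would predict
print(llr(1, 100, 100, 10000))   # near-zero score -> drop it
```

Pairs with a high LLR score become indicators in the model; the rest are discarded, which is exactly how CCO separates the views that lead to purchase from the ones that don’t.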
