Infinite scrolling for personalized recommendations - database

I've recently been researching solutions that would allow me to display a personalized ranking of products in an online e-commerce store.
A natural solution for this problem would be to use a managed ML service such as
Amazon Personalize.
Based on my understanding, it can be implemented in terms of two service calls (a rough sketch follows the list):
Recommendations - returns up to ~500 products based on the user's profile
Ranking - given a list of product ids (up to ~500), reorders it according to the user's profile
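For reference, a minimal sketch of those two calls with boto3, assuming an existing Amazon Personalize deployment (the campaign ARNs and user id are placeholders, not real values):

import boto3

personalize_rt = boto3.client("personalize-runtime")

# 1) Recommendations: up to ~500 items for this user's profile.
recs = personalize_rt.get_recommendations(
    campaignArn="arn:aws:personalize:...:campaign/recommendations",  # placeholder ARN
    userId="user-123",
    numResults=500,
)
recommended_ids = [item["itemId"] for item in recs["itemList"]]

# 2) Ranking: reorder a caller-supplied list of item ids (up to ~500) for the same user.
ranked = personalize_rt.get_personalized_ranking(
    campaignArn="arn:aws:personalize:...:campaign/ranking",  # placeholder ARN
    userId="user-123",
    inputList=recommended_ids,
)
ranked_ids = [item["itemId"] for item in ranked["personalizedRanking"]]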
I was wondering whether there is an implementation / indexing strategy that would allow displaying the entire product catalog (let's assume 10k products)?
I can imagine an implementation that would:
Return 50 products per page
Call recommendations to grab the first 500 products
Alternatively, pick the top 500 products platform-wide and rerank them according to the user's profile.
For the remaining results, i.e. pages 11 to N, a database query would be executed, excluding those 500 products by id. The recommended ordering wouldn't be as relevant anymore, since the top recommendations have already been listed and the user is less likely to encounter relevant results on the 11th page. As a downside, such a query would need a relatively large array to be included as part of the query. A rough sketch of this flow follows.
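Sketched in Python; fetch_recommended_ids and query_products_excluding are placeholders for the recommendation call and the catalog query, not real APIs:

PAGE_SIZE = 50
RECS_LIMIT = 500  # the ~500-item cap of the recommendation call

def get_page(user_id, page):
    recommended = fetch_recommended_ids(user_id, limit=RECS_LIMIT)  # placeholder
    start = (page - 1) * PAGE_SIZE
    page_items = recommended[start:start + PAGE_SIZE] if start < len(recommended) else []
    if len(page_items) < PAGE_SIZE:
        # Past the recommendations: plain catalog query that excludes the
        # already-recommended ids (the large exclusion array mentioned above).
        offset = max(0, start - len(recommended))
        needed = PAGE_SIZE - len(page_items)
        page_items += query_products_excluding(recommended, offset=offset, limit=needed)  # placeholder
    return page_items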
Is there a better solution for this? I've seen many e-commerce platforms offering a "Recommended" order option for their product listing that allows infinite scrolling. Would that mean that such a store is typically using predefined ranks, i.e. an arbitrary rank assigned to each product by the content manager that is exactly the same for every user on the platform?

I don't think I've ever seen an ecommerce site that shows me 10K products without any differentiation. Most ecommerce sites use a process called "merchandising" to decide which product to show, to which customer, in which position/treatment, at which time, and in which context.
Personalized recommendations may be part of that process, but they are generally only a part. For instance, if you're browsing "Books about architecture", the fact the recommendation engine thinks you're really interested in CDs by Duran Duran is not super useful. And even within "books about architecture", there may be other attributes that are more likely to drive your buying behaviour.
Most ecommerce sites use a range of attributes to decide product ranking in a product listing page, for example "relevance", "price", "is the product on special offer?", "is the product in stock?", "do we make a big margin on this product?", "is this a best seller?", "is the supplier reliable?". Personalized recommendations are sprinkled into these factors, but the weightings are very much specific to the vendor.
Most recommendation engines will provide a "relevance score" or similar indicator. In my experience, this has a long tail distribution - a handful of products will score highly, and the rest will score very low relevancy scores. The ecommerce businesses I have worked with have a cut-off point - a score of less than x means we'll ignore the recommendation in favour of other criteria.
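As a purely illustrative sketch of that kind of blending (the attributes, weights and the 0.2 cut-off below are invented for the example, not taken from any real shop):

# Invented weights; in practice they are tuned per vendor.
WEIGHTS = {"relevance": 0.4, "in_stock": 0.2, "margin": 0.2, "on_offer": 0.1, "personalized": 0.1}
RECOMMENDATION_CUTOFF = 0.2  # ignore personalization scores below this

def listing_score(product, personalization_score):
    personalized = personalization_score if personalization_score >= RECOMMENDATION_CUTOFF else 0.0
    return (WEIGHTS["relevance"] * product["relevance"]
            + WEIGHTS["in_stock"] * (1.0 if product["in_stock"] else 0.0)
            + WEIGHTS["margin"] * product["margin"]
            + WEIGHTS["on_offer"] * (1.0 if product["on_offer"] else 0.0)
            + WEIGHTS["personalized"] * personalized)

# The listing page then sorts products by listing_score, descending.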
Again, in my experience, personalized recommendations are useful for squeezing the last few percentage points of conversion, but nowhere near as valuable as robust search, intuitive categorization, and rich meta data.
I'd suggest that if your customer has scrolled through 500 recommended products, it's unlikely the next 9,500 need to be in "recommended" order - that far down the list, your recommendation scores are probably not statistically significant.

Related

Life Time Value Model (LTV)... where do I even start?

I was recently picked to lead a longitudinal LTV model for our analytics dept. The final deliverable will be for external stakeholders, so it's essentially about how the users on our platform (can't specify the company) provide lifetime value to our external partners.
We'll be building this model from the ground up. We have nothing in place for this currently, just a sea of data (assume very generic assets, e.g. users, sign ups, user interaction with platform, etc.)
So... where do I even start? I've just been reading random docs on google for the time being. Any specific resources that are good? Are there different LTV methodologies? What's the "best" one (please take that with a grain of salt)?
I know this is an extremely broad topic so any answers even loosely related to LTV will hold significant value. Thanks all
I haven't tried anything yet. Just reading up on a few resources.
First thing you want to do is lay out the reasoning for having LTV: what it's gonna be used for and by whom. I'll give some examples, but your industry and your business will have to have it tailored to them.
Next, you have a series of meetings with all the stakeholders so that they agree on a good definition of LTV, under the tight guidance of someone who understands the data, or at least which dimensions have to influence it and what format it has to be in.
An example would be: you have an app that offers seven products. The first two products are freebies. Another requires an email to get. The fourth product is just one buck per month, the fifth costs a hundred as a one-time payment, the sixth is $20/month, and the final product is an enterprise/B2B-level solution.
An arbitrary model would be to have something like this (a code sketch follows the list):
No products (guests) => LTV = 0
Product 1 => LTV + 1
Product 2 => LTV + 1
Product 3 => LTV + 3
Product 4 => LTV + 10/month of subscription
Product 5 => LTV + 1000
Product 6 => LTV + 200/month of subscription
Product 7 => LTV + 10k/month of subscription
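Expressed as code, using the same arbitrary numbers as the list above (real values would come out of the stakeholder meetings):

# Mirrors the arbitrary model above; guests with no products stay at LTV = 0.
ONE_OFF_VALUES = {"product_1": 1, "product_2": 1, "product_3": 3, "product_5": 1000}
MONTHLY_VALUES = {"product_4": 10, "product_6": 200, "product_7": 10_000}

def ltv(owned_products, months_subscribed):
    # owned_products: set of product ids; months_subscribed: dict of product id -> months on the plan.
    total = sum(ONE_OFF_VALUES.get(p, 0) for p in owned_products)
    total += sum(MONTHLY_VALUES[p] * months
                 for p, months in months_subscribed.items() if p in MONTHLY_VALUES)
    return total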
Then the LTV stakeholders - mainly business owners and PMs - refine the model depending on what kinds of analysis they typically need conducted. That basically depends on what and how they report to their executives or the board.
This is if you want to go with a simple integer as the LTV, which is most commonly used for weighting users. Going with an integer is a very comfortable starting point since it allows for easy mathematical aggregations, which makes your user-based analysis more robust. Say you find out that 2% of your users encounter a certain issue that blocks them from navigating somewhere or finishing a process. How should it be prioritized? Should it just be ignored? Should it be addressed immediately?
Well, that depends on who those users are. If they're just free users or even just guests and the error is not blocking them from product onboarding, then it's worth getting the ticket into the backlog, but realistically it won't get released any time soon, if ever.
However, if those users are enterprise customers, then the issue not only has to be hotfixed, it has to be hotfixed immediately - probably paying overtime to the devs, QA and devops to work late today.
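A tiny sketch of using that integer LTV for exactly this kind of call, assuming you already know which users hit the issue (the threshold is invented):

HOTFIX_THRESHOLD = 50_000  # invented; pick whatever matches your business

def issue_priority(affected_user_ltvs):
    total_ltv_at_risk = sum(affected_user_ltvs)
    return "hotfix immediately" if total_ltv_at_risk >= HOTFIX_THRESHOLD else "backlog"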
Generally, LTV should be a user-level dimension. There are implementations of it at the session level, but that's way more difficult.
From the technical standpoint, LTV is most commonly implemented at the tracking stage, typically in a TMS (say, GTM) by a tracking specialist.
Another way it's implemented is in or after ETL, by the data engineers or data scientists.

Finding image duplicates for products within database workflow

I am looking for a solution to identify duplicate products from their images to speed up my product database workflow.
I accept product listings from many suppliers and have thousands of listings, totalling hundreds of thousands of images. The same product may be stocked by several suppliers. Each supplier may use the same images but with different watermarks or sizes. Each supplier may describe the product slightly differently.
On my website, I only want to list each individual product from one supplier. If I am sent a product that I already have, I want to efficiently identify the duplicate and ignore the new product.
I currently use some Regex and text searching to help me identify duplicates but it's not foolproof and is slow. I have read about hashing each image and searching that way, but my duplicate images aren't exactly the same.
NB. I am using Windows. I do not know Python or Java. I do have a range of technical knowledge but I haven't yet found anything that isn't "first become an expert in Java, then..."
Is there a Windows app or API or something out there that, given a set of images, can return back duplicates?
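For what it's worth, the usual answer to "same image, different size or watermark" is a perceptual hash rather than an exact hash: near-duplicates end up a small Hamming distance apart. A minimal Python sketch with the imagehash library, shown only to illustrate the idea since the question asks for a ready-made Windows tool:

from PIL import Image
import imagehash  # pip install pillow imagehash

def find_near_duplicates(paths, max_distance=5):
    # Perceptual hashes change only slightly under resizing or light watermarking.
    hashes = [(p, imagehash.phash(Image.open(p))) for p in paths]
    pairs = []
    for i, (p1, h1) in enumerate(hashes):
        for p2, h2 in hashes[i + 1:]:
            if h1 - h2 <= max_distance:  # Hamming distance between the two hashes
                pairs.append((p1, p2))
    return pairs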

Binarize the ratings - MovieLens dataset

I am working on a personalised news recommendation engine based on the click-behaviour of users. My features will be predefined news categories (such as politics, sports, etc.).
Whenever a user clicks on an article, I build/update the user profile based on this article, then recommend another article from the article pool.
Regarding evaluation of this system, I need a dataset which contains binary user-item interactions (whether the user clicked on the recommended article or not), but I couldn't find an appropriate dataset for this specific context. What I'm trying to do is binarize the MovieLens dataset, then calculate precision and recall.
What I actually do with the MovieLens dataset is as follows: if the rating for an item, by a user, is larger than that user's average rating, I assign it a binary rating of 1, and 0 otherwise.
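That rule is straightforward to express with pandas, assuming the usual MovieLens column names (userId, movieId, rating):

import pandas as pd

ratings = pd.read_csv("ratings.csv")  # assumed MovieLens ratings file
user_mean = ratings.groupby("userId")["rating"].transform("mean")
# 1 if the rating is above that user's own average, 0 otherwise.
ratings["clicked"] = (ratings["rating"] > user_mean).astype(int)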
Is this approach the right way to evaluate this kind of system?
Binarizing makes no difference. Precision and recall are relative, so the fact that someone rated at all is all you need. The algorithm for a "good" rating is meaningless for testing purposes.
Epinions has two datasets: one for ratings, the other binary, for trust.
Use MAP@k (mean average precision at k) for some number of recommendations. This will take account of the ranking within a group of recommendations, which is, no doubt, how they will be used.
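A small, self-contained version of MAP@k, in case it helps (this is the standard definition, not tied to any particular library):

def average_precision_at_k(recommended, relevant, k):
    # recommended: ranked list of item ids; relevant: set of item ids the user actually interacted with.
    hits, score = 0, 0.0
    for rank, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank  # precision at this cut-off
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(all_recommended, all_relevant, k):
    return sum(average_precision_at_k(r, rel, k)
               for r, rel in zip(all_recommended, all_relevant)) / len(all_recommended)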
BTW, there is already an open source recommender that does this; it allows mixing multiple events/actions/indicators and can also use content similarity. It is based on PredictionIO's framework, which is Spark based.

Filtering Functionality Similar to Ebay SQL Count Issue

I am stuck on a database problem for a client, wondering if someone could help me out. I am currently trying to implement filtering functionality so that a user can filter results after they have searched for something. We are using SQL Server 2008. I am working on an electronics e-commerce site and the database is quite large (500,000 plus records). The scenario is this - a user goes to our website, types in 'laptop' and clicks search. This brings up the first page of several thousand results. What I want to do is then
filter these results further and present the user with options such as:
Filter By Manufacturer
Dell (10,000)
Acer (2,000)
Lenovo (6,000)
Filter By Colour
Black (7000)
Silver (2000)
The main columns of the database are like this - the primary key is an integer ID
ID Title Manufacturer Colour
The key part of the question is how to get the counts in various categories in an efficient manner. The only way I currently know how to do it is with separate queries. However, should we wish to filter by further categories then this will become very slow - especially as the database grows. My current SQL is this:
select count(*) as ManufacturerCount, Manufacturer from [ProductDB.Product] GROUP BY Manufacturer;
select count(*) as ColourCount, Colour from [ProductDB.Product] GROUP BY Colour;
My question is whether I can get the results as a single table using some kind of join or union, and whether this would be faster than my current method of issuing multiple queries with the Count(*) function. Thanks for your help; if you require any further information please ask. PS: I am wondering how sites like eBay and Amazon manage to do this so fast. To understand my problem better, if you go onto eBay and type in laptop you will
see a number of filters on the left - this is basically what I am trying to achieve. I don't know how it can be done efficiently when there are many filters. E.g. to get functionality equivalent to eBay I would need about 10 queries, and I'm sure that would be slow. I was thinking of creating an intermediate table with all the counts; however, the intermediate table would have to be continuously updated to reflect changes to the database, and that would be a problem if there are multiple updates per minute. Thanks.
The "intermediate table" is exactly the way to go. I can guarantee you that no e-commerce site with substantial traffic and large number of products would do what you are suggesting on the fly at every inquiry.
If you are worried about keeping track of changes to products, just do all changes to the product catalog thru stored procs (my preferred method) or else use triggers.
One complication is how you will group things in the intermediate table. If you are only grouping on pre-defined categories and sub-categories that are built into the product hierarchy, then it's fairly easy. It sounds like you are allowing free-text search... if so, how will you manage multiple keywords that result in an unexpected intersection of different categories?

One way is to save the keywords searched along with the counts and a time stamp. Then, the next time someone searches on the same keywords, check the intermediate table: if the time stamp is older than some predetermined threshold (say, 5 minutes), return your results to a temp table, query the category counts from the temp table, overwrite the previous counts with the new time stamp, and return the whole enchilada to the web app. Otherwise, skip the temp table and just return the pre-aggregated counts and data records.

In this case, you might get some quirky front-end count behavior, like it might say "10 results" in a particular category but then when the user drills down, they actually find 9 or 11. It's happened to me on different sites as a customer and it's really not a big deal.
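That keyword-plus-timestamp scheme is essentially a read-through cache with a time-to-live. A small Python sketch of the control flow, where recompute_counts stands in for the expensive GROUP BY queries (the names and the 5-minute threshold are just placeholders):

import time

TTL_SECONDS = 5 * 60   # the "older than 5 minutes" threshold from above
_count_cache = {}      # search keywords -> (timestamp, facet counts)

def facet_counts(keywords, recompute_counts):
    cached = _count_cache.get(keywords)
    if cached and time.time() - cached[0] < TTL_SECONDS:
        return cached[1]                      # still fresh: return pre-aggregated counts
    counts = recompute_counts(keywords)       # stale or missing: run the real queries
    _count_cache[keywords] = (time.time(), counts)
    return counts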
BTW, I used to work for a well-known e-commerce company and we did things like this.

Automatic product classification and query weighting

I'm facing ranking problems using Solr and I'm stuck.
Given an e-commerce site, for the query "ipad" I obtain:
ipad case for ipad 2
ipad case
ipad connection kit
ipad 32gb wifi
This is a problem, since we want to rank the main products (the products themselves) first, and TF/IDF ranks the accessories first due to descriptions like "ipad case compatible with ipad, ipad2, ipad3, ipad retina, ipad mini, etc".
Furthermore, using the categories we have no way of determining whether an item is an accessory or a product.
I wonder if using automatic classification would help. Any other solution that improves this ranking (like Named Entity Recognition) would also be appreciated.
Could you provide tagged data?
If you have >50k items, a Naive Bayes classifier with a bigram language model trained on the product names will catch almost all accessories with 99% accuracy. I guess you can train such a Naive Bayes model with Mahout; however, product names have a pretty limited number of bigrams, so nowadays this can be trained easily and quickly even on a smartphone.
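A minimal version of that idea with scikit-learn rather than Mahout (the titles and labels below are toy examples; the real tagged data would come from your merchandisers or Mechanical Turk):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

titles = ["ipad case for ipad 2", "ipad connection kit", "ipad 32gb wifi", "ipad mini 16gb"]
labels = ["accessory", "accessory", "product", "product"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams of the product name
    MultinomialNB(),
)
model.fit(titles, labels)

print(model.predict(["ipad retina smart cover"]))  # most likely "accessory"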
This is a typical Mechanical Turk task; it shouldn't be that expensive to tag a few items. However, if you insist on a semi-supervised algorithm, I found iterative similarity aggregation pretty useful.
The main idea is that you give a few tokens like "case"/"power adapter" and it iteratively finds new tokens that are indicators of spam because they appear in the same context.
Here is the paper, but I have written a blogpost about this as well which sums up the intention in plain language. This paper also mentions the same "let the user find the right item" paradigm that Sean has proposed, so both can be used in conjunction.
Oh and if you need some advice of machine learning with Lucene&SOLR I can recommend you the talk of my friend Tommaso Teofili at ApacheCon Europe this year. You can find the slides on slideshare. There is also a youtube video of the talk out there, just search for it ;)
TF/IDF is just going to rank based on the words in the query vs words in the title as you have found. That sounds like it is not the right definition of "good result" and that you want to favor products over accessories.
Of course you can simply attach heuristics to patch the problem. For example, consider the title as a set of words, not a multiset, so the appearance of "iPad" several times makes no difference. Or just boost the score of items that you know are products. These aren't learning per se, but they're simple, directly reflect your business knowledge, and probably have some positive effect.
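A tiny sketch of those two heuristics: treat the title as a set of terms and add a flat boost for items known to be products (the boost value is arbitrary):

PRODUCT_BOOST = 2.0  # arbitrary; tune against your own data

def heuristic_score(query, title, is_product):
    query_terms = set(query.lower().split())
    title_terms = set(title.lower().split())  # a set, not a multiset: repeated "ipad" counts once
    overlap = len(query_terms & title_terms)
    return overlap + (PRODUCT_BOOST if is_product else 0.0)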
If you want to learn here, you probably need to use the one best source of knowledge about what the best results are: your users. You know what they click in response to each query. You can learn a term-item model that associates search terms to items clicked. You can view that as many types of problem -- actually a latent-factor recommender model could work well there.
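One hedged way to sketch that term-item idea: build a sparse matrix of (search term, clicked item) counts from your query logs and factorize it, then score items for a new query from the latent factors. The click_triples input and the id mappings are assumed to exist elsewhere:

from scipy.sparse import coo_matrix
from sklearn.decomposition import TruncatedSVD

def fit_term_item_model(click_triples, n_terms, n_items, n_factors=50):
    # click_triples: iterable of (term_index, item_index, click_count) from the query logs.
    rows, cols, vals = zip(*click_triples)
    clicks = coo_matrix((vals, (rows, cols)), shape=(n_terms, n_items))
    svd = TruncatedSVD(n_components=n_factors)   # n_factors must be smaller than n_items
    term_factors = svd.fit_transform(clicks)     # shape (n_terms, n_factors)
    item_factors = svd.components_.T             # shape (n_items, n_factors)
    return term_factors, item_factors

def score_items(query_term_indices, term_factors, item_factors):
    # Average the factors of the query's terms, then rank items by dot product.
    query_vec = term_factors[list(query_term_indices)].mean(axis=0)
    return item_factors @ query_vec              # higher score = better match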
Have a look at Ted's slides on how to use a recommender as a "search engine": http://www.slideshare.net/tdunning/search-as-recommendation
