Database queries vs quick response

We are making a gamification component for our forum, which is being developed in Django. We would like users to receive badges immediately after achieving certain goals, but we are concerned about the number of database queries that would generate. Take, for example, a badge that is awarded once a post reaches a certain number of views. If the badge condition is checked every time the post is viewed, that is a lot of queries. Is our only other option to check at intervals, or on another event such as the user viewing their profile? That would be worse from the user's perspective because of the delay.
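For context, the per-view check the question wants to avoid could look something like the sketch below (with a hypothetical Post model carrying a views field and a badges relation); every page view costs an update, a re-read, and a badge lookup.

from django.db.models import F

def record_view(post):
    # One UPDATE to bump the counter atomically...
    post.views = F("views") + 1
    post.save(update_fields=["views"])
    # ...one SELECT to read the new value back...
    post.refresh_from_db(fields=["views"])
    # ...and one more query to check the badge condition.
    if post.views >= 1000 and not post.badges.filter(slug="popular").exists():
        post.badges.create(slug="popular")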

There is a different approach that may suit you: use webserver logging and post-process that log to generate the stats. I have used it with Apache on projects that required pageview hit counts; a similar configuration should work for other webservers.
In this example I use django.contrib.contenttypes to record the ContentType ID and object ID of the object being accessed, but you can, of course, log anything you want.
In your Apache virtualhost conf file, add a LogFormat directive like this:
LogFormat "%{X-H-CID}o|%{X-H-OID}o" hitcounter
And then attaching it to a CustomLog:
CustomLog path/to/your/logfile.log hitcounter
This makes Apache write the values of two HTTP response headers, X-H-CID and X-H-OID, to the logfile; they carry the ContentType ID and object ID of the object being hit. From a view, add the headers to the HttpResponse:
from django.http import HttpResponse
from django.template import RequestContext
from django.template.loader import render_to_string

ctxt = RequestContext(request)
rendered = render_to_string(template, ctxt)
http_res = HttpResponse(rendered)
# Expose the IDs as response headers so Apache can log them.
http_res['X-H-CID'] = content_type_id
http_res['X-H-OID'] = object_id
Replace content_type_id and object_id with the real values from your object.
This example should write a line like:
16|4353
where 16 is the ContentType ID and 4353 the object ID. Finally, you can schedule a custom Django management command to process that logfile and perform the needed actions.
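As a minimal sketch of that custom command (assuming a hypothetical views counter field on the hit models and a check_badges hook; adjust to your own schema):

from collections import Counter

from django.contrib.contenttypes.models import ContentType
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Tally hits from the hitcounter log and award any earned badges."

    def add_arguments(self, parser):
        parser.add_argument("logfile")

    def handle(self, *args, **options):
        hits = Counter()
        with open(options["logfile"]) as fh:
            for line in fh:
                try:
                    cid, oid = line.strip().split("|")
                    hits[(int(cid), int(oid))] += 1
                except ValueError:
                    continue  # skip lines like "-|-" where no headers were set

        for (cid, oid), count in hits.items():
            model = ContentType.objects.get_for_id(cid).model_class()
            obj = model.objects.get(pk=oid)
            obj.views += count  # assumes a `views` field on the model
            obj.save(update_fields=["views"])
            # check_badges(obj) would go here: compare obj.views against the
            # badge thresholds and create the award records.

Scheduling it against a rotated copy of the logfile keeps each hit from being counted twice.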

WordPress: make categories automatically match according to an external API value

I'm managing a company website where we have to display our products. However, we do not want to handle the admin edit for this CPT, nor offer access to its form, but we do have to read some product data from the admin edit page. Everything has to be created or updated automatically via our CRM platform.
For this I have already set up a CPT (wprc_pr) and registered 6 custom hierarchical taxonomies: 1 generic one for the types (wprc_pr_type) and 5 targeting each available type: wprc_pr_rb, wprc_pr_sp, wprc_pr_pe, wprc_pr_ce and wprc_pr_pr. All these taxonomies are required for filtering purposes (this was the old way of working, maybe not the best; I'm open to suggestions here). We end up with archive page links looking like site.tld/generic/specific-parent/specific-child/, which is what is desired here.
I have an internal tool, Node.js based, to batch-create products from our CRM. The job is simple: get all products not yet pushed to the website, format a new post, push it to the WP REST API, wait for the response, update the CRM data accordingly, and proceed to the next product. It has handled about 1600 products so far in trials, each of which went fine.
The issue for now is that, in order to assign the correct terms to the new post, I have to compute the generic type and the specific child types for each product.
I handled that by creating 6 files, one for each taxonomy. Each file is basically a giant JS object with the id from the CRM as key and the term id as value. My script handles the category assignment like this:
wp_taxonomy = [jsTaxonomyMapper[crm_id1][crm_id2]] // or [] if not found
I have to say it is working pretty well, and I could stop here. But I will have to move that computation into the wp_after_insert_post hook, in order to reassign the post to the desired category on update if something changed in the CRM.
Not particularly difficult, but if I add a category in the CRM, I'll have to manually edit my mappers to add the new terms, and believe me, that's a hassle.
I'm not expecting a full solution here, just a direction. Maybe a way to compute those mappers and store their values in the options table, or a mapper class; I don't know.
Additional information:
Data from the CRM comes as integers (ids corresponding to a label) and the mappers today consist of 6 arrays (nested or not), about 600 total entries.
If you have something for me, or even suggestions to simplify the process, I'll go with it.
Thanks.
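One possible direction, sketched in Python for brevity (the same flow ports directly to the Node.js tool): rebuild the mapper at runtime from the WP REST API terms endpoints instead of hand-maintaining the files. This assumes the taxonomies are registered with show_in_rest and a rest_base matching their names, and that each term exposes its CRM id in a REST-visible meta field (hypothetical key crm_id):

import requests

WP_BASE = "https://site.tld/wp-json/wp/v2"
TAXONOMIES = ["wprc_pr_type", "wprc_pr_rb", "wprc_pr_sp",
              "wprc_pr_pe", "wprc_pr_ce", "wprc_pr_pr"]

def build_mapper():
    mapper = {}
    for tax in TAXONOMIES:
        terms, page = [], 1
        while True:
            resp = requests.get(f"{WP_BASE}/{tax}",
                                params={"per_page": 100, "page": page})
            if not resp.ok:
                break  # WP returns an error once we page past the last term
            batch = resp.json()
            if not batch:
                break
            terms.extend(batch)
            page += 1
        # Map the CRM id carried in term meta to the WP term id.
        mapper[tax] = {t["meta"]["crm_id"]: t["id"]
                       for t in terms if t.get("meta", {}).get("crm_id")}
    return mapper

Cached in the options table (or just rebuilt per run), this removes the hand-maintained files: a new CRM category then only requires creating the term with its crm_id meta.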
EDIT: Went with another approach, see comment below.

Low-resolution email preview in Salesforce for Marketing Cloud Connect emails

A question about Marketing Cloud Connect: for an Individual Email Result in Salesforce, an email preview is sent from MC to the SF object along with a thumbnail preview of the email template (as a base64-encoded image). This thumbnail has two problems:
a. it isn't clear enough for the sales agent to see what was sent to the customer;
b. the offer percentage given to the customer can't be found, because the preview shows the AMPscript expression instead of the actual value.
There is no way to change the default MCC configuration to increase the thumbnail size. To solve this and improve the image resolution, I've thought of the solutions below. Is there any other possible way that you've used?
From the customer send log, take 'view_email_url' and fetch the HTML using a Visualforce page, removing all the 'https://click…xx.com/' links to ensure click/open counts aren't impacted. Downside: the number of API calls to make is higher.
For every email, create a JPG preview of the email template, store it in MC, and record it in SF in a custom object as 'EmailName vs EmailPreviewUrl'. Whenever a marketer creates a new email, they have to make sure they create a JPG copy in MC and update the associated record in the Salesforce custom object. Downside: the sales agent will not know what offer percentage was given to the customer (the offer percentage is decided in an MC automation based on the raw order information we have about this customer). To manage this downside, we could send each customer's offer details to SF using the updateSingleSalesforceObject method every time an email is sent; to do this, all the campaigns would have to be standardized to some extent.
Any other thoughts? Are there any configs I can flip to increase the image size?
To answer this question: we ended up creating a CloudPage code resource (similar to an API) that returns the complete HTML of 'view_email_url' (aka "view as web page") for a given email + subscriber + date sent + BU. We used SSJS to query the sendLog DE with these inputs to find the view_email_url value. To avoid counting view_email_url opens as tracking opens, we wrapped the open-counter pixel inside a message-context check, something like below.
%%[
/* render the tracking pixel only when the message is not
   being viewed as a web page (VAWP) */
IF _messagecontext != 'VAWP' THEN
]%%
<custom name="opencounter" type="tracking" />
%%[ ENDIF ]%%

protoPayload vs jsonPayload in logging

The log entries of my app end up in jsonPayload, while the GAE request log entries use protoPayload. Mirroring protoPayload, I added a requestId to my logging, and it shows up in jsonPayload. However, when I use the log viewer's "Show entries from same request" action, I don't see my log entries, since the filter uses protoPayload.requestId="xyz". I tried adding an OR condition with jsonPayload.requestId="xyz", but that didn't help. Ideally I wouldn't want to manually edit the clause at all, as doing that every time would be painful. Per the following documentation, it seems the requestId in each of these payload types doesn't map to the same underlying BigQuery field:
https://cloud.google.com/logging/docs/export/bigquery
There is also a "trace" field directly on the log entry, and it is the same for all the related logs. However, there is no searchable field called trace. Doing a text search does return all the entries; while this works, the UX is again bad, as it requires first drilling down to the request log entry, copying the trace value, and then running a query.
So, are there any other options to easily tie the request log entry to the rest of the app log entries for that request?
There is a field called "trace" on the log entry, and it works; I think I was confused by the "traceId" within the protoPayload. Note that to get the "trace" field to show up with a JSON payload, the field name in your structured log must be "logging.googleapis.com/trace".
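A minimal sketch of emitting such an entry from Python (PROJECT_ID is a placeholder; the trace id is parsed from the incoming X-Cloud-Trace-Context header):

import json
import sys

def app_log(message, trace_header, severity="INFO"):
    # Header looks like "TRACE_ID/SPAN_ID;o=1"; we only need the trace id.
    trace_id = trace_header.split("/")[0]
    entry = {
        "message": message,
        "severity": severity,
        # This special key is promoted to the LogEntry's top-level trace
        # field instead of landing inside jsonPayload.
        "logging.googleapis.com/trace": f"projects/PROJECT_ID/traces/{trace_id}",
    }
    print(json.dumps(entry), file=sys.stdout)

Entries logged this way can then be filtered with trace="projects/PROJECT_ID/traces/TRACE_ID", the same value the request log entry carries.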

SuiteCommerce Advanced - Show a custom record on the PDP

I am looking to create a feature whereby a User can download any available documents related to the item from a tab on the PDP.
So far I have created a custom record called Documentation (customrecord_documentation) containing the following fields:
Related item : custrecord_documentation_related_item
Type : custrecord_documentation_type
Document : custrecord_documentation_document
Description : custrecord_documentation_description
Related Item ID : custrecord_documentation_related_item_id
The functionality works fine on the NetSuite backend, where I can assign documents to an inventory item. The stumbling block is fetching the data into the front end of the SCA webstore.
Any help on the above would be much appreciated.
I've come at this a number of ways.
One way is to create a Suitelet that returns JSON with the document names and URLs. The URLs can be the real NetSuite URLs, or they can be URLs of your Suitelet, set up so that it returns the doc when accessed with action=doc&id=_docid_ query params.
Add a target <div id="relatedDocs"></div> to the item_details.tpl
In your ItemDetailsView's init_Plugins, add:
$.getJSON('app/site/hosting/scriptlet.nl...?action=availabledoc')
    .then(function (data) {
        var asHtml = format(data); // format however you like
        $("#relatedDocs").html(asHtml);
    });
You can also go the full module route: if you create a third-party module DocsView, you would add DocsView as a child view to ItemDetailsView.
That's a little more involved, so try the option above first and see if it fits your needs. The nice thing is that you can just about ignore Backbone with this approach. You can make it a little more portable by using a service.ss instead of the Suitelet, and you can create your own SSP app for the function so you don't have to deal with SCA's URL structure.
It's been a while, but you should be able to access the JSON data from within the related Backbone view class, and from there, within the returned context, output the value you want on the PDP. Hopefully you're extending the original class and not overwriting / altering the core code :P.
The model associated with the PDP should hold all the JSON data you're looking for, accessible with Model.get('...') sort of syntax.
I'd recommend against Suitelets for this, as that's extra execution time, and is a bit slower.
I'm sure you know, but you also need to make the documents publicly available.
Hope this helps, thanks.

Overriding unwanted page size in SOLR

I have a web application that uses SOLR as backend search service.
All requests that control searches are GET requests: the user does something (types a query, chooses a page size, a sort criterion) and, after pressing the search button, a corresponding servlet on the web app calls Solr.
Now, the parameters sent to my servlet are exposed in the browser address bar; this is good because
1) each page on the webapp can be stored as a permalink
2) the search can be controlled by changing the URL directly
On top of that, for one specific parameter, the page size, I would like to impose a constraint: if the select menu on the web app proposes 3 choices (5, 10, 15), I don't want other values.
Now, I know I can do that in my servlet, but I was wondering if it is also possible on the Solr side... local params? I don't know.
Briefly: the "rows" parameter sent to Solr must be 5, 10 or 20; if a value > 20 comes in, then 20 should be applied.
Within Solr SearchHandlers you can predefine some configuration settings, as outlined in the Configuration section of the SearchHandler documentation. This lets you set the following parameter behaviors (a config sketch follows the list):
defaults - provides default param values that will be used if the param does not have a value specified at request time.
appends - provides param values that will be used in addition to any values specified at request time (or as defaults).
invariants - provides param values that will be used in spite of any values provided at request time. They are a way of letting the Solr maintainer lock down the options available to Solr clients. Any param values specified here are used regardless of what values may be specified in either the query, the "defaults", or the "appends" params.
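As a sketch, here is how those sections sit in a handler definition in solrconfig.xml (note that an invariants entry pins rows to one value for every request rather than capping it):

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- used only when the client sends no rows parameter -->
    <str name="rows">10</str>
  </lst>
  <lst name="invariants">
    <!-- would force rows=20 on every request, ignoring the client's value -->
    <str name="rows">20</str>
  </lst>
</requestHandler>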
However, these three options do not provide the specific behavior you are seeking, capping the rows value at 20. I believe the only way to enforce that within Solr is to create a custom request handler and plug it into Solr. The article Developing a Request Handler for Solr provides a good overview of the steps needed to create your own request handler.
