How Does a Service Worker Know to Update?

I'm trying to build a service worker in React to detect changes to an S3 bucket that's being served via CloudFront, but I can only get the update function to trigger when the page reloads. I guess I don't really understand how the service worker knows to update things. I disabled all caching in CloudFront to make sure there wasn't a conflict of some sort. I've followed countless tutorials and have basically resorted to copying and pasting the extremely common code you find in all of them, with the same result. I've also read many articles on the lifecycle, but I still don't understand how it knows there's an update. I'm currently using this example: https://deanhume.com/displaying-a-new-version-available-progressive-web-app/
Any help is appreciated.
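For reference, the pattern from tutorials like the one linked boils down to roughly the sketch below. The periodic registration.update() call is an addition worth noting, and likely the missing piece: the browser only re-downloads the service worker script on a navigation (i.e. a reload) or an explicit update() call, which would explain why the update only fires on refresh.

navigator.serviceWorker.register('/service-worker.js').then((registration) => {
  // Without this, the browser only checks for a new service-worker.js
  // on navigation -- hence updates only appearing after a reload.
  // The one-minute interval is arbitrary.
  setInterval(() => registration.update(), 60 * 1000);

  registration.onupdatefound = () => {
    const installingWorker = registration.installing;
    installingWorker.onstatechange = () => {
      if (installingWorker.state === 'installed' && navigator.serviceWorker.controller) {
        // A new worker is installed and waiting: show the
        // "new version available" prompt here.
        console.log('New content is available; please refresh.');
      }
    };
  };
});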

I'm not sure if this will help; I've been looking into it for some time. In Angular, there is a JSON file (ngsw.json) that contains a hash table, which looks like this:
"hashTable": {
"/index.html": "4f5c690352e4a44a0e4477aecdd8e1a8cc5ff6cf",
"/main.daf1c14bd2840a967b00.js": "3c430c33be512e07a2be37546ba347d6a4ca7f52",
"/polyfills.02f925d581acabc054de.js": "0baf1f319c628f0e0241fe2255b01713da669b06",
"/runtime.423876bdf008fc4fd61e.js": "e34e25e3c24c465898be3ca59ffbcad2ca31e9cf",
"/styles.d30753183b32d43cebfa.css": "a27cde0231bba0947197999242cc83d826f6fa1f"
},
From memory, the files contained in this hash table are the ones that trigger the update: an update is detected once one of the hashes changes.
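To make that concrete, here is a rough sketch of the idea, not Angular's actual implementation; it assumes Node.js and the ngsw.json layout shown above:

// Recompute each file's SHA-1 and compare it with the manifest entry;
// a mismatch is what marks the file as updated.
const crypto = require('crypto');
const fs = require('fs');

const manifest = JSON.parse(fs.readFileSync('ngsw.json', 'utf8'));

for (const [file, expectedHash] of Object.entries(manifest.hashTable)) {
  const contents = fs.readFileSync('.' + file); // '/index.html' -> './index.html'
  const actualHash = crypto.createHash('sha1').update(contents).digest('hex');
  if (actualHash !== expectedHash) {
    console.log(file + ' changed -> an update would be triggered');
  }
}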
I also vaguely remember that, in some configuration file, I was able to specify which files should trigger an update.
I am sorry I can't be more precise, but hopefully it will help you!

Related

How to run code in WordPress to handle the database

Whenever I have to run code against the database (changing posts, terms, or what have you), I run it in a custom page template.
Since this has worked for me up to now, I didn't think about it much. But now I need to delete a ton of terms from a custom taxonomy, and the test-page approach isn't effective: the code takes too long to run, I get 504 gateway errors, and only part of the terms get deleted.
So I'm wondering: if I need to run custom code that changes a lot of data, what is the most efficient method to use?
Many people use a plugin named Code Snippets for this. Otherwise, it's often more efficient to run direct SQL queries, using phpMyAdmin for example.

Compile HTML string in AngularJS

So, a little background on this: I have an AngularJS/Ionic project that I'm attempting to improve. I'm placing all my HTML/JS/CSS into a database table, which is retrieved via a GET when the app starts. The HTML is then saved in a factory (and also in local storage). The goal is to have an app that can be controlled from the HTML/JS/CSS within the database table, so that we don't need to redeploy new versions for simple updates/fixes. Here's what I have tried:
The most successful method so far has been the following, within the controller:
$scope.pageContent = $sce.trustAsHtml(htmlString)
and binding it in the HTML like this:
<div ng-bind-html="pageContent"></div>
The only problem is that it isn't compiling correctly: none of the $scope bindings inside the HTML are linked to the parent controller's scope.
My question is: is there a better way of handling this? I've seen a few people claiming directives are the way to go, but I can't get a definitive answer on how to implement one. Or is my current approach workable, and I'm just missing some key component? Any help on the best way to implement this would be greatly appreciated. Let me know if I left any information out. Thanks!
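For what it's worth, the directive approach people usually mention is a thin wrapper around $compile. Here's a minimal sketch; the directive name compileHtml and the module name app are assumptions:

angular.module('app').directive('compileHtml', function ($compile) {
  return {
    restrict: 'A',
    link: function (scope, element, attrs) {
      // Re-compile whenever the bound HTML string changes
      scope.$watch(attrs.compileHtml, function (html) {
        element.html(html || '');
        $compile(element.contents())(scope);
      });
    }
  };
});

Used as <div compile-html="pageContent"></div>, any directives and {{ }} bindings inside the HTML string are compiled against the surrounding scope, which is exactly what ng-bind-html does not do.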

How to move logs from DB to files in Magento EE?

I need to change where logs are saved, from the DB to files. The only way I've found is to rewrite the Mage_Log module's resource models and modify the save() function for every resource model, but as I understand it, that's not a very good approach. Please tell me how I can save logs (e.g. visitor logs) into files instead of the DB.
You'll need to override app/code/core/Mage/Log/Model/Resource/Visitor.php.
In there, you'll find several places where it writes this information into the DB, which is what makes your site that much slower.
To answer your question, you'll need to change the following method(s):
_saveUrlInfo($visitor)
_saveVisitorUrl($visitor)
_saveCustomerInfo($visitor)
_saveQuoteInfo($visitor)
For those of you who may have landed on this page looking for how to disable the logs altogether, you can find instructions here:
http://www.axertion.com/tutorials/2012/12/how-to-disable-magento-logging-to-the-database/

BigQuery UI - Datasets missing

I have a weird issue with my BigQuery UI (at https://bigquery.cloud.google.com/queries/my-project-name). I don't know why, but I see no datasets for my projects, even though I'm fully aware they exist. My code can still hit these datasets and their tables; there is just no way for me to see them.
In the UI itself, I can still query them if I type the whole query by hand, but being able to see the structure of my schemas would be helpful.
When I check the network tab in Chrome's developer tools, I notice that I receive "Failed to load resource: net::ERR_CACHE_MISS". I then did everything I could to reset my own cache: I cleared my cookies, went incognito, tried other browsers, even other computers. NOTHING brings back my datasets.
Has anyone encountered this, and do you have any ideas on how to force my cache to hit?
I had the same problem a while back; I struggled with it and eventually found a way to reset it. It seems something cached server-side is causing the incorrect cache hit. The way to reset the server-side cache is to hit a URL for a project that doesn't exist, so something like https://bigquery.cloud.google.com/queries/bogus-nonexistant-project should reset it all.
Did you recently assign a new string ID to your project that previously only had a numeric ID? If so, this is a known issue that was reported recently and that I'm still working to resolve.
The issue is that the frontend cache gets stuck with the old numeric ID for the project and our frontend JS has a bug where it errors out instead of updating the cache to contain the new string ID. LiY's workaround of going to a bogus, uncacheable URL is the suggested workaround to unstick the cache until this bug is resolved.
(And if you didn't recently assign a new string ID to your project, then I'd love to hear more details about what might have caused this issue so it won't happen to anyone else!)

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants users to be able to search for a beginning and end location and determine, using the external websites' information, how best to get there, being provided a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do this is to retrieve the original schedule info from the external site's server (via an API or some other means) and keep it in a database, which can be queried as needed. Our first thought was to contact the respective authorities to find out how/if this can be done, but that has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping", but that sounds complicated at best: downloading the web page(s) and filtering through the HTML for the relevant data to put into the database. My worry is that these sites are so static that the data isn't even kept in a database to build the page, and the page itself is updated (hard-coded) by hand when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job that periodically checks whether the page has changed from the snapshot; when it does, it sends you an email so you can update it (see the sketch below).
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a question of how much effort (cost) your client is willing to bear for accuracy.
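As a rough illustration of the snapshot idea, storing a content hash rather than the full page (Node.js 18+ assumed for the built-in fetch; the URL and the email step are placeholders):

const crypto = require('crypto');
const fs = require('fs');

const PAGE_URL = 'https://example.com/train-schedule'; // placeholder
const SNAPSHOT_FILE = 'snapshot.sha1';

async function checkForChanges() {
  const html = await (await fetch(PAGE_URL)).text();
  const hash = crypto.createHash('sha1').update(html).digest('hex');
  const previous = fs.existsSync(SNAPSHOT_FILE)
    ? fs.readFileSync(SNAPSHOT_FILE, 'utf8')
    : null;
  if (previous && previous !== hash) {
    // The page changed since the last snapshot: send yourself
    // an email here and re-transcribe the data.
    console.log('Page changed; review and update the database.');
  }
  fs.writeFileSync(SNAPSHOT_FILE, hash);
}

// Run periodically, e.g. daily via cron.
checkForChanges();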
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On that site, I use a two-day window, so I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for examples: there is some simplified source code at http://www.buscatchers.com/about/guide, and the full source code for the project is at https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamically generated; it's too well structured to be hand-coded. It's not hard for someone who is familiar with XPath to scrape this site.
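For instance, a minimal XPath-based scrape in Node.js might look like the following, using the xpath and xmldom packages; the //table//tr selector is a guess at the schedule markup, not taken from the real page:

const xpath = require('xpath');
const { DOMParser } = require('xmldom');

async function scrapeSchedule(url) {
  const html = await (await fetch(url)).text();
  // xmldom is strict about malformed markup, so errors are silenced here;
  // real-world HTML may need a more tolerant parser.
  const doc = new DOMParser({ errorHandler: function () {} })
    .parseFromString(html, 'text/html');
  const rows = xpath.select('//table//tr', doc);
  return rows.map(function (row) { return row.textContent.trim(); });
}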
