Consecutive XMLHttpRequests seem to block on Google App Engine

I am working on an application on Google App Engine. Roughly this is what I do:
The user screen is split into two parts (actually three, but let's leave that out for now). The left part (which takes up to 75% of the screen) shows a document with some words highlighted. When one of these highlighted words is clicked, the right part displays its various meanings, example usage, etc. The way this works is that clicking a word sends an XMLHttpRequest to the server, where the sample usage(s)/meaning(s) are retrieved from the datastore. This data is returned and displayed.
My problem:
After I click on a few words consecutively, the application seems to "hang": say I click on 5 words in quick succession; clicking on the 6th word (or any word after that) doesn't replace the info about the 5th word on my right panel.
Since some datastore properties (at least single-valued ones) are indexed by default, I'm guessing retrieval is not the bottleneck here. It is probably the requests.
Is such an issue known with the GAE? Any workarounds possible?
Kind of in a soup with this - the application was supposed to go live today. Urgent help required!
Thanks! :)

You're probably being limited to two simultaneous requests per host by your browser, not by App Engine. If you click on a third link before the first two have had a chance to return, make sure your app can deal with responses returning for words that should no longer be displayed.
If you were hitting a limit on App Engine, you'd see exceptions in your server logs. If you're not seeing those exceptions, it's probably a client-side issue.
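To illustrate the first point: below is a minimal sketch of the usual guard against stale responses, written in Python for readability even though in the real app the bookkeeping would live in the browser-side XHR callback. All names here (fetch_meanings_async, render_right_panel, MeaningPanel) are hypothetical stand-ins, not code from the question.

def fetch_meanings_async(word, callback):
    # hypothetical stand-in for the real XHR call; invokes the callback
    # immediately so the sketch is runnable on its own
    callback('meanings of %r' % word)

def render_right_panel(data):
    print data

class MeaningPanel(object):
    def __init__(self):
        self.latest_request_id = 0

    def on_word_clicked(self, word):
        self.latest_request_id += 1
        request_id = self.latest_request_id  # id of this click
        fetch_meanings_async(
            word, lambda data: self.on_response(request_id, data))

    def on_response(self, request_id, data):
        # Drop responses belonging to a click that is no longer current.
        if request_id != self.latest_request_id:
            return
        render_right_panel(data)

The idea is simply that each click bumps a counter, and a response is rendered only if its id still matches the latest click.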

Sorry for the late ack (for some reason I received the notification for the responses a day late, by which time we had managed to fix a few things). It does look like the problem was at the data end: our code was doing some inserts, and it turns out you can't do too many of them in quick succession - the logs reported a transaction time-out error. The reason we couldn't spot it earlier in the logs was that we were simply writing too much info out, and the error was buried somewhere in the noise.
The clicks on the user side were pulling data from this table.
Unfortunately, the GAE development server doesn't simulate this timeout error - so even though we had tested with comparable volumes of data before deployment, the error never occurred during development.
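(For anyone who hits the same error, here is a minimal sketch of the batching approach that avoids many rapid sequential writes. ndb.put_multi is the real API; the WordMeaning model is a hypothetical stand-in for the table mentioned above, not the actual fix from this thread.)

from google.appengine.ext import ndb

class WordMeaning(ndb.Model):
    # hypothetical model standing in for the table mentioned above
    word = ndb.StringProperty()
    meaning = ndb.TextProperty()

def save_meanings(rows):
    entities = [WordMeaning(word=w, meaning=m) for w, m in rows]
    ndb.put_multi(entities)  # one batched RPC instead of len(rows) puts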
Thanks again for your responses!
And yet again, I apologize for responding late.

Related

NDB query().iter() of 1000<n<1500 entities is wigging out

I have a script that, using the Remote API, iterates through all entities for a few models. Let's say two models, called FooModel with about 200 entities, and BarModel with about 1200 entities. Each has 15 StringProperty fields.
for model in [FooModel, BarModel]:
    print 'Downloading {}'.format(model.__name__)
    new_items_iter = model.query().iter()
    new_items = [i.to_dict() for i in new_items_iter]
    print new_items
When I run this in my console, it hangs for a while after printing 'Downloading BarModel'. It hangs until I hit ctrl+C, at which point it prints the downloaded list of items.
When this is run in a Jenkins job, there's no one to press ctrl+C, so it just runs continuously (last night it ran for 6 hours before something, presumably Jenkins, killed it). Datastore activity logs reveal that the datastore was serving 5.5 API calls per second for the entire 6 hours, racking up a few dollars in GAE usage charges in the meantime.
Why is this happening? What's with the weird behavior of ctrl+C? Why is the iterator not finishing?
This is a known issue currently being tracked on the Google App Engine public issue tracker under Issue 12908. The issue was forwarded to the engineering team and progress on this issue will be discussed on said thread. Should this be affecting you, please star the issue to receive updates.
In short, the issue appears to be with the remote_api script: when querying entities of a given kind with an explicit batch_size, it hangs once 1001 + batch_size entities have been fetched. This does not happen in production outside of the remote_api.
Possible workarounds
Using the remote_api
One could limit the number of entities fetched per script execution using the limit argument for queries. This may be somewhat tedious, but the script could simply be executed repeatedly from another script to essentially the same effect.
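Equivalently, one can page through the results with cursors so that no single query streams past the point where the remote_api hangs. A hedged sketch using ndb's fetch_page (the page size of 500 is an arbitrary choice):

def fetch_all(model, page_size=500):
    items = []
    page, cursor, more = model.query().fetch_page(page_size)
    while True:
        items.extend(entity.to_dict() for entity in page)
        if not more:
            break
        # resume from the cursor of the previous page
        page, cursor, more = model.query().fetch_page(
            page_size, start_cursor=cursor)
    return items

# e.g. new_items = fetch_all(BarModel)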
Using admin URLs
For repeated operations, it may be worthwhile to build a web UI accessible only to admins. This can be done with the help of the users module as shown here. This is not really practical for a one-time task, but it is far more robust for regular maintenance tasks. As this does not use the remote_api at all, one would not encounter this bug.
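A rough sketch of such an admin-only handler, assuming webapp2 (the route name and the body of the task are placeholders):

import webapp2
from google.appengine.api import users

class MaintenanceHandler(webapp2.RequestHandler):
    def get(self):
        if not users.is_current_user_admin():
            self.abort(403)  # admins only
        # ... run the export/maintenance work here ...
        self.response.write('done')

app = webapp2.WSGIApplication([('/admin/maintenance', MaintenanceHandler)])

Alternatively, the handler can be restricted declaratively with login: admin in app.yaml.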

Will Gatling actually perform the operation, or will it only check the URLs' response times?

I have a Gatling test for an application in which users answer a survey; upon completion of the survey, the application identifies possible answers that may pose a risk and creates what we call riskareas. These riskareas are normally created in the background as soon as the survey answering is finished. My question: I have a Gatling test with ten users who will answer the survey and log out (I used the recorder to record the test). After these ten users are finished, I do not see any riskareas being created in the application. Am I missing something? Should the survey really be answered by a Gatling user (as it is in Selenium), or is it just the URLs that the Gatling test will touch?
I am new to Gatling, please help.
Gatling should be indistinguishable from a user in a web browser (or Selenium) as far as the server is concerned, so the end result should be exactly the same as if you'd gone through the process yourself. However, writing a Gatling script is a little more work than writing a Selenium script.
For performance reasons, Gatling operates at a lower level than Selenium. Gatling works with the actual data that is sent to and received from the server (i.e., the actual GETs and POSTs), rather than with user-level interactions (such as clicking links and filling in forms).
The recorder will generally produce a relatively "dumb" script. It records the exact data that was sent to the server and makes no attempt to account for things that may change from run to run. For example, the web application you are testing might have hidden form fields that contain session information, or the link addresses might contain a unique identifier or a session id.
This means that your script may not be doing what you think it's doing.
To debug the script, the first thing to do is to add checks on each of the requests, to validate that you are getting the response you expect (for example, check that when you submit page 1 of the survey, you are taken to page 2 - check for something that you'd only expect to find on page 2, like a specific question).
Once you know which requests are failing, look at what data was sent with the request, and try to figure out where it came from. You will probably find that there are session ids, view state, or similar, that must be extracted from the previous page.
It will help to enable request and response logging, as per the documentation.
To simplify testing of web apps, we wrote some helper functions to allow tests to be written in a more Selenium-like way. Once you understand what your application is doing, you may find that it simplifies scripting for you too. However, understanding why your current script doesn't work the way you expect should be your first step.

How to ensure that a bot/scraper does not get blocked

I coded a simple scraper whose job is to go to several different pages of a site, do some parsing, call some URLs that are otherwise called via AJAX, and store the data in a database.
Trouble is, sometimes my IP is blocked after my scraper executes. What steps can I take so that my IP does not get blocked? Are there any recommended practices? I have added a 5-second gap between requests, to almost no effect. The site is medium-sized (I need to scrape several URLs) and my internet connection is slow, so the script runs for over an hour. Would being on a faster net connection (like on a hosting service) help?
Basically I want to code a well behaved bot.
Lastly, I am not POSTing or spamming.
Edit: I think I'll break my script into 4-5 parts and run them at different times of the day.
You could use rotating proxies, but that wouldn't be a very well behaved bot. Have you looked at the site's robots.txt?
Write your bot so that it is more polite, i.e. don't sequentially fetch everything, but add delays in strategic places.
Following the guidelines set in robots.txt is a good first step. There are hosted scraping tools such as import.io and morph.io, and there are also packages/plugins for servers; for example x-ray, a Node.js package with options that assist in quickly writing responsible scrapers, e.g. throttling, delays, max connections, etc.
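A minimal Python sketch of the same ideas, checking robots.txt and keeping a fixed gap between requests (the base URL, user agent string, and delay are placeholders):

import time
import urllib2
import robotparser  # urllib.robotparser in Python 3

BASE = 'http://example.com'
USER_AGENT = 'MyWellBehavedBot/1.0'
DELAY_SECONDS = 5  # fixed gap between requests

rp = robotparser.RobotFileParser(BASE + '/robots.txt')
rp.read()

def polite_fetch(url):
    if not rp.can_fetch(USER_AGENT, url):
        return None  # robots.txt disallows this path
    request = urllib2.Request(url, headers={'User-Agent': USER_AGENT})
    body = urllib2.urlopen(request).read()
    time.sleep(DELAY_SECONDS)
    return body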

Precise tracing of IIS and/or SQL

I am experiencing a performance bottle-neck in this website: http://oceanosdecolor.es/ and I'm not able to find it. If you try, you'll see any page (for example, homepage) takes a long time to load.
The first time you execute a page, the site reloads to detect client device, but that's only the first time and then it keeps client device so it doesn't reload again.
I log traces of the execution to a database, but I don't get useful information: according to the log, the whole homepage executes within the same second, yet I can see that the homepage takes longer than that to load.
The IIS log (when testing locally) doesn't help either, as it also reports times in seconds rather than milliseconds, and again it says everything happens in the same second; in any case, running locally is much faster than on the server.
So, I ask for help in any tool to monitor performance with more accuracy or any technique I could use.
Thank you
I think your answer might not lie with IIS or SQL Server. According to the Developer Tools in Chrome, your actual page execution and sending out the HTML takes 400ms on first load from my location. The problem is you have a tangle of CSS files (many of which are not being found and causing extremely long delays). Also you have a lot of requests.
I would install Yahoo's YSlow for your favourite browser. This will give you a whole bunch of recommendations for what is running slow on your site from an end-user perspective.
To use the Developer Tools on Chrome: right click on your page, hit "Inspect Element" and then go to the "Network" tab and then hard-refresh your browser (shift-F5).
A few of the problems I see are: commun.css (2.5 seconds and failed), layout.css (400ms and failed), jquery-ui-1.8.10.custom.min.js (800ms and failed).
Find the reason these are failing and fix it, and I'm sure your site will load faster. Also try using CSS image sprites wherever possible to cut down on the number of requests.

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, being provided a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external site's server (via API or some other means) and retain the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven to be problematic due to the language barrier, mainly.
My client suggested what is basically "screen scraping", but that sounds complicated at best: downloading the web page(s) and filtering through the HTML for the relevant/necessary data to put into the database. My worry is that the info on these mainly static sites is so static that the data isn't even kept in a database to build the pages; the pages themselves may simply be updated by hand (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job that periodically checks whether the page has changed from the snapshot; when it does, it sends an email for you to update it (see the sketch below).
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a case of how much effort (cost) your client is willing to bear for accuracy.
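A hedged sketch of that snapshot-and-notify job; the URL, snapshot file path, and email addresses are placeholders:

import hashlib
import smtplib
import urllib2
from email.mime.text import MIMEText

URL = 'http://example.com/train-schedule'  # placeholder
SNAPSHOT_FILE = 'schedule.sha1'

def check_for_changes():
    page = urllib2.urlopen(URL).read()
    digest = hashlib.sha1(page).hexdigest()
    try:
        old_digest = open(SNAPSHOT_FILE).read()
    except IOError:
        old_digest = None  # first run: no snapshot yet
    if digest != old_digest:
        open(SNAPSHOT_FILE, 'w').write(digest)
        msg = MIMEText('The page at %s changed; update the database.' % URL)
        msg['Subject'] = 'Schedule page changed'
        msg['From'] = 'bot@example.com'
        msg['To'] = 'you@example.com'
        smtplib.SMTP('localhost').sendmail(
            'bot@example.com', ['you@example.com'], msg.as_string())

Run it from cron (or a scheduled task) at whatever interval suits how often the schedules actually change.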
I have done this for the following site: http://www.buscatchers.com/ so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On the site, I use a two-day window so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for some examples. There is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is generated dynamically: it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
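For instance, a short sketch of pulling a table apart with XPath via lxml (the URL and the XPath expressions are placeholders, not taken from the project above):

import urllib2
from lxml import html

page = urllib2.urlopen('http://example.com/timetable').read()
tree = html.fromstring(page)

# e.g. pull every row of a schedule table
for row in tree.xpath('//table[@class="schedule"]//tr'):
    cells = [cell.text_content().strip() for cell in row.xpath('./td')]
    print cells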
