Display loading screen while server is generating page - database

I have an ASP script that generates page content in response to some GET parameters.
Sometimes the page generation takes a while (running database queries, etc.), and I'd like to display something to the user while the page is loading. What is the standard way of doing this?
I'm not using AJAX on the page at the moment.

Is there a reason you're not using AJAX? I had a similar problem at an internship last summer. At first I decided to ignore AJAX, partly because I was being lazy and didn't want to learn JavaScript/AJAX. However, it became increasingly obvious that without AJAX the user experience was significantly hampered (due to the same sort of thing you're talking about here: a longish server-side operation).
If you're in a position to "AJAXify" your application, you could show a loading image when the request is first made and then replace it with the returned content when the asynchronous call completes. jQuery makes this kind of thing pretty easy with its various AJAX facilities and callback functions.
Of course, you're probably already aware of all of this... so please forgive me if I'm just restating the obvious!

You can use Response.Flush to force output to the browser before the page has finished:
Response.Write("<div id=""preloader"">Loading, please wait...</div>")
Response.Flush()
'long running code...
'long running code...
'long running code...
Response.Write("<script type=""text/javascript"">document.getElementById(""preloader"").style.display = ""none"";</script>")

Related

How to run code in WordPress to handle the database

Whenever I have to run code against the database, change posts, or terms, or whatever, I run it on a custom page template.
Since this has worked for me up to now, I didn't think much about it. But now I need to delete a ton of terms from a custom taxonomy, and I can't do it very effectively on the test page: I keep getting 504 gateway errors because the code takes too long to run and only deletes part of the terms.
So I'm wondering: if I need to run custom code to change a lot of data, what is the most efficient method to use?
Many people use a plugin named Code Snippets for this. Otherwise it's often more efficient to run direct SQL queries, for example through phpMyAdmin.

Saving code in a database, what pitfalls should I be careful about?

I am designing a system which takes user-submitted code and saves it in a database. The code can be in any language: Ruby, Python, Elixir, JavaScript, etc. There's no restriction on language. Code saved in the database is never meant to be run; it will be displayed in a blog article or converted into a file for download. Similar examples are GitHub Gist or Cacher, both of which take user-submitted code and display it on a website.
How do I make sure user-submitted code is sanitised and secure to display on a web page with a code highlighter?
What processing do I need to do on the code so that I can safely display it? I don't want to impose strict restrictions on users.
Any gotchas I need to be aware of?
Any idea how those websites implement this feature?
I am using Elixir and the Phoenix framework. Are there any pitfalls I should be careful about? I am thinking of using the Phoenix.HTML module to escape the code. I just want to be sure that my approach doesn't have known loopholes.
I think you are looking for this: https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
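For display-only snippets, the usual approach is exactly what you describe: HTML-escape the submitted text before it goes into the page (Phoenix.HTML.html_escape does this in Elixir), and let a client-side highlighter style the escaped text. Here is a minimal sketch of the idea, written in Python purely for illustration; the function name and sample snippet are made up:

import html

def render_snippet(user_code):
    # Treat the snippet purely as text: <, >, & and quotes become entities,
    # so nothing in it can execute as markup when rendered.
    escaped = html.escape(user_code, quote=True)
    # Wrap in <pre><code> so whitespace and line breaks are preserved;
    # a JavaScript highlighter can then colour the text in the browser.
    return "<pre><code>" + escaped + "</code></pre>"

print(render_snippet('<script>alert("xss")</script>'))
# -> <pre><code>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</code></pre>

The main gotcha is to escape at output time (or store the raw code and escape on render), so you never end up double-escaping or, worse, trusting stored text as HTML.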

Will Gatling actually perform the operation, or will it only check the URLs' response time?

I have a Gatling test for an application that answers a survey; upon answering the survey, the application identifies answers that may pose a risk and creates what we call riskareas. These riskareas are normally created in the background as soon as the survey answering is finished. My question: I have a Gatling test with ten users who answer the survey and log out, which I recorded with the Recorder. After these ten users finish, I do not see any riskareas being created in the application. Am I missing something? Should the survey really be answered by the Gatling user (the way it is with Selenium), or does the Gatling test only touch the URLs?
I am new to Gatling, please help.
Gatling should be indistinguishable from a user in a web browser (or Selenium) as far as the server is concerned, so the end result should be exactly the same as if you'd gone through the process yourself. However, writing a Gatling script is a little more work than writing a Selenium script.
For performance reasons, Gatling operates at a lower level than Selenium. Gatling works with the actual data that is sent to and received from the server (i.e., the actual GETs and POSTs), rather than with user-level interactions such as clicking links and filling in forms.
The recorder will generally produce a relatively "dumb" script. It records the exact data that was sent to the server and makes no attempt to account for things that may change from run to run. For example, the web application you are testing might have hidden form fields that contain session information, or the link addresses might contain a unique identifier or a session id.
This means that your script may not be doing what you think it's doing.
To debug the script, the first thing to do is to add checks on each of the requests, to validate that you are getting the response you expect (for example, check that when you submit page 1 of the survey, you are taken to page 2 - check for something that you'd only expect to find on page 2, like a specific question).
Once you know which requests are failing, look at what data was sent with the request, and try to figure out where it came from. You will probably find that there are session ids, view state, or similar, that must be extracted from the previous page.
It will help to enable request and response logging, as per the documentation.
To simplify testing of web apps, we wrote some helper functions to allow tests to be written in a more Selenium-like way. Once you understand what your application is doing, you may find that it simplifies scripting for you too. However, understanding why your current script doesn't work the way you expect should be your first step.

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus and train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants users to be able to search for a start and end location and, using the external websites' information, determine how best to get there, being given a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external sites' servers (via an API or some other means) and keep it in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but that has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping": downloading the web page(s) and filtering through the HTML for the relevant data to put into the database. But that sounds complicated at best, and my worry is that the info on these sites is so static that the data isn't even kept in a database to build the page; the page itself may be hard-coded and updated by hand when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data into your database manually. If you wanted to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job that periodically checks whether the page has changed from the snapshot. When it does, it sends you an email so you can update the data.
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a question of how much effort (cost) your client is willing to bear for accuracy.
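Here is a minimal sketch of that snapshot-and-check job, written in Python for illustration; the URL and snapshot file name are placeholders:

import hashlib
import urllib.request

URL = "https://example.com/train-schedule"   # placeholder for the schedule page
SNAPSHOT_FILE = "schedule.sha256"            # hash stored when the data was last transcribed

def page_hash(url):
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

current = page_hash(URL)
try:
    with open(SNAPSHOT_FILE) as f:
        previous = f.read().strip()
except FileNotFoundError:
    previous = ""

if current != previous:
    # In a real job you would send an email here rather than print.
    print("Schedule page has changed - review it and update the database.")
    with open(SNAPSHOT_FILE, "w") as f:
        f.write(current)

Note that hashing the whole page will also flag trivial changes (ads, dates in a footer), so in practice you may want to hash only the portion of the HTML that contains the schedule.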
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On that site I use a two-day window, so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
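To give a feel for the XPath approach, here is a rough sketch in Python using requests and lxml; the URL and the XPath expressions are placeholders that would have to be adapted to the schedule page's actual markup:

import requests
from lxml import html

URL = "https://example.com/train-schedule"   # placeholder; use the real schedule page

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
tree = html.fromstring(resp.content)

# Placeholder XPath: assumes each schedule entry is a row in an HTML table.
for row in tree.xpath("//table//tr"):
    cells = [cell.text_content().strip() for cell in row.xpath("./td")]
    if cells:
        print(cells)   # in practice, insert into your database instead of printing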

Best screen scraper, simple html dom or snoopy?

Which one is better for screen scraping: Simple HTML DOM or Snoopy?
I use Simple HTML DOM and find it comfortable.
Does Snoopy have any advantage over Simple HTML DOM?
My requirements: I want to scrape content from a page (after login).
Simple HTML DOM is easy, but it takes a lot of time to print the results.
Is Snoopy that well-known/mature a package?
If it's not, then all other things being equal, I'd probably go with generic HTML DOM code - especially if the scraping is somewhat simple.
But only you know when your code is starting to get too big, unmanageable, etc., at which point it might be better to look at another tool out there like Snoopy.
(Which, admittedly, I don't have experience with; it's apparently at http://sourceforge.net/projects/snoopy/ for those not familiar with it - "Snoopy is a PHP class that simulates a web browser. It automates the task of retrieving web page content and posting forms, for example.")
The real reason I'm posting, even though I don't know Snoopy per se and thus can't definitively answer your question, is to ask if you've considered using Selenium (http://www.seleniumhq.org/) instead of Snoopy.
Selenium is a fairly well-known testing tool, and it occurred to me that one of the nice things about using it for what you're doing (if you can) is that it has testing built in.
The reason that's good is that screen scraping is kind of an inherently brittle task - if the target site changes something, blam, your scraping fails. So it's kind of a nice design to have an automated scrape/test-that-scraping-worked system.
Something to think about, anyway.
I've stumbled upon BeautifulSoup, which is Python-based. I suppose there are a bunch of others too.
Looks like Snoopy is PHP-based, and hence can be run server-side only. Is this what you are really looking for? What are your requirements? Please elaborate on that.
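If you do go the Python/BeautifulSoup route, the "scrape a page after login" requirement mostly comes down to keeping the session cookie between requests. A rough sketch; the URLs and form field names are made up and would need to match the real login form:

import requests
from bs4 import BeautifulSoup

LOGIN_URL = "https://example.com/login"      # placeholder
TARGET_URL = "https://example.com/members"   # placeholder: a page only visible after login

with requests.Session() as session:          # the Session object keeps cookies across requests
    # The field names "username" and "password" are assumptions; inspect the real form.
    session.post(LOGIN_URL, data={"username": "me", "password": "secret"})
    page = session.get(TARGET_URL)
    soup = BeautifulSoup(page.text, "html.parser")
    for link in soup.find_all("a"):
        print(link.get("href"), link.get_text(strip=True))

The same idea applies with Snoopy or Simple HTML DOM on the PHP side: submit the login form once, reuse the cookies, then parse the pages you fetch.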
