Creating and serving temporary HTML files in Azure

We have an application that we would like to migrate to Azure for scale. There is one place that concerns me before starting, however:
We have a web page that the user is directed to. The code-behind on the page goes out to the database and generates an HTML report. The new HTML document is placed in a temporary file along with a bunch of charts and other images. The user is then redirected to this new page.
In Azure, we can never be sure that the user is going to be directed to the same machine for multiple reasons: the Azure load balancer may push the user out to a different machine based on capacity, or the machine may be deprovisioned because of a problem, or whatever.
Because these are only temporary files that get created and deleted very frequently, I would optimally like to just point my application's temp directory to some kind of shared drive that all the web roles have read/write access to, and then be able to map a URL to this shared drive. Is that possible? Or is this going to be more complicated than I would like?
I can still have every instance write to its own local temp directory as well. It only takes a second or two to serve them, so I'm OK with the risk that the instance goes down during that brief window. The question in this regard is whether the redirect to the temp HTML file will use HTTP/1.1 keep-alive and maintain the connection to that specific instance.
thanks,
jasen

There are two things you might want to look at:
Use Windows Azure Web Sites, which supports a kind of distributed filesystem (based on blob storage). Files you store "locally" in your Windows Azure Web Site will be available from each server hosting that Web Site (if you use multiple instances).
Serve the files from Blob Storage. Instead of saving the HTML files locally on each instance (or trying to make users stick to a specific instance), simply store them in Blob Storage and redirect your users there (see the sketch after this list).
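For illustration, a minimal sketch of that Blob Storage pattern using the Python SDK (azure-storage-blob); the container name, connection string, and report id are placeholders, not from the original setup:

    # Hypothetical sketch: write the generated report to Blob Storage and
    # redirect the user to the blob URL instead of a local temp file.
    from azure.storage.blob import BlobServiceClient, ContentSettings

    def publish_report(connection_string: str, report_id: str, html: str) -> str:
        service = BlobServiceClient.from_connection_string(connection_string)
        blob = service.get_blob_client(container="reports", blob=report_id + ".html")
        blob.upload_blob(
            html,
            overwrite=True,
            content_settings=ContentSettings(content_type="text/html"),
        )
        return blob.url  # redirect the user here

Since the reports are short-lived, they can be cleaned up afterwards with a periodic delete job or a lifecycle rule on the container.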

Good stuff from @Sandrino. A few more ideas:
Store the resulting HTML in In-Role Cache (which can be co-located in your web role instances) and serve the HTML from the cache (shared across all instances).
Take advantage of the CDN. You can map a "CDN" folder to the actual edge cache, so you generate the HTML in code once, and then it's cached until TTL expiry, at which point you must generate the content again (the pattern is sketched below).
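In-Role Cache itself is a .NET API, but the generate-once-and-serve-until-TTL-expiry idea looks roughly like this (an illustrative Python sketch, not the actual cache client):

    import time

    _cache = {}          # report id -> (generated_at, html)
    TTL_SECONDS = 300    # hypothetical TTL

    def get_report_html(report_id, generate):
        now = time.time()
        entry = _cache.get(report_id)
        if entry and now - entry[0] < TTL_SECONDS:
            return entry[1]            # still fresh: serve the cached copy
        html = generate(report_id)     # expired or missing: regenerate
        _cache[report_id] = (now, html)
        return html

A plain in-process dict like this is per-instance, of course; the point of In-Role Cache or the CDN is that the cached copy is shared across instances or pushed to the edge.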

I think Azure Blob Storage is the best place to store your HTML files, which can then be accessed by multiple instances. You can redirect the user to the blob content, or you can write a custom page to render content from the blob.

Related

Should I store my PDFs for download in the frontend or backend?

I am creating a homepage with a download center for PDFs.
Is it better to store them in the frontend or in the backend, or is this irrelevant?
Edit for the comment:
-> it will be a website planned for 3,000 users per month
-> not sure what PWA is, but the page gets a responsive design for computers and smartphones
-> the PDFs should be accessible to everyone, like a manual or sample contracts
Given that the information needs to be accessible to different users, I would say it has to be stored on a server and managed by the backend, which is equivalent to saying "it should be in the backend" in your terms. That is the simple answer, but as far as I can see, two questions arise here:
How to store the data
Once it is established that the data resides on the backend, you have to choose between storing the PDFs in the file system (with the backend serving the files statically, as sketched below) or storing them as BLOBs in the database. Both have their advantages and drawbacks; more information here.
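As a rough illustration of the file-system option, the backend can serve the PDFs statically in a few lines; this sketch assumes Flask and a hypothetical pdfs/ directory:

    from flask import Flask, send_from_directory

    app = Flask(__name__)

    @app.route("/pdfs/<path:filename>")
    def download_pdf(filename):
        # send_from_directory refuses paths that escape the pdfs/ directory
        return send_from_directory("pdfs", filename, as_attachment=True)

    if __name__ == "__main__":
        app.run()
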
Should it be accessible offline
If the user needs to access some information while offline, then you would have to store those PDFs on their device. Another reason to do that would be if the PDFs are very large and don't change often, but could be fetched by a user multiple times a day, and you don't want to keep the backend busy serving the same file every time.

Where to put SQLite database file in Azure App Service?

Q1: Where do you think is the right place to put a SQLite database file (database.sqlite) in Azure Web App file system? For example:
D:\home\data\database.sqlite
D:\home\site\database.sqlite
D:\home\site\wwwroot\database.sqlite
other?
Q2: What else should be taken into consideration in order to make sure that the database file won't be accessible to public users as well as not being accidentally overwritten during deployments or when the app is scaled up/down? (The Web App is configured for deployments from a Local Git Repository)
Q3: Where to learn more about the file system used in Azure App Service, the official source URL? E.g. how it's shared between multiple VMs within a single Web App, how does it work when the App is scaled up/down, what's the difference between D:\home (persistent) vs D:\local (non-persistent)...
Note that SQLite does not work in Azure Blob Storage, so that one is not an option. Please don't suggest alternative storage solutions; this question is specifically about SQLite.
References
Appropriate Uses For SQLite
In a Web App, your app is deployed to d:\home\site\wwwroot. This is the area where you may write files. As an example, the Ghost deployment writes its SQLite database to d:\home\site\wwwroot\content\data\ghost.db (this is easy to see if you open up the Kudu console via yourapp.scm.azurewebsites.net).
This file area is shared amongst your web app instances. It is similar to an SMB file share, but specific to web apps (and different from Azure's File Service).
The content under wwwroot is durable, unless you delete your app service. Scaling up/down impacts the amount of space available. (I have no idea what happens if you scale down and the smaller size has less disk space than what you're consuming already).
I would say the best location would be an app_data folder under site/wwwroot. Create the folder if it doesn't exist.
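A sketch of that layout in Python, assuming the HOME environment variable that App Service sets (D:\home on Windows plans); the app_data folder name is a convention, created here if missing:

    import os
    import sqlite3

    # On ASP.NET stacks a folder named App_Data is additionally blocked
    # from being served to the public.
    home = os.environ.get("HOME", r"D:\home")
    data_dir = os.path.join(home, "site", "wwwroot", "app_data")
    os.makedirs(data_dir, exist_ok=True)

    conn = sqlite3.connect(os.path.join(data_dir, "database.sqlite"))
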
Web Apps can connect to storage accounts, so you can in fact use blob storage and connect it to your web app. In terms of learning more about it, you need to look at the appropriate page of the documentation.
In your Web App settings you can then select which storage account to use. You can find this under Settings > Data Connections, where you can select Storage from the drop-down box.

GWT how to store information on google App Engine?

In my GWT application, a 'root' user uploads a specific text file with data, and that data should be available to anyone who has access to the app (using GAE).
What's the classic way to store data that will be available to all users? I don't want to use any database (Objectify!?) since this is a relatively small amount of information, and it is changed from time to time by root.
I was wondering if there is some static map at the 'engine level' (not the user's session) where this info can be stored (and if the server goes down, no biggie, root will upload it again).
Thanks
You have three primary options:
Add this file to your /war/ directory and deploy with the app. This is what we typically do with all static files that rarely change (like .css files, images, etc.). This file will be available to all users, whether they are authenticated or not.
Add this file to your /war/WEB-INF/ directory and deploy with the app. This file will be available to your server-side code, so you can read it on the server side and show it to a user. This way you can decide which users can see this file and which users should not have access to it.
Upload this file to Google Cloud Storage. You can do it through an app, or you can simply upload it manually to a bucket using a GCS console or gsutil command-line tool. Then you simply provide a link to your users. The advantage of this option is that you do not have to redeploy your app when a file changes.
The only reason to go with the first two options is to have this file under version control. If you don't need that, I would recommend going with the GCS option.
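For the GCS option, uploading programmatically is only a few lines; this sketch uses the google-cloud-storage Python client, with placeholder bucket and object names:

    from google.cloud import storage

    def upload_shared_file(bucket_name, local_path, object_name):
        client = storage.Client()  # uses the app's ambient credentials on GAE
        blob = client.bucket(bucket_name).blob(object_name)
        blob.upload_from_filename(local_path)
        # for a publicly readable bucket, the file is then served at:
        return "https://storage.googleapis.com/%s/%s" % (bucket_name, object_name)
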

AngularJS and Ruby on Rails - Uploading multiple files directly to Amazon S3

I'm writing an app where users can write Notes, and each note can have many files attached to it.
I would like users to be able to click 'Browse', select multiple files, which will be uploaded when the user clicks 'Save Note'.
I want these files to be uploaded directly into Amazon S3 (or some other cloud storage solution?) without going through my server, so I don't have to worry about uploads blocking my server.
What is the best way to accomplish this?
I have seen many examples of uploading directly into Amazon S3, but none of them support multiple files. Will I somehow have to do this all in JavaScript by looping through a collection of files selected with the Browse button?
Thanks!
Technically, the JavaScript running in the browser could make RESTful HTTP calls to AWS and store the data in S3, but then you would be exposing the security credentials for AWS in the script... not good.
I guess the only way is to process it through a web server, which can securely access AWS and store the notes (a sketch follows), or you could just write those notes to a local disk (where the web server sits) and schedule a tool like s3cmd to automatically sync them with S3 buckets.
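The server-side half of that first approach is straightforward; here is a minimal sketch with boto3 (Python rather than Rails, and the bucket name and key layout are placeholders):

    import boto3

    s3 = boto3.client("s3")  # AWS credentials stay on the server

    def save_attachment(note_id, filename, data):
        key = "notes/%s/%s" % (note_id, filename)
        s3.put_object(Bucket="my-notes-bucket", Key=key, Body=data)
        return key
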

Server-side setup for an application which needs easy access to a data source

I need to make a couple of mobile applications which will all access a shared online resource using, e.g., a REST API.
What is the cheapest/easiest setup for the server side resource?
The server should store data as either xml/json/sqlite and expose an API to access this data, preferably in a secure manner.
Is Google App Engine appropriate? Any others?
What would be a recommended way to implement?
What I want to do is have a database online (not important which format - the content will not be too big, ~5000 records with around 5-10 text fields each), have a simple management console for editing this content, and then let mobile devices connect in order to check whether they have the latest data and update if required.
The data should not be publicly available, but a key may be hardcoded into the device applications (see the sketch below).
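A minimal sketch of the kind of API described, assuming Flask; the endpoint names, version scheme, and hardcoded key are all illustrative:

    from flask import Flask, jsonify, request, abort

    app = Flask(__name__)
    API_KEY = "key-hardcoded-into-the-apps"   # per the question's security model
    DATA_VERSION = 7                          # bump whenever root edits the content
    RECORDS = [{"id": 1, "fields": {"title": "example"}}]

    def require_key():
        if request.headers.get("X-Api-Key") != API_KEY:
            abort(403)

    @app.route("/version")
    def version():
        require_key()
        return jsonify(version=DATA_VERSION)

    @app.route("/data")
    def data():
        require_key()
        return jsonify(version=DATA_VERSION, records=RECORDS)

Devices would call /version first and fetch /data only when their stored version number is stale.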
