GWT: how to store information on Google App Engine? - google-app-engine

In my GWT application, a 'root' user uploads a specific text file with data, and that data should be available to anyone who has access to the app (using GAE).
What's the classic way to store data that should be available to all users? I don't want to use any database (Objectify!?) since this is a relatively small amount of information and it only changes from time to time when root uploads it.
I was wondering whether there is some static MAP at the 'engine level' (not the user's session) where this info could be stored (and if the server goes down - no biggie, root will upload it again).
Thanks

You have three primary options:
Add this file to your /war/ directory and deploy it with the app. This is what we typically do with static files that rarely change (.css files, images, etc.). The file will be available to all users, whether they are authenticated or not.
Add this file to your /war/WEB-INF/ directory and deploy it with the app. The file will be available to your server-side code, so you can read it on the server side and show it to a user (see the sketch after this list). This way you can decide which users can see the file and which users should not have access to it.
Upload this file to Google Cloud Storage. You can do it through the app, or you can simply upload it manually to a bucket using the GCS console or the gsutil command-line tool. Then you simply provide a link to your users. The advantage of this option is that you do not have to redeploy your app when the file changes.
The only reason to go with the first two options is to have this file under version control. If you don't need that, I would recommend going with the GCS option.
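For the second option, a minimal sketch of the server-side read (assuming a plain servlet and a file deployed as /WEB-INF/data.txt; both the file name and the servlet are made up for illustration) could look like this:

    // Hypothetical servlet that streams /WEB-INF/data.txt to the caller.
    // Because the file lives under WEB-INF it is never served directly,
    // so you can add your own authorization check before streaming it.
    import java.io.IOException;
    import java.io.InputStream;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class RootDataServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // getResourceAsStream resolves paths relative to the war/ directory.
            try (InputStream in = getServletContext().getResourceAsStream("/WEB-INF/data.txt")) {
                if (in == null) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND, "data.txt not deployed");
                    return;
                }
                resp.setContentType("text/plain");
                byte[] buffer = new byte[4096];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    resp.getOutputStream().write(buffer, 0, read);
                }
            }
        }
    }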

Related

What is the easiest way to push a file to GCP App Engine?

To verify ownership of a domain for a mail service, I need to serve a file with a specific name. Is there a better way than pushing it into my app's source repository?
For security reasons you have to put the file in your source and do a deployment to App Engine. If you've worked with a traditional web server in the past, where you basically dump files into a folder and serve them, this will be a bit of a change: the files deployed to App Engine are read-only at runtime, so you cannot drop new files into a folder after deployment. If you want to add other files on the fly you would need a Cloud Storage bucket, but I don't think that will work for your domain verification.
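If checking the provider's verification file into source control is the part that bothers you, one variation (still a code deployment, and only if your mail provider accepts a dynamically served response - worth checking) is to return the verification content from a tiny servlet. The environment variable name and servlet mapping below are hypothetical:

    // Hypothetical: serve the domain-verification response from a servlet
    // instead of committing the provider's file to the repository.
    // Map this servlet (in web.xml) to the exact path the provider requests.
    import java.io.IOException;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class DomainVerificationServlet extends HttpServlet {
        // Read the token from an environment variable so it never lives in source control.
        private static final String VERIFICATION_TOKEN =
            System.getenv().getOrDefault("MAIL_VERIFICATION_TOKEN", "");

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("text/html");
            resp.getWriter().print(VERIFICATION_TOKEN);
        }
    }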

Where to put SQLite database file in Azure App Service?

Q1: Where do you think is the right place to put a SQLite database file (database.sqlite) in Azure Web App file system? For example:
D:\home\data\database.sqlite
D:\home\site\database.sqlite
D:\home\site\wwwroot\database.sqlite
other?
Q2: What else should be taken into consideration to make sure that the database file won't be accessible to public users and won't be accidentally overwritten during deployments or when the app is scaled up/down? (The Web App is configured for deployments from a local Git repository.)
Q3: Where can I learn more about the file system used in Azure App Service (an official source URL)? E.g. how it's shared between multiple VMs within a single Web App, how it works when the app is scaled up/down, and what's the difference between D:\home (persistent) and D:\local (non-persistent).
Note that SQLite does not work in Azure Blob Storage, so that one is not an option. Please, don't suggest alternative storage solutions, this question is specifically about SQLite.
References
Appropriate Uses For SQLite
In a Web App, your app is deployed to d:\home\site\wwwroot. This is the area where you may write files. As an example, the Ghost deployment writes its SQLite database to d:\home\site\wwwroot\content\data\ghost.db (this is easy to see if you open the Kudu console via yourapp.scm.azurewebsites.net).
This file area is shared amongst your web app instances. It's similar to an SMB file share, but specific to web apps (and different from Azure's File Service).
The content under wwwroot is durable, unless you delete your app service. Scaling up/down impacts the amount of space available. (I have no idea what happens if you scale down and the smaller size has less disk space than what you're consuming already).
I would say the best location would be the app_data folder under site/wwwroot. Create the folder if it doesn't exist.
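A minimal sketch in Java (assuming the xerial sqlite-jdbc driver is on the classpath; the HOME variable resolving to D:\home and the app_data sub-folder are assumptions based on the answers above):

    // Hypothetical sketch: open a SQLite database under D:\home\site\wwwroot\app_data
    // on an Azure Web App, creating the folder if it does not exist.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class SqliteLocation {
        public static Connection openDatabase() throws IOException, SQLException {
            // On Azure App Service (Windows), HOME points at the persistent D:\home area.
            String home = System.getenv().getOrDefault("HOME", "D:\\home");
            Path dataDir = Paths.get(home, "site", "wwwroot", "app_data");
            Files.createDirectories(dataDir); // create app_data if it doesn't exist
            Path dbFile = dataDir.resolve("database.sqlite");
            return DriverManager.getConnection("jdbc:sqlite:" + dbFile);
        }
    }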
Web Apps can connect to storage accounts, so you can in fact use Blob storage and connect it to your web app. To learn more about that, look at the appropriate page of the Azure documentation.
In your Web App settings you can then select which storage account to use. You can find this under Settings > Data Connections, where you can select Storage from the drop-down box.

AngularJS and Ruby on Rails - Uploading multiple files directly to Amazon S3

I'm writing an app where users can write Notes, and each note can have many files attached to it.
I would like users to be able to click 'Browse', select multiple files, which will be uploaded when the user clicks 'Save Note'.
I want these files to be uploaded directly into Amazon S3 (or some other cloud storage solution?) without going through my server, so I don't have to worry about uploads blocking my server.
What is the best way to accomplish this?
I have seen many examples of uploading directly to Amazon S3, but none of them support multiple files. Will I somehow have to do this all in JavaScript by looping through a collection of files selected with the Browse button?
Thanks!
Technically, the JavaScript residing in the browser could make RESTful HTTP calls to AWS and store data in S3, but then you would be exposing the security credentials used to connect to AWS in the script - not good.
I guess the only way is to process it through a web server which can securely access AWS and store the notes. Alternatively, you could just write those notes to a local disk (where the web server sits) and schedule a tool like s3cmd to automatically sync them with an S3 bucket.
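A minimal sketch of that server-mediated path, here using the AWS SDK for Java v1 (a Rails app would use the aws-sdk gem instead; the bucket name and key prefix are placeholders):

    // Hypothetical sketch: the web server receives the uploaded note attachment
    // and stores it in S3 itself, so AWS credentials never reach the browser.
    // Credentials come from the default provider chain (environment, instance profile, etc.).
    import java.io.File;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class NoteAttachmentStore {
        private static final String BUCKET = "my-notes-bucket"; // placeholder

        private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Uploads one attachment that the server already received from the browser.
        public void store(String noteId, File attachment) {
            String key = "notes/" + noteId + "/" + attachment.getName();
            s3.putObject(BUCKET, key, attachment);
        }
    }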

Creating and serving temporary HTML files in Azure

We have an application that we would like to migrate to Azure for scale. There is one thing that concerns me before starting, however:
We have a web page that the user is directed to. The code behind on the page goes out to the database and generates an HTML report. The new HTML document is placed in a temporary file along with a bunch of charts and other images. The user is then redirected to this new page.
In Azure, we can never be sure that the user is going to be directed to the same machine for multiple reasons: the Azure load balancer may push the user out to a different machine based on capacity, or the machine may be deprovisioned because of a problem, or whatever.
Because these are only temporary files that get created and deleted very frequently, I would optimally like to just point my application's temp directory to some kind of shared drive that all the web roles have read/write access to, and then be able to map a URL to this shared drive. Is that possible, or is this going to be more complicated than I would like?
I can still have every instance write to its own local temp directory as well. It only takes a second or two to serve the files, so I'm OK with taking the risk of that instance going down during that brief window. The question in this regard is whether the redirect to the temp HTML file will use HTTP 1.1 and maintain the connection to that specific instance.
thanks,
jasen
There are two things you might want to look at:
Use Windows Azure Web Sites, which supports a kind of distributed filesystem (based on blob storage). Files you store "locally" in your Windows Azure Web Site will be available from each server hosting that Web Site (if you use multiple instances).
Serve the files from Blob Storage. Instead of saving the HTML files locally on each instance (or trying to make users stick to a specific instance), simply store them in Blob Storage and redirect your users there (sketched below).
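A minimal sketch of the Blob Storage option, using the (legacy) Azure Storage SDK for Java; the connection string, container name, and public-read assumption are placeholders you would adapt:

    // Hypothetical sketch: write a generated HTML report to Blob Storage and return
    // its URL, so any instance behind the load balancer can issue the redirect.
    import java.net.URI;

    import com.microsoft.azure.storage.CloudStorageAccount;
    import com.microsoft.azure.storage.blob.CloudBlobClient;
    import com.microsoft.azure.storage.blob.CloudBlobContainer;
    import com.microsoft.azure.storage.blob.CloudBlockBlob;

    public class ReportPublisher {
        public static URI publishReport(String connectionString, String reportName, String html)
                throws Exception {
            CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
            CloudBlobClient client = account.createCloudBlobClient();
            CloudBlobContainer container = client.getContainerReference("temp-reports"); // placeholder
            container.createIfNotExists();

            CloudBlockBlob blob = container.getBlockBlobReference(reportName + ".html");
            blob.getProperties().setContentType("text/html");
            blob.uploadText(html);

            // Redirect the user to this URI (the container must allow public read,
            // or hand out a SAS URL instead).
            return blob.getUri();
        }
    }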
Good stuff from @Sandrino. A few more ideas:
Store the resulting HTML in an in-role cache (which can be co-located in your web role instances) and serve the HTML from the cache (shared across all instances).
Take advantage of the CDN. You can map a "CDN" folder to the actual edge cache, so you generate the HTML in code once and it stays cached until TTL expiry, at which point you must generate the content again.
I think an Azure blob is the best place to store your HTML files so that they can be accessed by multiple instances. You can redirect users to the blob content, or you can write a custom page that renders the content from the blob.

Give a folder certain permissions on App Engine?

I'm completely new to App Engine and I need to give a certain directory in my application permission 733. How would I do that?
Files uploaded with your application are accessible only by your application - or in the case of static content, by everyone, logged in users, or admins only, depending on your authentication settings. Other applications cannot access your files, so the idea of file permissions makes no sense in the context of an App Engine app.
Note that you cannot write to the filesystem from your application. Any dynamically created data must live in the datastore, the blobstore, or memcache.
Post some more details about what you're trying to do, and we can advise how it would be done without writing files.
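As an illustration of the datastore route, a minimal sketch using the GAE Java low-level Datastore API (the "UploadedDocument" kind and property names are made up):

    // Hypothetical sketch: persist dynamically created data in the App Engine
    // datastore instead of writing to the read-only filesystem.
    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.datastore.Text;

    public class DocumentStore {
        private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        public void save(String name, String contents) {
            Entity doc = new Entity("UploadedDocument", name); // key name = document name
            // Text allows values larger than the limit on indexed string properties.
            doc.setUnindexedProperty("contents", new Text(contents));
            doc.setProperty("updatedAt", System.currentTimeMillis());
            datastore.put(doc);
        }
    }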
