Why a cloud application's file system is ephemeral - 12factor

The "Beyond 12 factor APP" and "Considerations for Designing and Running an Application in the Cloud "(https://docs.cloudfoundry.org/devguide/deploy-apps/prepare-to-deploy.html)
states file system is ephemeral.
However, I got a different result when testing with OpenStack:
create a VM using openstack server create with a CentOS qcow2 image, no external storage
ssh to the VM and create a file under /home/centos
reboot the VM
after the VM starts up, the file is still there.
Did I misunderstand something?
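For reference, the test can be reproduced with something like this (the image, flavor, and network names, and the VM address, are placeholders, not taken from the question):

    # boot a VM with only its ephemeral root disk, no attached volume
    openstack server create --image centos7 --flavor m1.small \
        --network private myvm

    # write a file, reboot, and check again
    ssh centos@<vm-ip> touch /home/centos/testfile
    openstack server reboot myvm
    ssh centos@<vm-ip> ls /home/centos/testfile    # still present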
Quote from the book:
cloud-friendly applications don’t just run in the cloud; they embrace elastic scalability, ephemeral filesystems
And in the "Logs" chapter: Cloud applications can make no assumptions about the file system on which they run, other than the fact that it is ephemeral.
quote from "Considerations for Designing and Running an Application in the Cloud " :
"Avoid Writing to the Local File System": "Local file system storage is short-lived."..."When an application instance crashes or stops, the resources assigned to that instance are reclaimed by the platform including any local disk changes made since the app started. When the instance is restarted, the application will start with a new disk image. Although your application can write local files while it is running, the files will disappear after the application restarts."

The meaning is that when running containerized applications you can't trust the file system to be long-lived between restarts: it may be purged, or next time you might be running on a different instance.
It doesn't mean the data is guaranteed to disappear - just that it isn't guaranteed to stay - very much like a temp folder on a regular server.

Ephemeral (non-persistent) storage is given to guests by default; if persistent storage is required for the apps, Cinder can be used.
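For example, with the standard CLI (volume and server names, size, and device are placeholders):

    # create a 10 GB Cinder volume and attach it to the instance
    openstack volume create --size 10 data-vol
    openstack server add volume myvm data-vol

    # inside the guest: format and mount it (device name may differ)
    sudo mkfs.ext4 /dev/vdb
    sudo mount /dev/vdb /mnt/data

Note that a plain reboot does not reclaim an instance's ephemeral disk; it survives until the instance is deleted or rebuilt, which is why the file in the question was still there. Data on an attached Cinder volume persists even across instance deletion.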

Related

What should I do when Heroku clears my database after deployment? [duplicate]

I have a small Node.js / Express app deployed to Heroku.
I'd like to use a lightweight database like NeDB to persist some data. Is it possible to periodically back up / copy a file from Heroku if I use this approach?
File-based databases aren't a good fit for Heroku due to its ephemeral filesystem:
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno’s lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted. For example, this occurs any time a dyno is replaced due to application deployment and approximately once a day as part of normal dyno management.
Depending on your use case, I recommend using a client-server database (which looks like a good fit here) or something like Amazon S3 for file storage.
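As a sketch of the client-server route, a Heroku Postgres add-on can be provisioned and backed up from the CLI (the plan name is an assumption and has changed over time):

    # provision a managed Postgres database (plan name may differ)
    heroku addons:create heroku-postgresql:hobby-dev

    # capture and download a durable backup, outside the dyno filesystem
    heroku pg:backups:capture
    heroku pg:backups:download

Anything the app must keep should go into the database or S3, never onto the dyno's disk.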

OpenShift 3 Webapp With Access to File System

I have a Tomcat 8 web app running on OpenShift 3.
I want to be able to read and write files on 'the file system'.
I have been wading through documentation and looking for examples of how to achieve this.
I see that there are many types of persistent storage, for example NFS, EBS, GlusterFS etc.
So, my first question is:
What is the best file system to use for simple read/write access to text-based XML files?
Preferably something like a *nix file system.
Any example would be much appreciated...
The free OpenShift 3 Starter service ONLY allows 'filesystem storage' backed by EBS (Amazon Elastic Block Store), which can be mounted for writing by only one node at a time (ReadWriteOnce).
To get access to GlusterFS or NFS you have to go to the paid service, which starts at $50 per month. Those are the only filesystems offered that allow shared, multi-writer access.
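For illustration, mounting a persistent volume claim into the deployment with the OpenShift 3 CLI looks roughly like this (the deployment, claim, and volume names, the size, and the mount path are placeholders):

    # create a 1 GiB claim and mount it into the Tomcat deployment
    oc set volume dc/my-tomcat-app --add --name=xml-data \
        --type=persistentVolumeClaim --claim-name=xml-pvc \
        --claim-size=1Gi --mount-path=/opt/app-root/data

To the application this is an ordinary *nix directory; whether several replicas can write to it at once depends on the access mode of the backing storage (EBS is ReadWriteOnce, while NFS and GlusterFS support ReadWriteMany).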

Restore app-engine entities locally

Hi guys, I've dumped (made a backup of) my App Engine datastore entities, following this tutorial. Now I wonder if there is a way to restore the data locally so I can do some testing and debugging.
On Windows, the datastore is in the directory
C:\Users\UserName\AppData\Local\Temp\AppName
On OS X, this question can help you.
This directory stores datastore.db (the local storage); rename it (the app should not be running, and if the file is locked, kill all the Python processes).
Now go to the App Engine dashboard
click on your app link
click on Blob Viewer (I'm assuming that you did the backup into a blobstore)
click on the file name
click on download
rename the file to datastore.db
copy it to the previous path
start the app
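On Windows, the file swap amounts to something like this (the downloaded file name is a placeholder):

    :: stop any running dev server so datastore.db is not locked
    taskkill /IM python.exe /F
    cd C:\Users\UserName\AppData\Local\Temp\AppName
    ren datastore.db datastore.db.bak
    copy %USERPROFILE%\Downloads\backup.db datastore.db
    :: then start the app again with the dev server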
Remote API (as koma mentions) is the main GAE-documented approach, and it's a good approach. Alternatively, you can download the entities using the cloud download tool, write your own store reader/deserializer, and execute it within your dev server local instance: http://gbayer.com/big-data/app-engine-datastore-how-to-efficiently-export-your-data. Read the part about the New Approach...
While these options are not automatic and require engineering, I really wanted to point out the side effect of doing this: We have been facing performance issues in the local development server for months now, specifically when the datastore has more than 1,000 entities with over 50 indexes. Just search for "require_indexes slow" and you'll see what I'm talking about.
I'm sure you have a solid reason to import lots of data locally for testing and debugging; I just wanted to let you know your application will perform extremely slowly, and debug mode will be impossibly slow; we can't even use debug mode with our setup anymore.
If you want to get some test data into your local db, you could copy some using the remote API.
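A rough sketch using the bulkloader from the old Python SDK, assuming the remote_api builtin is enabled in app.yaml (the app URL and filenames are placeholders, and flags vary by SDK version):

    # download entities from production over the remote API...
    appcfg.py download_data --url=https://myapp.appspot.com/_ah/remote_api \
        --filename=data.sqlite3

    # ...and load them into the local dev server
    appcfg.py upload_data --url=http://localhost:8080/_ah/remote_api \
        --filename=data.sqlite3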

HTML5 client-side database location

I want to create a standalone todo list with HTML5. Ideally, it would be a file that sits on a USB drive and could leverage a database (either embedded in the html file somewhat like tiddlywiki) or would access another file.
I've read about Web SQL and SQLite, but it seems like they save the database information to a specific location on the local computer, and the user has no control over whether to place it elsewhere. The app wouldn't really be portable if it saves a different instance for every machine on which it's run.
Is there a solution to force the database to reside on the portable drive?
Client & Server Storage
SQLite
SQLite is a database engine that needs to be specifically installed on a machine or packaged with software. This type of technology is usually used on the server side with a server programming language like PHP. Therefore, I do not believe that server-side storage solution is for you.
HTML5 Client Storage
Meanwhile, Web SQL is an HTML5 feature for client storage. The databases are managed by the client (the browser) through JavaScript. The implementation of WebSQL is very similar to SQLite. Note that Firefox supports IndexedDB instead of WebSQL. Sadly, you can't force the browser to store the databases onto an external drive.
The Solution
At this point, the most viable solution for you is to create an application that will be executed from the external drive. If you really want to use HTML5, you could go with Titanium. It leverages the power of web technologies to create native cross-platform applications. According to Titanium's documentation, you can package a SQLite database with your application.

Silverlight Isolated Storage Settings Being Overwritten by Another Silverlight Application

I have a Silverlight application deployed on both our staging and production servers. This application caches information using isolated storage settings. If I browse to
http://stagingserver/pagewithsilverlight.aspx
everything works fine; data is stored to isolated storage on my machine just fine. However, if I browse to
http://productionserver/pagewithsilverlight.aspx
everything works EXCEPT that nothing is saved to isolated storage on my machine. Both pages contain copies of the same xap file. Why would it behave one way when hosted on one server and a different way when hosted on a different server?
Edit: As an additional note, I have verified that data is not being saved to isolated storage by looking at the C:\Documents and Settings\username\Local Settings\Application Data\Microsoft\Silverlight\is directory while running my application.
Edit #2: After further investigation (with Process Monitor), the data is being written to isolated storage, but is then overwritten by another Silverlight application on the same page. This didn't show up in our staging environment since the second application was not deployed there. Somehow, both applications are being given the same isolated storage location. They are unique xap files, so how are they being given the same IsolatedStorage.ApplicationSettings location?
Could it be that your applications are using GetUserStoreForSite() instead of GetUserStoreForApplication() on the IsolatedStorageFile class?
