I am a newbie to the cloud and am trying to understand a few common issues that people in the field have already solved.
So far I have created a Docker image for my Java-based web application. I have also created an Oracle Database 11g XE image with a database imported by default. Finally, I pushed these Docker images to the AWS repository and deployed them to EC2 instances as Docker containers. I am able to access my web application using the public IP and everything looks good except for one thing: when the EC2 instance goes down or gets recreated for some reason, the database container is recreated with the original database, and I lose all the data that was created after setting up my application for the first time.
I know this is a common issue in the container world; I just want to know how people solve it.
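For reference, here is a stripped-down sketch of how a database container could be started so that its data survives recreation, which I understand is the usual fix: the data directory is mounted on a named volume (or an EBS-backed host path) instead of living inside the container. This uses the Docker SDK for Python purely for illustration; the image name and the Oracle data path are placeholders, not my exact setup.

```python
import docker

client = docker.from_env()

# Create a named volume once; it outlives any container that mounts it.
client.volumes.create(name="oracle-data")

# Start the database container with the (assumed) Oracle data directory
# mounted on that volume. "oracle/xe-11g" is a placeholder image name.
client.containers.run(
    "oracle/xe-11g",
    detach=True,
    name="oracle-xe",
    ports={"1521/tcp": 1521},
    volumes={"oracle-data": {"bind": "/u01/app/oracle/oradata", "mode": "rw"}},
)
```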
I am very new to using MS Access with web forms.
I want to create a web form (using PHP) and have the data inserted into an MS Access database. However, the database (.accdb file) is currently stored on a network drive.
My question is:
Will the database (.accdb file) have to be moved to the web server in order for me to accomplish this? I need to keep the database on the network drive so that everyone in the office can use it. That is, the office staff needs real-time access to the database as it is being updated. If the database (.accdb file) is moved to the web server, then the non-technical users won't be able to access it. Plus, this would create all sorts of security issues with the IT department.
I am aware that I can create a web app with MS Access. However, this will be very tedious given that the current database is quite old.
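To make the question concrete, this is the kind of insert I have in mind, sketched in Python with pyodbc for brevity (PHP's PDO_ODBC would use the same connection string). The share path, table, and columns are made up; the web server would need the Access ODBC driver installed and read/write permission on the share.

```python
import pyodbc

# Hypothetical UNC path to the .accdb that stays on the network drive.
conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=\\fileserver\office\contacts.accdb;"
)

cur = conn.cursor()
# Hypothetical table and columns; the values would come from the web form.
cur.execute("INSERT INTO Contacts (FullName, Email) VALUES (?, ?)",
            "Jane Doe", "jane@example.com")
conn.commit()
conn.close()
```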
I am a new user of OpenStack Trove. As far as I can tell (from the process of creating a datastore and a database in Trove), Trove works like this: for each datastore instance there is a Nova compute instance that it is launched on (and also a Cinder volume assigned to that instance). Therefore there is no centralized database that could be managed by the OpenStack administrator. As far as I know there are two types of cloud database: 1) virtual-machine-image databases and 2) DBaaS. For DBaaS, it should not work like having a virtual machine instance per database, and database provisioning should be manageable by the system administrator (not the client). Could somebody explain to me how Trove works and why we can consider it DBaaS rather than a virtual-machine-image database?
Regards.
Trove is an OpenStack service that provides an API for provisioning a relational or non-relational database for a user. This database can then be used by a deployed application. So, in your terms, it is a database-as-a-service solution.
Trove provisions an instance from an image for a single tenant; it is not meant to be a centralized database for OpenStack itself.
For more details about Trove, see:
https://wiki.openstack.org/wiki/Trove
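To make the "service" part concrete, here is a minimal sketch of what a client (or administrator) sends to the Trove REST API to have a database provisioned. The endpoint, token, flavor, and datastore values are placeholders you would get from your own Keystone/Trove deployment; Trove itself takes care of launching the Nova instance and attaching the Cinder volume behind the scenes.

```python
import requests

# Placeholders: obtain a token and the Trove endpoint from Keystone first.
TROVE_ENDPOINT = "http://controller:8779/v1.0/<project_id>"
TOKEN = "<keystone-token>"

# Request a new database instance; the flavor id, volume size, datastore,
# and database/user names below are all example values.
body = {
    "instance": {
        "name": "customer-db",
        "flavorRef": "2",
        "volume": {"size": 2},
        "datastore": {"type": "mysql", "version": "5.6"},
        "databases": [{"name": "appdb"}],
        "users": [{"name": "appuser", "password": "secret",
                   "databases": [{"name": "appdb"}]}],
    }
}

resp = requests.post(f"{TROVE_ENDPOINT}/instances",
                     json=body,
                     headers={"X-Auth-Token": TOKEN})
print(resp.status_code, resp.json())
```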
I am working on a PhoneGap project to build a cross-platform mobile app, and I learned from a website that the app's database can be deployed/built with database.com.
The procedure is well explained, but I have one question:
How do I sync the database on database.com with a database on a local server?
For example, if a client has the database for his desktop application on his local server and now needs a mobile app for it, what is the procedure in database.com to sync his server's database with the database on database.com?
PS: I need to use database.com for my database because I want to maintain it in the cloud, and I do not have the capability to maintain a local server.
You might need a service for data syncing if it is to happen more than once. I work on a project that does exactly this:
www.overcast-suite.com
Otherwise, model your tables as Salesforce Custom Objects, export the data on the local server to CSV, and use the Data Loader to import it.
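If you go the Data Loader route, the export step can be as small as this sketch. Here sqlite3 stands in for whatever the local server actually runs, and the table and field API names are invented for illustration; the important part is that the CSV headers match the field API names on the Custom Object.

```python
import csv
import sqlite3  # stand-in for the client's actual local database

# Hypothetical local table "customers".
conn = sqlite3.connect("local_copy.db")
rows = conn.execute("SELECT name, email FROM customers").fetchall()
conn.close()

with open("customers_for_dataloader.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "Email__c"])  # field API names on the Custom Object
    writer.writerows(rows)
```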
So, I'm inexperienced in hosting DBs and have always had the luxury of someone else getting the DB set up.
I was going to help a friend out with getting a web page set up; I've got experience with ASP.NET MVC, so I'm going with that. They want to set up a search page that queries a DB and displays the results. My question is about getting the DB set up and hosted. They currently just have the Access DB on a local computer, and there is basically only one table that would need to be queried for the search.
What is the best approach to making this table/DB accessible? They would like to keep the main copy of the DB on the local machine, and copying the entire DB over to the hosted site would be time-consuming; could just the one table that's needed be copied to the host? Or should I try to convince them to make changes on the hosted DB and just make copies of that for their local machines? Any suggestions are welcome. Again, I'm a total noob when it comes to hosting databases.
Thanks
Added: They are using MS Access 2000, and the page will have access restrictions. Thanks for the responses.
How about SQL Server Express? I think you can connect to it remotely from Access and just push the data over.
I wouldn't use Access on a web server in any case.
I would strongly recommend against Access for web work; it's just not designed for it, and given that SQL Server Express is free, there is no reason not to give it a go.
You can migrate the data over by using the SQL Server Upsizing Wizard; here is a link with help on using that feature:
http://support.microsoft.com/kb/237980
It depends on what you mean by web work. Access 2010 can build scalable, browser-neutral web applications. They can scale to thousands of users. In fact, you can even park the web sites on Microsoft's new cloud hosting options and scale out to as many users as you need.
Here is a video of an application I wrote in Access 2010. Note how at the halfway point I run the same application, including the Access forms, in a standard web browser. This application was built 100% inside of the Access client, and the end result needs no ActiveX or Silverlight to run.
http://www.youtube.com/watch?v=AU4mH0jPntI
So, the above shows that Access can now be used to build scalable web sites (you can ignore the confusing answers by the other two posters here; they are not quite up to speed on how Access works).
However, for your case, I would continue to have the Access database on the desktop. You can simply link to tables that are hosted on the web server; those tables can live in MySQL or SQL Server. As long as the web host supports external ODBC connections (many do), the desktop application can use the live data from the web server.

If connecting to the live data at all times is an issue, then you could certainly set something up to send new records (or the whole table) up on some kind of interval, or perhaps the reverse, and pull new records down from the web site on an interval (depending on which way you need to go). Connecting to MySQL or SQL Server is quite easy as long as the web host and site permit external ODBC connections. I do this all the time, and it works quite well.
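To illustrate the interval-sync idea, here is a rough sketch in Python with pyodbc, purely for illustration; every path, DSN, table, and column name is a placeholder for whatever the real schema uses. The same logic could be scheduled as a task, or written in VBA or on the ASP.NET side.

```python
import pyodbc

# Placeholders: the Access file path, the hosted database DSN, the table,
# and the watermark column all depend on your own schema.
ACCESS_CONN = (r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
               r"DBQ=C:\data\office.accdb;")
HOSTED_CONN = ("Driver={MySQL ODBC 8.0 Unicode Driver};"
               "Server=example-host;Database=webdb;Uid=webuser;Pwd=secret;")

def push_new_records(last_id):
    """Copy rows added locally since last_id up to the hosted table."""
    src = pyodbc.connect(ACCESS_CONN)
    dst = pyodbc.connect(HOSTED_CONN)

    rows = src.cursor().execute(
        "SELECT ID, Title, Price FROM Inventory WHERE ID > ?", last_id
    ).fetchall()

    cur = dst.cursor()
    for row in rows:
        cur.execute("INSERT INTO Inventory (ID, Title, Price) VALUES (?, ?, ?)",
                    row.ID, row.Title, row.Price)
    dst.commit()

    src.close()
    dst.close()
    # Return the new watermark so the next run only sends newer rows.
    return max((r.ID for r in rows), default=last_id)
```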
As mentioned, new for Access 2010 is the web site building ability, but that does require Access Web Services running on SharePoint.
You don't need to upgrade to Access 2010. One option is to use the EQL Data plugin to sync the database up to the server. Then you can write an ASP.NET, PHP, or whatever application that queries the table using the EQL API and prints the results however you want. This KB article describes how to use the EQL API from a web app.
The nice thing is that the database is still totally usable (and at full speed) even when you're not online, and then you can sync the new data up to the web occasionally. It only uploads the changes, not the entire database every time, so it's fast.
Disclaimer: I work at EQL Data so I'm a bit biased. But this kind of use case is the whole reason the company exists.