DB data lost on each CloudFoundry deploy of a Grails application

I'm developing a Grails 2.0.3 application using STS.
Before closing STS I usually deploy my application to CloudFoundry.
I'm using H2 (per the JDBC settings below), and this is DataSource.groovy:
dataSource {
    pooled = true
    driverClassName = "org.h2.Driver"
    username = "mcg"
    password = "mcg"
}
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.provider_class = 'net.sf.ehcache.hibernate.EhCacheProvider'
}
// environment specific settings
environments {
    development {
        dataSource {
            dbCreate = "update" // one of 'create', 'create-drop', 'update'
            url = "jdbc:h2:file:qhDB"
        }
    }
    test {
        dataSource {
            dbCreate = "update"
            url = "jdbc:h2:file:testDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:h2:file:prodDb"
        }
    }
}
My problem is that each time I deploy the application to CloudFoundry, the database on the cloud is empty again.
Any suggestions?

#kenota is correct, but there's the additional risk that the entire instance can crash and get rebuilt, so you would lose all filesystem files, even in /tmp. You're much better off using MySQL or PostgreSQL: both are trivial to use in CloudFoundry and will perform much better. In addition, if you have enough traffic to need multiple web server instances, they will share one database instead of multiple file-based databases that each hold different data.

By doing this:
url = "jdbc:h2:file:prodDb"
you are asking H2 to store its data in a file. The problem is that you are using a relative path, so the file is created in the current working directory of the web application, which is usually the unpacked web app root.
If you run it on Tomcat, the file will be located at /opt/tomcat7/webapps/app/prodDb. If you redeploy your application and delete the previous one, the database file is deleted as well.
I think that is exactly what is happening on CloudFoundry.
You should define an absolute path for your database:
url = "jdbc:h2:file:/tmp/prodDb"

I solved this by using the MySQL service on CloudFoundry.
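For anyone doing the same, a production block switched to MySQL might look like the sketch below. The URL and credentials are placeholders; with a MySQL service bound to the application, CloudFoundry's auto-reconfiguration typically overrides them at deploy time.
production {
    dataSource {
        dbCreate = "update"
        driverClassName = "com.mysql.jdbc.Driver"
        // Placeholder values; a bound CloudFoundry MySQL service
        // normally overrides these via auto-reconfiguration.
        url = "jdbc:mysql://localhost/prodDb"
        username = "user"
        password = "password"
    }
}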

Related

ASP.NET Core using Distributed SQL Server Cache for session state

I am trying to use the distributed SQL Server cache for session state in an ASP.NET Core 6 application.
The code example in the Microsoft documentation shows how to set up session state using the in-memory cache:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/app-state?view=aspnetcore-6.0
But our application will be deployed to multiple servers, so I am looking for a way to use the distributed SQL Server cache for session state instead.
Here is the code I am using in Program.cs:
builder.Services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = builder.Configuration.GetConnectionString(
        "DistCache_ConnectionString");
    options.SchemaName = "dbo";
    options.TableName = "TestCache"; // we have the corresponding table "TestCache" set up
});
builder.Services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromMinutes(10);
    options.Cookie.IsEssential = true;
});
...
app.UseSession();
When I try to set a session value in a controller, id is empty, a is null, and the "TestCache" table stays empty:
HttpContext.Session.SetString("name", "test");
var id = HttpContext.Session.Id;
var a = HttpContext.Session.GetString("name");
Did I miss anything in the configuration? I searched online and found some examples using similar code, but all of them seem to target .NET Core 2.* or 3.*.
Did anything change in .NET 6?
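One detail worth checking that the snippet above doesn't show: in the .NET 6 minimal hosting model, app.UseSession() must be registered before the endpoints that read or write session state. A minimal Program.cs sketch under that assumption, reusing the names from the question (the controller wiring is assumed):
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();
builder.Services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = builder.Configuration.GetConnectionString(
        "DistCache_ConnectionString");
    options.SchemaName = "dbo";
    options.TableName = "TestCache";
});
builder.Services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromMinutes(10);
    options.Cookie.IsEssential = true;
});

var app = builder.Build();

app.UseRouting();
// Session middleware must come before the endpoints that use it.
app.UseSession();
app.MapDefaultControllerRoute();

app.Run();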

How to use multiple database connections in Sails.js?

I'm new to Sails and I want to know how I can use multiple database connections in my app. Right now I'm using MySQL, but I want to add Mongo to store some data as well.
Yes, you can do this with Sails.
If you are using a version < v1
First, add a connection for each database in the connections.js file located in the config folder, and make sure they have different names.
For example
someDBServer1: {
    adapter: 'sails-mongo',
    ...
},
someDBServer2: {
    adapter: 'sails-mysql',
    ...
},
Then in each model you can set the respective connection. Say, for example, you have your User data stored in one database and your images stored in another.
You can set the connection in the User model like this:
module.exports = {
    connection: 'someDBServer1',
    attributes: { ...
And in the Images model using the other connection:
module.exports = {
    connection: 'someDBServer2',
    attributes: { ...
For Sails version >= v1
The setup is very similar.
Database connections are defined as datastores in the Sails config config/datastores.js.
Then, as above, rather than setting the connection on the specific model, you set the datastore.
For more info see the Sails ORM Documentation or the Sails v1 ORM Documentation.
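A minimal config/datastores.js sketch for v1 (the datastore names and connection URLs here are illustrative, and it assumes the sails-mysql and sails-mongo adapters are installed):
// config/datastores.js (Sails v1)
module.exports.datastores = {
  default: {
    adapter: 'sails-mysql',
    url: 'mysql://user:password@localhost:3306/mydb',
  },
  mongoStore: {
    adapter: 'sails-mongo',
    url: 'mongodb://localhost:27017/mydb',
  },
};

// api/models/Image.js -- point a single model at the Mongo datastore
module.exports = {
  datastore: 'mongoStore',
  attributes: { /* ... */ },
};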
You can add all of your adapters in config/datastores.js. If you want all of your models to use a single adapter, you just need to change the default model settings in config/models.js; if you want to override the settings for a particular model, change that model's definition file.
You can find more information in the docs under Model settings.

Remote API, Objectify and the DevServer don't like transactions?

I am using Objectify 4 to write to the HRD datastore. Everything works fine in unit tests and when running the application on the devserver or in production.
But when I connect to the devserver datastore using the Remote API, an error is thrown as soon as the code starts an XG (cross-group) transaction. When connected through the Remote API, it seems to think that HRD is not enabled.
This is how I connect:
public static void main(String[] args) {
    RemoteApiOptions options = new RemoteApiOptions().server("localhost", 8888).credentials("foo", "bar");
    RemoteApiInstaller installer = new RemoteApiInstaller();
    StoredUser storedUser = null;
    try {
        installer.install(options);
        ObjectifyInitializer.register();
        storedUser = new StoredUserDao().loadStoredUser(<KEY>);
        log.info("found user : " + storedUser.getEmail());
        // !!! ERROR !!!
        new SomeOtherDao().doSomeDataManipulationInTransaction();
    } catch (Throwable e) {
        e.printStackTrace();
    } finally {
        ObjectifyFilter.complete();
        installer.uninstall();
    }
}
When new SomeOtherDao().doSomeDataManipulationInTransaction() starts a transaction across multiple entity groups, this error is thrown:
transactions on multiple entity groups only allowed in High Replication applications
How can I tell the Remote API that this is an HRD environment?
If your application is using the High Replication Datastore, add an explicit s~ prefix (or e~ prefix if your application is located in the European Union) to the app id.
For the Java version, add this prefix in the application tag in appengine-web.xml, then deploy the version where you have activated the remote_api servlet.
Example
<application>myappid</application>
become
<application>s~myappid</application>
Source: https://developers.google.com/appengine/docs/python/tools/uploadingdata#Python_Setting_up_remote_api
I had the 'unapplied job percentage' set to 0, and transactions over the Remote API failed as if the devserver were running with Master/Slave instead of HRD. Raising the 'unapplied job percentage' above zero fixed the problem.
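For the Java devserver, the unapplied job percentage can be raised with a JVM system property; a sketch of the invocation (the exact form depends on how you launch the devserver, e.g. from the SDK's command-line launcher):
dev_appserver.sh --jvm_flag=-Ddatastore.default_high_rep_job_policy_unapplied_job_pct=20 path/to/war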

Handling the Drupal settings.php file when using Git across multiple servers

We are using Drupal 6 for a number of our web sites, and we are moving them all into Git for version control. Each site will have a dev server, a test server, and a live server. I'm wondering about best practices for handling the settings.php file, since the database connection info will obviously differ between the servers.
I've seen solutions ranging from switch statements to an include file. The include file solution described here http://drupaldork.com/2011/11/local-settings-development-sites seems good, but I'm wondering what you end up leaving in the ACTUAL settings.php file. In other words, if each server has a "local" settings file like settings.local.php containing the connection info for that particular server, do you remove the connection info from the root settings.php, or do you leave it? If you leave it, what do you put there? Does it even matter, since it just gets overridden by the local settings file anyway? Should the connection info in the root settings.php be some kind of default?
One approach I would prefer is to not keep settings.php in Git at all:
https://help.github.com/articles/ignoring-files
In our case, we keep the codebase under Git but the settings.php files are ignored, so the production, sandbox, and local environments each have their own settings.php file.
We keep two settings.php include files in the repo, but not the base settings.php.
My settings.php file for production is as normal: just database settings and default stuff.
For development, my settings.php file has the database settings and an include of a file stored in the repo called settings.dev.php:
# Additional site configuration settings.
if (file_exists('/Users/User/Sites/site.com/sites/default/settings.dev.php')) {
    include_once('/Users/User/Sites/site.com/sites/default/settings.dev.php');
}
settings.dev.php includes switches to turn off caching and to set the environment indicator:
// Secure Pages
$conf['securepages_enable'] = FALSE;
// Environment Indicator
$conf['environment_indicator_color'] = 'blue';
$conf['environment_indicator_enabled'] = TRUE;
$conf['environment_indicator_text'] = 'Development Server';
// Robots disable
$conf['robotstxt'] = 'User-agent: *
Disallow: /';
// Turn off Caching and such
$conf['cache'] = FALSE;
$conf['page_compression'] = FALSE;
$conf['preprocess_css'] = FALSE;
$conf['css_gzip'] = FALSE;
$conf['preprocess_js'] = FALSE;
$conf['javascript_aggregator_gzip'] = FALSE;
settings.php is ignored in the repo but settings.dev.php is included. We also keep a settings.stage.php in the repo.
Setting values in the production settings.php needs to be done very carefully, as it can interfere with some modules and prevents you from quickly changing settings when needed. But you can do the same thing with a settings.prod.php.
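Putting the pieces together for Drupal 6, the per-server settings.php kept out of Git can stay as small as this sketch; the paths and credentials are illustrative only:
<?php
// settings.php -- per-server copy, ignored by Git; values are examples only.
$db_url = 'mysqli://dbuser:dbpass@localhost/sitedb';

// Pull in the environment-specific overrides tracked in the repo, if present.
$local_settings = dirname(__FILE__) . '/settings.dev.php';
if (file_exists($local_settings)) {
  include_once $local_settings;
}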

Web site using active directory groups slows to a crawl intermittently

I have an ASP.NET MVC web site on an intranet. Access to the site is determined by groups in Active Directory; there are 4 different groups, each having different access within the site.
I have been having occasional problems with the site running slowly. The site will run fine for several days, then suddenly slow to a crawl. I have both a test site and a production site, and when the slowdown occurs both sites are affected equally. I also have a test site with no Active Directory access, and it runs with no problems while these two sites are crawling.
The sites I am having trouble with run under a user account, because the application has to reach out to another share on the intranet in order to print and merge PDF files. The sites are running under the same application pool. When the problem occurs, all pages are equally slow, even pages with no database activity. When the problem occurs I reset IIS, restart the web sites, and recycle the threads, but the only thing that actually resolves the problem is restarting the server. Sometimes it takes an additional restart to get the site back to normal.
Here are a few things I have tried. The problem seems to be occurring less often, but it still occurs.
1. Reduce the numbers of times that I query active directory.
2. Reset IIS when the problem occurs. This has not helped.
3. Recycle application pools.
4. Restart the sql server service
5. Made sure fully qualified names are used when referring to servers. This seems to have reduced the problem somewhat, though I'm not sure.
I am using IIS 7 on a Windows 2008 server, 64-bit. Here is the group membership check:
user = ConfigurationManager.AppSettings["TravelCardUser.AD_GroupName"];
approver = ConfigurationManager.AppSettings["TravelCardApprover.AD_GroupName"];
maintenance = ConfigurationManager.AppSettings["TravelCardMaintenance.AD_GroupName"];
admin = ConfigurationManager.AppSettings["TravelCardAdmin.AD_GroupName"];
testuser = ConfigurationManager.AppSettings["TestUser"];

List<string> adgroups = new List<string>();
adgroups.Add(admin);
adgroups.Add(approver);
adgroups.Add(maintenance);
adgroups.Add(user);
this.groups = adgroups;

List<string> groupmembership = new List<string>();
foreach (var group in groups)
{
    if (!String.IsNullOrEmpty(testuser))
    {
        this.username = testuser;
    }
    else
    {
        this.username = currentloggedinuser;
    }

    using (var ctx = new PrincipalContext(ContextType.Domain))
    using (var groupPrincipal = GroupPrincipal.FindByIdentity(ctx, group))
    using (var userPrincipal = UserPrincipal.FindByIdentity(ctx, username))
    {
        if (groupPrincipal != null)
        {
            try
            {
                if (userPrincipal.IsMemberOf(groupPrincipal))
                {
                    groupmembership.Add(group);
                }
            }
            catch (Exception ex)
            {
                string theexception = ex.ToString();
            }
        }
    }
}
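As a side note on point 1 (reducing AD queries): the loop above issues a FindByIdentity round trip per group. A sketch of an alternative that resolves the user's groups with a single query and compares in memory, assuming the same groups and username fields as above (requires System.Linq):
// Sketch only: one query for all of the user's groups instead of one per group.
using (var ctx = new PrincipalContext(ContextType.Domain))
using (var userPrincipal = UserPrincipal.FindByIdentity(ctx, username))
{
    if (userPrincipal != null)
    {
        var userGroups = new HashSet<string>(
            userPrincipal.GetAuthorizationGroups().Select(g => g.Name),
            StringComparer.OrdinalIgnoreCase);
        groupmembership = groups.Where(g => userGroups.Contains(g)).ToList();
    }
}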
Here is my LDAP connection string:
<add name="ADConnectionString_UserRole" connectionString="LDAP://locationX/cn=TravelCardUser,ou=LocationXgroupsGroups,dc=acme,dc=us,dc=com" />
The server slows down every 3 or 4 days, so I shut down the application pools for my applications and used Sysinternals to monitor processes for 3 days:
http://technet.microsoft.com/en-us/sysinternals/bb896653
I am seeing that processes related to SQL Server and Team Foundation Server grab resources but do not release them. By the way, I ran my ASP.NET code through Red Gate Memory Profiler and there are no memory leaks. Now I have to figure out what to do about the memory usage problem.
