ResetDatabase Fullsync transaction does not sync local client database - convertigo

I've added a ResetDatabase transaction in my fullsync connector in order to clean up the whole CouchDB database.
Surprisingly, clients are not synced when I run this transaction. I need to manually clean the local client database in order to refresh all the data. Is this normal behaviour? Am I missing something?

Cleaning the local client database when the CouchDB server has been reset has only been implemented since Convertigo 7.6.0: https://github.com/convertigo/convertigo/issues/22

Related

SymmetricDS: sync client nodes to each other

I have symmetricDS configured so that there is one master node in the cloud, and then two "store" (client) nodes in remote locations.
If I insert data in the cloud, it is synced to both clients. If I insert data in a client, it is synced to the cloud.
However, data added on client1 never makes it to client2 and data added on client2 never makes it to client1...
Any ideas on this?
Thanks
Yes, you would want a second set of triggers (maybe prefix each with the name cloud_*) that has the additional flag sym_trigger.sync_on_incoming_batch=1 turned on. This causes changes coming in as part of replication from clients 1..n to be captured and resent to all other clients.
This can be more efficient than a client-to-client group link solution, because usually the clients do not all have network access to sync with each other. So the change syncs to the cloud and is then redistributed to the other clients.
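As a sketch, the second trigger set could be registered with a configuration row like the one below. The table, channel, and trigger names here are hypothetical placeholders; only the sym_trigger columns and the sync_on_incoming_batch flag come from SymmetricDS itself:

```sql
-- Hypothetical second trigger on the same source table, with incoming-batch
-- capture enabled so that changes replicated in from one client are
-- re-captured and redistributed to the other clients.
insert into sym_trigger
  (trigger_id, source_table_name, channel_id,
   sync_on_incoming_batch, create_time, last_update_time)
values
  ('cloud_item', 'item', 'item_channel', 1, current_timestamp, current_timestamp);
```

You would then link this trigger to the routers that target the client node group, alongside the original trigger set.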

Block all connections to a database db2

I'm working on logs. I want to reproduce a log in which the application fails to connect to the server.
Currently the commands I'm using are
db2 force applications all
This closes all the connections and then one by one I deactivate each database using
db2 deactivate db "database_name"
What happens is that it temporarily blocks the connections, and after a minute my application is able to create the connection again, so I am not able to regenerate the log. Any ideas how I can do this?
What you are looking for is QUIESCE.
By default users can connect to a database. It becomes active and internal in-memory data structures are initialized. When the last connection closes, the database becomes inactive. Activating a database initializes it and leaves it "ready to use".
Quiescing a database puts it into an administrative state in which regular users cannot connect. You can quiesce a single database or the entire instance; see the docs for options to manage access to quiesced instances. The following forces all users off the current database and keeps them off:
db2 quiesce db immediate
(Use db2 unquiesce db to restore normal access afterwards.)
If you want to produce a connection error for an app, there are other options. Have you ever tried to connect to a non-existing port, one Db2 is not listening on? Or revoke the CONNECT privilege from the user trying to connect.
There are several testing strategies you can use; they involve disrupting the network connection between client and server:
Alter the IP routing table on the client to route the DB2 server address to a non-existent subnet
Route the connection through proxy software that can be turned off; ToxiProxy is a special-purpose proxy designed for testing network disruptions
Pull the Ethernet cable from the client machine, observe, then plug it back in (I've done this)
These approaches have the advantage of not disabling the DB2 server for other testing in progress.
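The non-listening-port idea is easy to script from the client side. A minimal, self-contained sketch (the class and method names are mine, and port 1 simply stands in for "a port your Db2 server is not listening on"):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectionFailureDemo {
    // Attempt a plain TCP connection; returns false when the connection
    // fails, which is what a client sees when the DB server is unreachable.
    static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false; // refused, timed out, or unresolved host
        }
    }

    public static void main(String[] args) {
        // Port 1 is almost never listening, so this mimics a down server.
        System.out.println(canConnect("localhost", 1, 500));
    }
}
```

Pointing your JDBC URL at such a port produces the connection error in the application's own logs, without touching the server.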

How can I get Azure to notify me when a db Copy operation is complete?

Our deployment process involves two db copy procedures, one where we copy the production db to our rc site for rc testing, and then one where we copy the production db to our staging deployment slot for rollback purposes. Both of these can take as long as ten minutes, even though our db is very small. Ah, well.
What I'd like to do is have a way to get notified when a db Copy operation is done. Ideally, I could link this to an SMS alert or email.
I know that Azure has a big Push Notification subsystem, but I'm not sure whether it can hook the completion of an arbitrary db copy, or whether there's a lighter-weight solution.
There is some information about copying databases on this page: http://msdn.microsoft.com/en-us/library/azure/ff951631.aspx. If you are using T-SQL, you can check the copy's progress with a query like SELECT name, state, state_desc FROM sys.databases WHERE name = 'DEST_DB'. So you can keep running this query and send the SMS or email once state_desc changes from COPYING to ONLINE.
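That polling loop can be sketched as below. The state supplier is a stand-in for running the sys.databases query over JDBC against the destination server, and CopyWatcher with its method names is hypothetical, not an Azure API:

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

public class CopyWatcher {
    // Polls the supplied state until it reads "ONLINE"; returns the number of
    // polls it took, or -1 if it gave up. In real use, stateQuery would execute
    // the sys.databases query, and the caller would fire the SMS/email on success.
    static int waitUntilOnline(Supplier<String> stateQuery, int maxPolls) {
        for (int i = 1; i <= maxPolls; i++) {
            if ("ONLINE".equals(stateQuery.get())) {
                return i;
            }
            // In production, sleep between polls (e.g. 30 seconds).
        }
        return -1;
    }

    public static void main(String[] args) {
        // Simulated sequence of states as the copy progresses.
        Iterator<String> states = List.of("COPYING", "COPYING", "ONLINE").iterator();
        System.out.println(waitUntilOnline(states::next, 10)); // 3 polls until ONLINE
    }
}
```

A small scheduled job (cron, Azure WebJob, etc.) running this check is the lighter-weight alternative to wiring up the push-notification subsystem.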

Data synchronization issue when use connection pool on tomcat server

I am developing a Servlet application. It obtains a database connection from the connection pool supported by the Tomcat container to query and update database data.
I have run into a problem. The Servlet gets a database connection and then adds a new table row or deletes a table row. After that, it commits the change. Later, a connection is obtained to execute queries. I find that the data returned from queries using the second connection do not reflect the change made with the first database connection.
Isn't that strange? The changes made with the first database connection were committed successfully. Why do the newly inserted rows not appear in the later query? Why do the deleted rows still appear?
Does it relate to the setting of transaction level?
Can anyone help?
03-12: More Information (#1):
I use MySQL Community Server 5.6.
My servlet runs on Tomcat 7.0.41.0.
The Resource element in the conf/server.xml is as follows:
<Resource type="javax.sql.DataSource"
name="jdbc/storewscloud"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
driverClassName="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost:3306/myappdb"
maxActive="100"
minIdle="10"
maxWait="10000"
initialSize="10"
removeAbandonedTimeout="60"
removeAbandoned="true"
logAbandoned="true"
username="root"
password="xxxxxxxxxx"
/>
I do not use any cache explicitly.
Every time the servlet gets a database connection, it turns the auto-commit mode of the connection off.
When the servlet is invoked, a database connection is obtained. The servlet uses it to update data in the database and then commits the changes. Then it uses Apache HttpClient to invoke the same servlet to do something else, which also obtains a database connection and executes a query. The later query returns 'old' data. If I refresh the web page, the latest data are shown. It looks like some party, the MySQL JDBC driver or the connection object, caches the data somewhere. I have no clue.
03-12: More Information (#2):
I did an experiment getting a connection without using the connection pool. The result is correct. So, the problem is caused by the connection pool.
To make the query return right data using the 2nd connection from the pool, I need to not only commit the data changes using the 1st connection from the pool but also CLOSE the 1st connection.
It seems that the data changes are not completely saved in the database, even though commit() is called, until close() is called.
Why?
I found that a new version of the C3P0 connection pool was released recently. I gave it a try, and it works! The problems I had do not occur. Therefore, I used it to replace the bundled connection pool of the Tomcat server. For those who encounter the same problem as I do, C3P0 may be a solution for you too.
C3P0 Project URL
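The behaviour described above is consistent with MySQL's default REPEATABLE READ isolation: a pooled connection left sitting inside an open transaction keeps reading from the snapshot taken at its first read, so commits made meanwhile by other connections stay invisible until that transaction ends. A minimal simulation of the effect (the Tx class is a toy model, not MySQL's implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of REPEATABLE READ: a transaction reads from a snapshot pinned at
// its first read, so rows committed afterwards by other connections are
// invisible until the transaction ends and a new one begins.
public class SnapshotDemo {
    static final Map<String, String> committed = new HashMap<>();

    static class Tx {
        private Map<String, String> snapshot;

        String read(String key) {
            if (snapshot == null) {
                snapshot = new HashMap<>(committed); // first read pins the snapshot
            }
            return snapshot.get(key);
        }

        void commit() {
            snapshot = null; // next read starts a fresh transaction/snapshot
        }
    }

    public static void main(String[] args) {
        Tx reader = new Tx();
        System.out.println(reader.read("row1"));   // null: row not inserted yet
        committed.put("row1", "new data");         // another connection commits an insert
        System.out.println(reader.read("row1"));   // still null: pinned snapshot
        reader.commit();
        System.out.println(reader.read("row1"));   // "new data": fresh snapshot
    }
}
```

This is why closing (or committing/rolling back) the first pooled connection made the second connection's query return fresh data: ending the transaction releases the stale snapshot.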

How to check if an JPA/hibernate database is up with second-level caching

I have a JSP/Spring application using Hibernate/JPA connected to a database. I have an external program that checks every 5 minutes whether the web server is up.
The program calls a specific URL to check if the web server is still running, and the server returns "SUCCESS". Obviously, if the server is down, nothing is returned: the request times out and an alert is raised to inform the sysadmin that something is wrong...
I would like to add another layer to this process: I would like the server to return "ERROR" if the database server is down. Is there a way using Hibernate to check if the database server is alive and well?
What I thought to do was to take an object and try to save it. This would work, but I think it's probably too much for what I want. I could also read (load) an object from the database, but since we use second-level caching for all our objects, the object would be loaded from the cache and not the database.
What I'm looking for is something like:
HibernateUtils.checkDatabase()
Does such a function exist in Hibernate?
You could use a native query, e.g.
Query query = sess.createNativeQuery("select count(*) from mytable").setCacheable(false);
Number val = (Number) query.getSingleResult();
That should force Hibernate to use a connection from the pool. If the db is down, I don't know exactly what error Hibernate will return, given that it can't get a connection from the pool. If the db is up, you get the result of your query.
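The probe can then be mapped onto the SUCCESS/ERROR contract the monitoring URL expects. A minimal sketch (DbHealthCheck and status are hypothetical names; the Callable stands in for running the non-cacheable native query in a real session):

```java
import java.util.concurrent.Callable;

public class DbHealthCheck {
    // Runs any probe (e.g. the non-cacheable native query) and maps its
    // outcome to the strings the monitoring endpoint should return.
    static String status(Callable<?> probe) {
        try {
            probe.call();
            return "SUCCESS";
        } catch (Exception e) {
            return "ERROR"; // query failed or no connection available
        }
    }

    public static void main(String[] args) {
        System.out.println(status(() -> 1));                                   // healthy probe
        System.out.println(status(() -> { throw new Exception("db down"); })); // failing probe
    }
}
```

Whatever exception Hibernate throws when it cannot obtain a connection, the wrapper catches it and the monitor sees "ERROR" instead of a timeout.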