I'm running Liquibase with Java/Spring against a Snowflake database. The first deployment works fine: I let Liquibase create the DatabaseChangeLogTable and the DatabaseChangeLogLockTable, they get created and written to, and the database objects are created.
The second time I try to run it, it acquires the change log lock but then sits for a long time at liquibase.util : Computed checksum for xxxx, and then times out after 5 minutes (due to other config settings). If I drop the DatabaseChangeLogTable and DatabaseChangeLogLockTable (interactively) and update the lock status to false, it works fine again. Any ideas on why it can't seem to finish when the DatabaseChangeLogTable and DatabaseChangeLogLockTable already exist? When I log into the database using the same credentials that Liquibase is using, I can select from and update those tables just fine.
Could you try using clearCheckSums?
clearCheckSums clears all checksums and nullifies the MD5SUM column of the DATABASECHANGELOG table so they will be re-computed on the next database update. Changesets that have already been deployed will have their checksums re-computed, and pending changesets will be deployed. For more details about this approach, please visit this link.
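If running the command isn't convenient, the rough manual equivalent (assuming Liquibase's default table name) is simply to null that column yourself, so the checksums are recomputed on the next update:
-- rough manual equivalent of clearCheckSums (sketch; adjust the table name if you changed Liquibase defaults)
UPDATE DATABASECHANGELOG SET MD5SUM = NULL;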
I am looking for a way to record the status of the pipeline in a DB table. I assume this is a very common use case.
Is there any way I can record:
status and time of completion of the complete pipeline.
status and time of completion of selected individual activities.
the ID of individual runs/executions.
The only way I have found is using a SqlActivity that depends on an individual activity, but even there I cannot access the status or timestamp of the parent node.
I am using a JDBC connection to a remote SQL Server, and the pipeline copies S3 files into the SQL Server DB.
Hmmm... I haven't tried this, but I can give you some pointers that might achieve the desired results. However, you will have to do the research and figure out the actual implementation.
Option 1
Create a ShellCommandActivity whose dependsOn is set to the last activity in your pipeline. The shell script uses the AWS CLI (aws datapipeline list-runs) to get the details of the current run; you can use filters to narrow this down.
Use data staging to move the output of the previous ShellCommandActivity to a SqlActivity, which eventually inserts it into the destination SQL Server.
Option 2
Use an AWS Lambda function to run aws datapipeline list-runs periodically, with filters, and update the destination table with the latest activities. Resource
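Either way, the destination table and the insert issued by the SqlActivity could be as simple as the sketch below (the table and column names are hypothetical, purely for illustration):
-- hypothetical status table in the destination SQL Server
CREATE TABLE dbo.pipeline_run_log (
    run_id        VARCHAR(100) NOT NULL,  -- run/execution id reported by list-runs
    activity_name VARCHAR(200) NOT NULL,  -- e.g. the copy activity or the whole pipeline
    status        VARCHAR(50)  NOT NULL,  -- e.g. FINISHED or FAILED
    finished_at   DATETIME     NULL       -- completion time reported by list-runs
);
-- insert issued by the SqlActivity, with values parsed from the list-runs output
INSERT INTO dbo.pipeline_run_log (run_id, activity_name, status, finished_at)
VALUES ('df-0123456789EXAMPLE', 'CopyS3ToSqlServer', 'FINISHED', GETDATE());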
We're developing an application based on Yocto (distro Poky 1.7), and now we have to implement the logger, so we have installed the one already provided by our meta-oe layer:
Syslog-ng 3.5.4.1
libdbi 0.8.4.1
libdbi-drivers 0.8.3
Installation went without any problems and Syslog-ng runs correctly, except that it doesn't write to an existing SQLite database.
In the syslog-ng.conf file there is just one source, the default UNIX stream socket /dev/log, and one destination, a local SQLite database (with just 4 columns). A simple program that writes 10 logs using the C API syslog() is used for test purposes.
If the database already exists, empty or not, no log message is written into it when the demo program runs;
If the database doesn't exist, Syslog-ng creates it and is able to write log messages until the board is rebooted. After that we fall back into the first condition, so no more log messages can be saved into the DB.
After some days spent on this issue, I've found that this behaviour could be due to this SQL statement (in the function afsql_dd_validate_table(...) in afsql.c):
SELECT * FROM tableName WHERE 0=1
I know that this is a useful statement to check the existence of the table called 'tableName', with WHERE 0=1 being an always-false condition that avoids scanning the whole table.
With Syslog-ng debugging enabled, it seems that the previous statement doesn't return any information about the columns, so Syslog-ng thinks they don't exist and tries to add them, causing an error since they already exist. That's why it doesn't write anything to the database.
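(For what it's worth, you can confirm from the sqlite3 shell that the table and its columns are really there even when the database is empty; 'logs' is just an example table name:
-- lists the table if it exists, even when it holds no rows
SELECT name FROM sqlite_master WHERE type = 'table' AND name = 'logs';
-- shows the column definitions regardless of whether the table is empty
PRAGMA table_info(logs);
)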
Modifying the SQL query to this one
SELECT * FROM tableName
I'm still unable to write any log messages to the database if it is empty, but now everything works correctly if a dummy record (row) is added when the database is created.
But this shouldn't be how it has to work. Has anyone faced this issue and found a solution for making Syslog-ng log to an empty SQLite database?
Many thanks to everybody
Regards
Andrea
I am working on an iOS project that has a Sybase (UltraLite) database that is synchronized with a Sybase SQL Anywhere 12 database using MobiLink.
Everything was working properly until today, when I decided to add some fields so that they synchronize to the main database.
I updated the schema of the consolidated database from the main engine, then updated the schema of the remote database from the consolidated engine, then mapped the added fields together and deployed a new UltraLite database.
Please note that this is not the first time I have done a similar task; I always add fields and sync the databases.
After the update, when I synchronize using the blank UltraLite database, MobiLink fails, giving only this error: Synchronization Failed: -1305 (MOBILINK_COMMUNICATIONS_ERROR) %1:201 %2: %3:0
I have researched error number 201 in Sybase and it points to SQLE_NOT_PUBLIC_ID,
and in the Sybase documentation the error's probable cause is:
"The option specified in the SET OPTION statement is PUBLIC only. You cannot define this option for any other user."
I have tried redeploying, and I have tried moving the engine to a Windows PC; everything gives the same error. I have no clue where this SET OPTION statement came from or how to solve it.
Any hints are appreciated!
The problem was simply caused by a small network timeout value when setting up the MobiLink parameters.
info.stream_parms = (char*) @"host=192.168.0.100;port=3309;timeout=1"
I just changed the value from timeout=1 to timeout=300 and it worked!
I was planning to use SSIS logging to get task-level details (run duration, any error message thrown, and the user who triggered the job) for my package.
SSIS was creating the dbo.sysssislog table under System Tables and it was working just fine. Suddenly it stopped creating the table under System Tables and started creating it under User Tables. It is also no longer logging some events that were logged previously when the table was created under System Tables, such as PackageStart and the User:PackageStart/User:PackageEnd events for some tasks.
Can anyone please guide me on what's going wrong here?
Whether the table shows up under System Tables versus User Tables is fairly meaningless, but if you want it to show up the same as before, mark it as an MS-shipped table:
EXECUTE sys.sp_MS_marksystemobject 'sysssislog'
The way database logging works in the package deployment model is that SSIS will attempt to log to dbo.sysdtslog90/dbo.sysssislog (depending on your version), but if that table doesn't exist, it will create it for you. There is a copy of that table in the msdb catalog which is marked as a system object. When SSIS creates its own copy, it just has the DDL somewhere in the bowels of the logging code. You'll notice it also creates a stored procedure, sp_ssis_addlogentry, to assist with logging.
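If you want to check how your copy of the table is currently flagged, you can look at the is_ms_shipped flag (illustrative query):
-- 1 = marked as a system (MS-shipped) object, 0 = a plain user table
SELECT name, is_ms_shipped FROM sys.tables WHERE name = 'sysssislog';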
As for your observation of inconsistent logging behaviour, all I can say is I've never seen that. The only reason it won't log an event is if the event doesn't occur - either a precursor condition didn't happen or the package errored out. If you can provide a reproducible scenario where it does and then doesn't log events, I'll be happy to tell you why it does/doesn't do it.
I had a package that worked perfectly until I decided to put some of its tasks inside a sequence container (more on why I wanted to do that - How to make an SSIS transaction in my case?).
Now I keep getting this error:
[Execute SQL Task] Error: Failed to acquire connection "MyDatabase". Connection may not be configured correctly or you may not have the right permissions on this connection.
Why could this be happening, and how do I fix it?
I started writing my own examples to reply to your question. Then I remembered that I met Matt Mason, the Microsoft Program Manager for SSIS, when I spoke at a SQL Saturday in New Hampshire.
Since I spent three years between 2009 and 2011 writing nothing but ETL code, I figured Matt had an article out there.
http://www.mattmasson.com/2011/12/design-pattern-avoiding-transactions/
Here is a high level summary of the approaches and the error you found.
[ERROR]
The error you found is related to MSDTC having issues. MSDTC must be configured and working correctly. Common culprits are firewalls. Check out this post.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3a5c847e-9c7e-4628-b857-4e6edaa7936c/sql-task-transaction-required?forum=sqlintegrationservices
[SOLUTION 1] - Use transactions at the package, task or container level.
Some data providers do not support MSDTC, and some tasks do not support transactions. This may hurt performance, since you are adding a new layer to support two-phase commits.
http://technet.microsoft.com/en-us/library/aa213066(v=sql.80).aspx
[SOLUTION 2] - Use the following tasks.
A - BEGIN TRAN (EXECUTE SQL)
B - YOUR DATA FLOW
C - TEST THE RETURN CODE
1 - GOOD = COMMIT (EXECUTE SQL)
2 - FAILURE = ROLLBACK (EXECUTE SQL)
You must have the RetainSameConnection property set to True on the connection.
This forces all calls through one session, or SPID. All transaction management is now on the server.
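For reference, the Execute SQL Tasks in steps A and C would run statements along these lines (a minimal sketch; how you branch on the return code is up to your control flow):
BEGIN TRANSACTION;    -- task A, before the data flow
COMMIT TRANSACTION;   -- task C1, runs on success
ROLLBACK TRANSACTION; -- task C2, runs on failure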
[SOLUTION 3] - Write all your code so that it is restartable. This does not mean you have to go out and use checkpoints.
One approach is to always use UPSERTs: insert new data, update old data, and treat deletes as just a flag in the table. This pattern allows a failed job to be executed many times and still reach the same final state.
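A minimal sketch of that UPSERT pattern in T-SQL (the table and column names are made up for illustration):
-- upsert staged rows into the target; deletes become a soft-delete flag
MERGE dbo.target AS t
USING dbo.staging AS s
    ON t.business_key = s.business_key
WHEN MATCHED THEN
    UPDATE SET t.payload = s.payload, t.is_deleted = 0
WHEN NOT MATCHED BY TARGET THEN
    INSERT (business_key, payload, is_deleted)
    VALUES (s.business_key, s.payload, 0)
WHEN NOT MATCHED BY SOURCE THEN
    UPDATE SET t.is_deleted = 1;  -- flag the row instead of physically deleting it
Running the job again after a failure simply re-applies the same merge and ends in the same final state.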
Another solution is to handle all error rows by placing them into a hospital table for manual inspection, correction, and insertion.
Why not use a database snapshot? It only stores the pages that have changed. Take a snapshot before the ETL job; if an error occurs, restore the database from the snapshot. The last step is to remove the snapshot from the system to clean house.
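A rough sketch of the snapshot approach in T-SQL (the database, logical file, and path names are placeholders):
-- take the snapshot before the ETL job runs; NAME must match the logical data file name of MyDb
CREATE DATABASE MyDb_BeforeEtl
ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_BeforeEtl.ss')
AS SNAPSHOT OF MyDb;
-- if the job fails, revert the database to its pre-ETL state
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_BeforeEtl';
-- clean house once the job has succeeded (or after the revert)
DROP DATABASE MyDb_BeforeEtl;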
In short, I hope these ideas are enough to help you out.
While the transaction option is nice, it does have some drawbacks. If you need an example, just ping me.
Sincerely
J
What package protection level are you using? Don't Save Sensitive? Encrypt Sensitive with User Key? I'd recommend changing it to Encrypt Sensitive with Password and entering a password. That way the password won't disappear.
Have you tried testing the connection to the database in the connection manager?