Sybase loses data when using ASE ISQL utility?

I administer a Sybase ASE (15.0.7) database that runs on Solaris 11. I am fairly new to Sybase ASE specifically, but I have a good overall working knowledge of databases such as SQL Server. Lately, while doing tasks such as uploading programmers' scripts, I was told not to use the ASE ISQL utility for this and to go straight to the command-line utility (isql) instead, because otherwise part of the data would be lost. I was pretty confused as to how it could possibly lose anything while handing a script to the DB. I tried to discuss this with the folks at work, saying that it sounds pretty weird.
None of us are heavily experienced Sybase admins, and they could not give me any reasoned answers on the matter. They just claim that ASE ISQL is a no-no.
Could that really be true?

This is absolutely not true. The Sybase command-line utility 'isql' is used very intensively by Sybase customers.
I think the confusion may come from the fact that isql does not perform 'autocommit', as is common in client tools for many other databases.
As a result, if you start an explicit transaction (BEGIN TRANSACTION) in the default unchained transaction mode, or you run in chained transaction mode, and you then exit 'isql' without committing, the transaction was never committed, so the ASE server will roll it back. This may be interpreted as 'data being lost', but that is not what really happens.
So, in ASE you should explicitly COMMIT a transaction, or it will eventually be rolled back.
(Just for completeness: in the default unchained transaction mode, if you don't use BEGIN TRANSACTION, each DML command commits immediately when it completes. That's not the same as autocommit, although it is sometimes referred to that way.)
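To make the behaviour concrete, here is a minimal sketch of an isql session (the table name my_table is made up):

begin transaction
insert into my_table values (1, 'test')  -- visible inside this session only
-- quitting isql at this point means the transaction is never committed,
-- and ASE rolls the insert back
commit transaction                       -- this is what makes the insert permanent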

Related

Is SQL "inherently transactional"?

I just read in Wikipedia, that SQL is inherently transactional.
I took this to mean that every statement made to a SQL DBMS is treated as a transaction by default. Is this correct?
An example I can think of that might make this question relevant would be if you considered an update statement like this:
UPDATE Employee SET salary=salary * 1.1 WHERE type='clerk';
If this were being processed and there was some failure that caused the DBMS to shut down, then on restart, in a transactional sense, wouldn't the records that were updated be rolled back?
At least in SQL Server, if you run a transaction, opening it with
BEGIN TRAN
and you don't commit or roll back the transaction, it will ask you (if you try to exit the window) whether you want to commit the transaction. If something caused everything to crash while we had an open transaction (meaning nothing closed it), it would not be considered committed.
Your question demonstrates another reason why many developers will issue a COMMIT TRAN only if there are no errors, so that by default every transaction will roll back.
Disclaimer here: I am ONLY referring to SQL Server - I cannot say this would hold true for other SQL databases.
The answer is no. SQL is a language; what you describe is ACID behaviour. Though many database systems behave that way, it is still perfectly possible to create one that uses SQL as its language and allows statements to be partially executed.
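As an aside, here is a minimal sketch of the commit-only-on-success pattern mentioned in the first answer, reusing the Employee example from the question (error checking via @@ERROR, as on SQL Server 2005):

BEGIN TRAN
UPDATE Employee SET salary = salary * 1.1 WHERE type = 'clerk';
IF @@ERROR <> 0
    ROLLBACK TRAN   -- something went wrong: undo the whole batch
ELSE
    COMMIT TRAN     -- no errors: make the changes permanent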

Sybase: Kick all connections before load

We have an automated script to restore a Sybase database and then run our automated tests against it. Quite often we have a web server or interactive query tool connected to this database. These connections prevent the Sybase load with "...must have exclusive use of database to run load".
How do I kick/kill/terminate all connections?
I'd like something similar to Sql Server's alter database single_user with rollback immediate. This is a local Sybase instance so we have full admin rights.
Without knowing exactly what condition the script checks for, there are two things you need to do to guarantee exclusive use of a database: (i) kill all existing users with the "kill" command, and (ii) run "master..sp_dboption 'your-db-name', 'single user', true" to put the database in single-user mode.
This is not difficult to put in a stored procedure -- kill all connections using your database as their current database or having a lock in that database, and then try to set it to single-user mode. Then check if the single-user mode succeeded -- you should allow for a repeated try since it is possible that a new user has connected again just when you're setting it to single-user mode.
This is all not difficult to implement, but you will need some understanding of the ASE system tables. But primarily, I think you need to figure out exactly what it is your load script assumes to be the case and what it checks for.
There may be other solutions as well: if you can just lock the tables affected by your load script, for example, that may also be a solution (and a simpler one). But this may or may not be possible, depending on what the load script exactly does and what it expects. So that would be question #1 to answer.
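For what it's worth, here is a rough sketch of the kill-then-lock-out sequence ('your_db' is a placeholder; this assumes ASE 12.0+ for the execute() dynamic-SQL form, needed because KILL does not accept a variable, and it assumes you have sa_role):

use master
go
declare @spid int, @cmd varchar(30)
select @spid = min(spid) from master..sysprocesses
    where dbid = db_id('your_db') and spid != @@spid
while @spid is not null
begin
    select @cmd = 'kill ' + convert(varchar(10), @spid)
    execute (@cmd)
    -- move on to the next spid even if the kill has not taken effect yet
    select @spid = min(spid) from master..sysprocesses
        where dbid = db_id('your_db') and spid > @spid
end
exec sp_dboption 'your_db', 'single user', true
go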
HTH,
Rob V.

Are Distributed Transactions a good idea for enabling rollback of database upgrades in Windows Installer Custom Actions?

I've outgrown the Sql Server custom actions available in WiX, so I'm taking the bold step of creating my own using Deployment Tools Foundation. I want to be a good citizen and make sure that mine support rollback. But what's the best way of doing it?
I need to support SQL Server 2005 and later, all editions.
The problem, as I see it, is that Windows Installer works in two phases: it does the work, storing undo information as it goes. Then, when all the pieces are in place it either commits (deleting the undo information) or does a rollback.
This means that standard transactions won't do the job. They would have to be completed inside my Execute custom action, and I wouldn't get a chance to roll them back later.
I've considered taking a copy-only backup of the database that I can restore in the rollback action if necessary, but I think this approach, whilst simple, has shortcomings. I don't know how big our databases will get, for example, so I can't guarantee that there will be space available to hold the backup on the target machine. Also, backup and restore can take a while to complete, and I don't want typical installs (where rollback doesn't happen) to be unnecessarily slow.
So that brings me to my current favoured idea: make sure the Distributed Transaction Coordinator is started up, then initialise a Distributed Transaction before making changes, then either committing it or rolling it back in the appropriate custom actions.
It seems I can use the members of the TransactionInterop class to export a cookie that will enable me to share the transaction between my different custom actions.
Can anyone with experience of this kind of thing say if it is likely to work?
Some database/instance operations cannot be done inside a transaction (e.g. CREATE/ALTER/DROP ENDPOINT), and other operations cannot be done inside a distributed transaction (e.g. SAVE TRANSACTION), so you won't be able to do them at all in your proposed plan. Also, your DB upgrade scripts will all have to work correctly when run inside an uncommitted transaction.
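To illustrate the second restriction (the savepoint name is arbitrary, and the DTC service must be running):

BEGIN DISTRIBUTED TRANSACTION;
SAVE TRANSACTION UpgradeStep1;  -- SQL Server rejects this: savepoints are not allowed in a distributed transaction
ROLLBACK;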
I would say that there are fewer risks in going down the backup/restore path (or, alternatively, creating a database snapshot and restoring from the snapshot on rollback, with the drawback of requiring EE).
Another option is to have an undo script for every do script run during the upgrade, and have the undo scripts run during rollback to remove the effects of the installation. I understand that this is a hard problem; it probably doubles the number of scripts that have to be developed (and tested...) and requires some serious developer discipline.
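As a trivial illustration of such a do/undo pair (the table and column names are made up):

-- upgrade_042_do.sql
ALTER TABLE Customer ADD Region varchar(32) NULL;

-- upgrade_042_undo.sql, run only on rollback
ALTER TABLE Customer DROP COLUMN Region;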
I've done quite a few installers with SQL scripts over the years, and I've come to the opinion that it's only suited to simple databases, like "here's my VB app with a local MSDE / MySQL database" or "here's my local store for code table lookups and temporary commits while we wait to sync it somewhere else".
Once you get into industrial-strength, heavy-lifting enterprise app situations, I like to get my DB configuration out of the installer and into the application as a first-run story. You can do a lot heavier lifting with C# there and not be constrained by MSI.

Has open source ever created a single file database that auto handles transactions?

Has open source ever created a single-file database that has better performance when handling large sets of SQL queries that aren't delivered in formal SQL transaction sets? I work with a .NET server that does some heavy replication of thousands of rows of data from another server, and it does so in a 1-by-1 fashion without formal SQL transactions. Therefore I cannot use SQLite, FirebirdDB, or JavaDB, because none of them automatically batch the transactions, and so the performance is dismal: each insert waits for the success of the previous one, etc. So I am forced to use a heavier database like SQL Server, MySQL, Postgres, or Oracle.
Does anyone know of a flat file database (that has a JDBC connect driver) that would support auto batching transactions and solve my problem?
The main thing I don't like about the heavier databases is the lack of the ability to see inside the database with a one-mouse-click operation, like you can with SQLite.
I tried creating a SQLite database and then set PRAGMA read_uncommitted=TRUE; and it didn't result in any performance improvement.
I think that Firebird can work for this. Firebird has a good .NET provider and many solutions for replication. Maybe you can read this article about Firebird transactions.
Try Hypersonic DB (HSQLDB) - http://hsqldb.org/doc/guide/ch02.html#N104FC
If you want your transactions to be durable (i.e. survive a power failure) then the database will HAVE to write to the disc after each transaction (this is usually a log of some sort).
If your transactions are very small, this will result in a huge number of writes and very poor performance even with a battery-backed RAID controller or SSD, and worse still on consumer-grade hardware.
The only way of avoiding this is to somehow disable the flush at transaction commit (which of course breaks durability). I have no idea which databases support this, but it should be easy to find out.
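SQLite, for one, lets you try both approaches; a minimal sketch (the table t is made up):

PRAGMA synchronous = OFF;  -- don't force a disc flush on every commit (sacrifices durability)
BEGIN;
INSERT INTO t VALUES (1, 'a');
INSERT INTO t VALUES (2, 'b');
-- ... thousands more rows ...
COMMIT;                    -- one transaction for the whole batch means one flush instead of thousands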

SQL Server UNDO

I am a part-time developer (full-time student), and the company I am working for uses SQL Server 2005. The thing I find strange about SQL Server is that if you run a script that involves inserting, updating, etc., there isn't any real way to undo it except for a rollback or using transactions.
You might ask what's wrong with those 2 options. Well, if for example someone runs an UPDATE statement and forgets to put in a WHERE clause, you suddenly find yourself with 13k rows updated and all the clients in that table named 'bob'. Now you have the wrath of 13k bobs to face, since that "someone" forgot to use a transaction, and if you roll the database back you are going to undo critical changes that were needed in other fields.
In my studies I use Oracle. In Oracle you can first run the script and then commit it if you find that there aren't any mistakes. I was wondering if there was something I missed in SQL Server, since I am still relatively new to the working developer world.
I don't believe you missed anything. Using transactions to protect against these kinds of errors is the best mechanism, and it is the same mechanism Oracle uses to protect the end user. The difference is that Oracle implicitly begins a transaction for you, whereas in SQL Server you must do it explicitly.
SET IMPLICIT_TRANSACTIONS is what you are probably looking for.
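A minimal sketch of what that looks like (the Clients table echoes the example from the question):

SET IMPLICIT_TRANSACTIONS ON;
UPDATE Clients SET name = 'bob';  -- oops, forgot the WHERE clause
-- 13k rows affected, but nothing is permanent yet:
ROLLBACK;                         -- undo it (or COMMIT once you have verified the result)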
I'm no database/SQL Server expert, and I'm not sure if this is what you're looking for, but there is the possibility of creating snapshots of a database. A snapshot allows you to revert the database to its state at the time the snapshot was taken.
Check this link for more information:
http://msdn.microsoft.com/en-us/library/ms175158.aspx
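For reference, creating and reverting to a snapshot looks roughly like this (all names are placeholders; NAME must match the logical data-file name of the source database, and database snapshots require Enterprise Edition):

CREATE DATABASE MyDb_Snap
    ON (NAME = MyDb_Data, FILENAME = 'C:\snapshots\MyDb_Snap.ss')
    AS SNAPSHOT OF MyDb;

-- ... run the risky script here ...

RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snap';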
I think transactions work well. You could roll the DB back (to a previous backup or a point in the log), but I think transactions are much simpler.
How about this: never make changes to a production database that have not first been tested on your development server, and always take a backup before trying anything unproven.
From what I understand, SQL Server 2008 added an Auditing feature that logs all changes made by users to the various databases and also has the option to roll them back after the fact.
Again, this is from what I've read or overheard from our DBA, but might be worth looking into.
EDIT: After looking into it, it appears to only give the ability to roll back schema changes (via DDL triggers), not data modifications.
If I am doing something with any risk in SQL Server, I write the script like this:
BEGIN TRAN
Insert .... whatever
Update .... whatever
-- COMMIT
The last line is a comment on purpose: I first run the lines before, then make sure there's no error, and then highlight just the word Commit and execute that. This works because in Management Studio you can select a part of the T-SQL and just execute the selected portion.
There are a couple of advantages over implicit transactions: SET IMPLICIT_TRANSACTIONS works too, but it's not the default for SQL Server, so you have to remember to turn it on or set options to do that. Also, if it's on all the time, I find it's easy for people to "forget" and leave uncommitted transactions open, which can block others. That's mainly because it's not the default behavior, and SQL Server folks aren't used to it.
