Error when I select the DDL of a table - Sybase

Every time I try to get the DDL of a table, I get this error:
Cannot create temporary table '#tmp'. Prefix name '#tmp' is already in use by another temporary table '#tmp'.
There is already another cursor with the name 'ccolumn' at the nesting level '0'.
There is already another cursor with the name 'cindex' at the nesting level '0'.
There is already another cursor with the name 'indexes' at the nesting level '0'.
There is already another cursor with the name 'cprotect' at the nesting level '0'.
Attempt to insert duplicate key row in object '#tmp00000450018282794' with unique index '_tmp_19748930791'
How can I solve it?

#tmp tables are session-specific temporary working tables used by applications to store data being manipulated. If you receive an error that the table already exists, it indicates that either a previous attempt to generate the DDL was not yet complete when a new one was started, or the previous attempt has hung and needs to be killed.
Assuming you are in a development environment, I would first try closing and restarting PowerBuilder to see if that clears the error. If it does not, the next thing to try would be to restart the database server that PowerBuilder is using as its data source.
Alternatively, you could connect to the database manually (via isql), try killing the process that isn't closing out, and drop the #tmp table manually.
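In isql, that manual cleanup might look like the sketch below; the spid value 42 is a placeholder and should be taken from the actual sp_who output. Note that killing the hung session also cleans up its session-specific #tmp tables.

```sql
-- list current processes; find the hung session that owns #tmp
sp_who
go
-- terminate that session (42 is a placeholder spid from the sp_who output)
kill 42
go
```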

Related

How to write code to time travel using a specific transaction ID

I would like to use the Time Travel feature on Snowflake and restore the original table.
I deleted and recreated the table using the following commands:
DROP TABLE "SOCIAL_LIVE"
CREATE TABLE "SOCIAL_LIVE" (...)
I would like to go back to the original table before dropping table.
I used the following code (the transaction ID is masked as 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'):
Select "BW"."PUBLIC"."SOCIAL_LIVE".* From "BW"."PUBLIC"."SOCIAL_LIVE";
select * from SOCIAL_LIVE before(statement => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');
Received an error message:
Statement xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx cannot be used to specify time for time travel query.
How can we go back to the original table and restore it on snowflake?
The documentation states:
After dropping a table, creating a table with the same name creates a
new version of the table. The dropped version of the previous table
can still be restored using the following method:
Rename the current version of the table to a different name.
Use the UNDROP TABLE command to restore the previous version.
If you need further information, this page is useful:
https://docs.snowflake.net/manuals/sql-reference/sql/drop-table.html#usage-notes
You will need to undrop the table in order to access that data, though. Time-travel is not maintained by name alone. So, once you dropped and recreated the table, the new table has its own, new time travel.
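Following the documented method, the restore would look roughly like this (table names taken from the question; the new name SOCIAL_LIVE_NEW is just an illustration):

```sql
-- move the recreated table out of the way
ALTER TABLE "BW"."PUBLIC"."SOCIAL_LIVE" RENAME TO "BW"."PUBLIC"."SOCIAL_LIVE_NEW";
-- restore the most recently dropped table that had this name
UNDROP TABLE "BW"."PUBLIC"."SOCIAL_LIVE";
```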
Looks like there are three common reasons that error is seen, with solutions:
1. The table has been dropped and recreated. Solution: see this answer.
2. The Time Travel period has been exceeded. No solution: target a statement within the Time Travel period for the table.
3. The wrong statement type is being targeted. Only certain statement types can be targeted; currently these include SELECT, BEGIN, COMMIT, and DML (INSERT, UPDATE, etc.). See documentation here.
Statement xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx cannot be used to specify time for time travel query.
Usually we get the above error when trying to time travel to a point earlier than the object's creation time. Try the Time Travel option with an OFFSET instead.
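An OFFSET-based query looks like this; the 30-minute window is illustrative and must fall within the table's retention period and after the table was created:

```sql
-- read the table as it was 30 minutes ago (offset is in seconds)
SELECT * FROM SOCIAL_LIVE AT(OFFSET => -60*30);
```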

"Attempting to set a non-NULL-able column's value to NULL."

Azure SQL Server. But I've seen this with SQL Server 2017. I am creating System-Versioned tables. After I create the table, I'll create a MERGE statement to deposit data into the table (the MERGE statement will be used again to keep the table updated). Many times I will get an error message stating: Attempting to set a non-NULL-able column's value to NULL. If I simply drop the table and recreate, I don't see the error again.
This doesn't happen with every table I create, but it's frequent. Until recently I have never seen this error before. Any ideas what's causing it?
I have had the same problem.
"The only way I got around this was to just remove the index from the temporal history table" - that was my solution too.

SSIS returns an incorrect error

I created an SSIS package which creates a table MyTable in SQL Server with a column BaseVariantVersionID. The program first inserts data into this table.
At the end of the package, I drop the column BaseVariantVersionID from the table.
The first debug run is OK. But on the second attempt, SSIS returns a validation error. It doesn't allow the table to be recreated with BaseVariantVersionID, because validation sees the current table, where the column the next step inserts into is no longer present.
Maybe you know some property to disable current db checking?
Update
I drop the column after all the steps, and on the first step I recreate the table with the column.
But the system still returns an error; it looks like it uses the existing table for validation.
There are several possible issues I can think of.
First, you must delete the table if it already exists before creating it at the beginning of the package.
Example Execute SQL Task:
IF OBJECT_ID('MyTable', 'U') IS NOT NULL
    DROP TABLE MyTable
GO
CREATE TABLE MyTable (etc...)
GO
ALTER TABLE MyTable ADD (etc...)
GO
Second, you can set DelayValidation = True in the Data Flow Task Properties Window (this is on the bottom right usually after clicking a Dataflow Task in the design area). That delays validation until run time.
If at the moment, you have a missing field error, you can add the column manually in SQL Server Management Studio, then double-click the task with the error and any missing field error should disappear (now that the column exists). After that you can save the package and exit.
For an Execute SQL Task, you can set BypassPrepare to True. Sometimes that will allow you to design and build a package which doesn't validate at design time, but will validate okay at run time.
But I have to question the need to create columns and tables at run time. Are you sure you need to do this? It is a more typical use case to have SSIS move data around in existing table structures, not create them at run time.
If I read your description correctly, you're dropping the column on the first pass, then getting an error trying to recreate the table on the second pass? You will get an error on run #2 if you try to create a table that already exists, even if it no longer has the column in it.
You'll need to drop the table if you're going to want to recreate it or change your code to add the column back if the table exists.
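A defensive Execute SQL Task along those lines might look like the sketch below; the table and column names come from the question, but the int type and the remaining column list are assumptions:

```sql
IF OBJECT_ID('dbo.MyTable', 'U') IS NULL
    -- first run, or table was dropped: create it with the column
    CREATE TABLE dbo.MyTable (BaseVariantVersionID int NULL /* etc... */);
ELSE IF COL_LENGTH('dbo.MyTable', 'BaseVariantVersionID') IS NULL
    -- table survived a previous run without the column: add it back
    ALTER TABLE dbo.MyTable ADD BaseVariantVersionID int NULL;
```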

Sql Server rows being auto inserted without a trigger

I have a table in sql server, for which, if I delete a row from it, a new row is inserted with the same data, and userid as the one I deleted. There are no triggers on this table. In addition, I did a search of all database objects that reference this table, and there are no triggers anywhere in the database that reference this table, only some stored procedures, none of which have any code that would cause this behavior.
To be clear, if I run this query:
delete from my_table where id = 1
the row with the id of 1 will be deleted, but a new row will be inserted that has the same userid, and date as the deleted row. No application code involved, just a straight sql delete statement run directly on the database causes this.
What else besides a trigger could be causing this to happen? I've never encountered something like this before.
It took me a long time, but I discovered this was being caused by a "rogue" LINQ-to-SQL DLL that was still running in spite of its parent app being killed.
The good news is, there isn't some weird non-trigger way to insert rows on delete in SQL, so we can all resume our normal lives now, knowing all is as it was.

SQL Server wiped my table after (incorrectly) creating a new column .. what the heck happened?

I added a new column to an existing table in the SQL Server Management Studio table designer. Type INT, not null. Didn't set a default value.
I generated a change script and ran it, it errored out with a warning that the new column does not allow nulls, and no default value was being set. It said "0 rows affected".
Data was still there, and for some reason my new column was visible in the "columns" folder on the database tree on the left of SSMS even though it said "0 rows affected" and failed to make the database change.
Because the new column was visible in the list, I thought I would go ahead and update all rows and add a value in.
UPDATE MyTable SET NewColumn = 0
Boom.. table wiped clean. Every row deleted.
This is a big problem because it was on a production database that wasn't being backed up unbeknownst to me. But.. recoverable with some manual entry, so not the end of the world.
Anyone know what could have happened here.. and maybe what was going on internally that could have caused my update statement to wipe out every row in the table?
An UPDATE statement can't delete rows unless there is a trigger that performs the delete afterward, and you say the table has no triggers.
So it had to be the scenario I laid out for you in my comment: The rows did not get loaded properly to the new table, and the old table was dropped.
Note that it is even possible for it to have looked right for you, where the rows did get loaded at one point--if the transaction was not committed, and then (for example) later when your session was terminated the transaction was automatically rolled back. The transaction could have been rolled back for other reasons, too.
Also, I may have gotten the order incorrect: it may create the new table under a new name, load the rows, drop the old table, and rename the new one. In this case, you may have been querying the wrong table to find out if the data had been loaded. I can't remember off the top of my head right now which way the table designer structures its scripts--there's more than one way to skin this cat.
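For reference, the change script the table designer generates for this kind of edit roughly follows the pattern below (simplified, with column lists elided). If the INSERT step fails here, because NewColumn is NOT NULL with no default, you can end up with an empty copy of the table under the original name:

```sql
BEGIN TRANSACTION;
-- build a new copy of the table that includes the new column
CREATE TABLE dbo.Tmp_MyTable (/* existing columns */ NewColumn int NOT NULL);
IF EXISTS (SELECT * FROM dbo.MyTable)
    EXEC('INSERT INTO dbo.Tmp_MyTable (/* columns */)
          SELECT /* columns */ FROM dbo.MyTable WITH (HOLDLOCK TABLOCKX)');
-- the old table is dropped and the copy takes its name
DROP TABLE dbo.MyTable;
EXECUTE sp_rename N'dbo.Tmp_MyTable', N'MyTable', 'OBJECT';
COMMIT;
```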