I'm in the process of converting a SQL Server stored procedure to Oracle, but I've run into an issue that's making me question both implementations.
The SQL Server version creates a temp table inside the stored procedure to track validation errors on what can be a fairly large dataset (hundreds of thousands of records). Each validation query selects the invalid IDs into the temp table with an error message specific to that query. Once all validation is done, the errors are inserted into a real table (which doesn't have a column to store the IDs). I can then easily insert the valid rows by filtering out the IDs found in the temp error table.
I hope that makes sense. And just to reiterate, the reason I don't simply use the "real" error table is that it doesn't contain a column for me to store the IDs of the invalid rows (I can't change this).
I know that I can use a normal/global temporary table in Oracle, but the more I read into it, the more it sounds like this is bad practice. What's a good alternative to do this in Oracle? Collections?
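For reference, a rough sketch of what the SQL Server side looks like (the table and column names here are made up for illustration):

CREATE TABLE #ValidationErrors (RecordID INT, ErrorMessage VARCHAR(500));

-- Each validation query adds the offending IDs with a message specific to that check.
INSERT INTO #ValidationErrors (RecordID, ErrorMessage)
SELECT s.RecordID, 'Amount must be positive'
FROM StagingRecords s
WHERE s.Amount <= 0;

-- Copy the messages into the real error table (which has no ID column).
INSERT INTO ErrorLog (ErrorMessage)
SELECT ErrorMessage FROM #ValidationErrors;

-- Load only the rows that never failed a validation.
INSERT INTO TargetTable (RecordID, Amount)
SELECT s.RecordID, s.Amount
FROM StagingRecords s
WHERE NOT EXISTS (SELECT 1 FROM #ValidationErrors e WHERE e.RecordID = s.RecordID);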
Thanks.
Why not just insert the errors into the real target table when the error occurs, instead of into a #TEMP table and moving them later? You did not list your Oracle version, but you can create a global temporary table on the system; the rows your stored procedure inserts into it are then private to your session. However, temp tables are normally not needed in Oracle, and the entire process design sounds suspect. It is possible in Oracle to bulk insert the data while capturing individual row errors from the bulk operation. If you or some members of your team have a decent understanding of how Oracle works, then this process looks like a candidate for refactoring (redesign).
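If the errors you need to capture are ones Oracle itself raises (constraint violations, conversion failures, and so on), DML error logging is one way to do the bulk insert while capturing individual row errors. A rough sketch, with hypothetical table names:

-- One-time setup: creates ERR$_TARGET_TABLE to receive failed rows.
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TARGET_TABLE');
END;
/

-- In the load itself, failing rows go to the error log instead of aborting the statement.
INSERT INTO target_table (record_id, amount)
SELECT record_id, amount
FROM   staging_table
LOG ERRORS INTO err$_target_table ('nightly load') REJECT LIMIT UNLIMITED;

Custom business-rule validation (messages specific to each check) would still need its own queries, but this avoids the temp table for the error categories Oracle can detect on its own.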
I have created a new dataset using the Snowflake connector and used it as the source dataset in a Lookup activity.
Then I am trying to INSERT a record into Snowflake using the following query (all values are passed):
INSERT INTO SAMPLE_TABLE VALUES ('TEST', 1, 1, CURRENT_TIMESTAMP, 'TEST');
Result: the row gets inserted into Snowflake, but the pipeline fails with the error below.
Failure happened on 'Source' side. ErrorCode=UserErrorOdbcInvalidQueryString,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The following ODBC Query is not valid: 'INSERT INTO SAMPLE_TABLE VALUES('TEST',1,1,CURRENT_TIMESTAMP,'TEST');'
Could you please share your advice or any leads to solve this problem?
Thanks.
Rajesh
Lookup, as the name suggests, is for searching and retrieving data, not for inserting. However, you can enclose your INSERT code in a procedure and execute it using the Lookup activity.
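A sketch of that idea, assuming Snowflake Scripting is available and reusing the table from the question (the procedure name is made up):

-- Hypothetical wrapper so the Lookup activity has a result set to read.
CREATE OR REPLACE PROCEDURE INSERT_SAMPLE_ROW()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
  INSERT INTO SAMPLE_TABLE VALUES ('TEST', 1, 1, CURRENT_TIMESTAMP, 'TEST');
  RETURN 'inserted';  -- the Lookup activity reads this single-row output
END;
$$;

The Lookup query then becomes CALL INSERT_SAMPLE_ROW();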
However, I strongly recommend against this. Remember that when inserting data into Snowflake you create at least one micro-partition with a size of 16 MB; if you insert one row at a time, performance will be terrible and the data will take up a disproportionate amount of space. Remember, Snowflake is not a transactional (OLTP) database.
Instead, it's better to save all the records in an intermediate file and then import the entire file in one move.
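For example, once the records are staged as a file, a single COPY INTO loads them in one statement (the stage, file, and format options below are assumptions):

-- Hypothetical stage and file; one bulk load instead of row-by-row inserts.
COPY INTO SAMPLE_TABLE
FROM @my_stage/sample_data.csv
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1);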
You can use the Lookup activity to perform operations other than selects; it just HAS to have an output. I've gotten around this with a Postgres database, doing create tables, truncates, and one-off inserts by concatenating a
select current_date;
after the main query.
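In other words, the Lookup query ends up looking roughly like this (reusing the INSERT from the question purely to illustrate the pattern; I've used it against Postgres):

-- The INSERT does the work; the trailing SELECT gives the Lookup activity a row to return.
INSERT INTO SAMPLE_TABLE VALUES ('TEST', 1, 1, CURRENT_TIMESTAMP, 'TEST');
select current_date;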
Note: the Script activity will definitely be better for this; we are waiting on Postgres support in it, though.
I have a SQL Server database and I need to log any changes made to a set of tables and their fields. The information needed is the user, the date and time, the related table/field, and the new value.
I saw the Change Data Capture (CDC) feature, which seems perfect, but it requires an edition above Standard, and I have (and may only have) the Standard edition.
The only solution I see is to use triggers, but they may cause performance trouble (the related table is blocked while the log row is being inserted). Is there any other solution?
If you don't want triggers to do this, then define a stored procedure that inserts entries into the log after the data has been inserted into the table successfully.
it blocks the related table while the log is being inserted
Most probably you are using FOR INSERT for the trigger; I think you should try AFTER INSERT.
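As a rough sketch of what an AFTER trigger feeding a log table could look like (the log table, audited table, and tracked column below are all hypothetical):

-- Hypothetical audit table: who, when, which table/field, and the new value.
CREATE TABLE dbo.ChangeLog (
    LogDate    DATETIME2      NOT NULL DEFAULT SYSDATETIME(),
    LoginName  SYSNAME        NOT NULL DEFAULT SUSER_SNAME(),
    TableName  SYSNAME        NOT NULL,
    ColumnName SYSNAME        NOT NULL,
    NewValue   NVARCHAR(4000) NULL
);
GO

CREATE TRIGGER trg_Customers_Audit
ON dbo.Customers
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Log the new value of one tracked column; repeat (or unpivot) for the other columns you care about.
    INSERT INTO dbo.ChangeLog (TableName, ColumnName, NewValue)
    SELECT 'dbo.Customers', 'Email', i.Email
    FROM inserted i;
END;
GO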
I am not sure if this is possible. I have an original data set that is approximately 1.5MM records. I want to do a number of things to this dataset in preparation for using it in a report with parameters. I am using SSRS and SQL Server 2008 R2.
What I was thinking of doing is creating a temp table #XYZ that would have a subset of the original 1.5MM records and would have the additional fields I need for reporting.
I can do all of that in a stored procedure. Can I use that temp table without copying it to a table in the DB?
Just so you understand, two people may want to query the data at approximately the same time and I do not want to have conflicts with dropping or updating tables.
A temporary table is unique to a connection/session and gets dropped when the proc ends. If you run the same proc from two different windows in SSMS, each connection gets its own temporary table, so you won't have a problem... unless you use a global temporary table with two pound signs (##XYZ).
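A minimal sketch of that shape (the procedure, tables, and columns are invented); each SSRS caller gets its own private copy of #XYZ:

CREATE PROCEDURE dbo.GetReportData
    @RegionID INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Subset of the original data plus the extra reporting-only fields.
    SELECT o.OrderID,
           o.Amount,
           o.Amount * 0.1 AS EstimatedTax
    INTO   #XYZ
    FROM   dbo.Orders o
    WHERE  o.RegionID = @RegionID;

    SELECT * FROM #XYZ;   -- returned to SSRS; #XYZ is dropped when the proc ends
END;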
What is best practice for handling the dropping of a temp table? I have read that you should explicitly handle the drop, and also that SQL Server should handle the drop... what is the correct method? I was always under the impression that you should do your own cleanup of the temp tables you create in a sproc, etc. But then I found other bits that suggest otherwise.
Any insight would be greatly appreciated. I am just concerned I am not following best practice with the temp tables I create.
Thanks,
S
My view is: first, see if you really need a temp table at all, or whether you can make do with a Common Table Expression (CTE). Second, I would always drop my temp tables. Sometimes you need a temp table that outlives the current batch or procedure (e.g. ##temp), so if you run the query a second time and you have explicit code to create the temp table, you'll get an error saying the table already exists. Cleaning up after yourself is ALWAYS a good software practice.
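A common guard for that "already exists" situation, shown here for a hypothetical ##temp:

-- Drop any leftover copy before re-creating it.
IF OBJECT_ID('tempdb..##temp') IS NOT NULL
    DROP TABLE ##temp;

-- On SQL Server 2016 and later this can be shortened to:
-- DROP TABLE IF EXISTS ##temp;

CREATE TABLE ##temp (ID INT, Name NVARCHAR(100));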
EDIT: 03-Nov-2021
Another alternative is a TABLE variable, which will fall out of scope once the query completes:
DECLARE @MyTable AS TABLE (
    MyID INT,
    MyText NVARCHAR(256)
);

INSERT INTO @MyTable
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three');

SELECT *
FROM @MyTable;
CREATE TABLE (Transact-SQL)
Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped by using DROP TABLE:
A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table.
All other local temporary tables are dropped automatically at the end of the current session.
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
I used to fall into the crowd of letting the objects get cleaned up by background server processes; however, recently having issues with extreme TempDB log file growth has changed my opinion. I'm not sure if this has always been the case with every version of SQL Server, but since moving to SQL 2016 and putting the drives on a PureStorage SSD array, things run a bit differently. Processes are typically CPU bound rather than I/O bound, and explicitly dropping the temp objects results in no issues with log growth.
While I haven't dug in too deeply as to why, I suspect it's not unlike garbage collection in the .NET world, where it's synchronous when called explicitly and asynchronous when left to the system. This would matter because the explicit drop releases the storage in the log file and makes it available at the next log backup, whereas this appears not to be the case when the object isn't explicitly dropped. On most systems this is likely not a big issue, but on a system supporting a high-volume ERP and web storefront with many concurrent transactions and heavy TempDB use, it has had a big impact.
As for why to create the TempDB objects in the first place: with the amount of data in most of the queries, it would spill over into TempDB storage anyway, so it's usually more efficient to create the object with the necessary indexes rather than let the system handle it automatically.
In a multi-threaded scenario where each thread creates its own set of tables and the number of threads is throttled, not dropping your own tables means that the governor will consider your thread done and spawn more threads. However, the temp tables are still around (and thus so are the connections to the server), so you'll exceed the limits of your governor. If you manually drop the temp tables, the thread doesn't finish until they've been dropped and no new threads are spawned, maintaining the governor's ability to keep from overwhelming the SQL engine.
In my view, there is no need to drop temp tables explicitly. SQL Server will take care of dropping temp tables stored in tempdb, for example when it runs short of space while processing queries.
Everyday a company drops a text file with potentially many records (350,000) onto our secure FTP. We've created a windows service that runs early in the AM to read in the text file into our SQL Server 2005 DB tables. We don't do a BULK Insert because the data is relational and we need to check it against what's already in our DB to make sure the data remains normalized and consistent.
The problem with this is that the service can take a very long time (hours). This is problematic because it is inserting and updating into tables that constantly need to be queried and scanned by our application which could affect the performance of the DB and the application.
One solution we've thought of is to run the service on a separate DB with the same tables as our live DB. When the service is finished we can do a BCP into the live DB so it mirrors all of the new records created by the service.
I've never worked with handling millions of records in a DB before and I'm not sure what a standard approach to something like this is. Is this an appropriate way of doing this sort of thing? Any suggestions?
One mechanism I've seen is to insert the values into a temporary table - with the same schema as the target table. Null IDs signify new records and populated IDs signify updated records. Then use the SQL Merge command to merge it into the main table. Merge will perform better than individual inserts/updates.
Doing it individually, you will incur maintenance of the indexes on the table, which can be costly if it's tuned for selects. I believe with MERGE it's a bulk action.
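A sketch of that shape, assuming SQL Server 2008 or later (as the update below notes, MERGE is not in 2005) and invented table/column names:

-- Staged rows with populated IDs update existing records; NULL IDs fall through to the insert branch.
MERGE dbo.MainTable AS tgt
USING #Staging AS src
    ON tgt.ID = src.ID
WHEN MATCHED THEN
    UPDATE SET tgt.Val1 = src.Val1,
               tgt.Val2 = src.Val2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Val1, Val2)
    VALUES (src.Val1, src.Val2);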
It's touched upon here:
What's a good alternative to firing a stored procedure 368 times to update the database?
There are MSDN articles about SQL merging, so Googling will help you there.
Update: it turns out you cannot use MERGE in SQL Server 2005 (you can in 2008). Your idea of having another database is usually handled by SQL replication. Again, I've seen a copy of the current database used in production to perform a long-running action (reporting and aggregation of data in that instance); however, that wasn't merged back in. I don't know what merging capabilities are available in SQL replication, but it would be a good place to look.
Either that, or resolve the reason why you cannot bulk insert/update.
Update 2: as mentioned in the comments, you could stick with the temporary table idea to get the data into the database, and then insert/update join onto this table to populate your main table. The difference now is that SQL is working with a set, so it can tune any index rebuilds accordingly; it should be faster, even with the joining.
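The "update" half of that might look something like the following (column names invented, and assuming val1 identifies a row):

-- Set-based update of existing rows from the staged data.
UPDATE t
SET    t.val2 = tmp.val2,
       t.val3 = tmp.val3
FROM   MyTable t
INNER JOIN #Tempo tmp
        ON tmp.val1 = t.val1;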
Update 3: you could possibly remove the data checking from the insert process and move it to the service. If you can stop inserts into your table while this happens, that would let you resolve whatever is stopping you from bulk inserting (i.e., you are checking for duplicates based on column values, as you don't yet have the luxury of an ID). Alternatively, with the temporary table idea, you can add a WHERE condition to first check whether the row already exists in the database, something like:
INSERT INTO MyTable (val1, val2, val3)
SELECT tmp.val1, tmp.val2, tmp.val3
FROM #Tempo tmp
WHERE NOT EXISTS
(
    SELECT *
    FROM MyTable t
    WHERE t.val1 = tmp.val1 AND t.val2 = tmp.val2 AND t.val3 = tmp.val3
)
We do much larger imports than that all the time. Create an SSIS package to do the work. Personally, I prefer to create a staging table, clean it up, and then do the update or import. But SSIS can do all the cleaning in memory, if you want, before inserting.
Before you start mirroring and replicating data, which is complicated and expensive, it would be worthwhile to check your existing service to make sure it is performing efficiently.
Maybe there are table scans you can get rid of by adding an index, or lookup queries you can get rid of by doing smart error handling? Analyze your execution plans for the queries that your service performs and optimize those.
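For instance, if the plan shows a scan for an existence check the service runs per imported row, a covering index may remove it (the table and columns here are hypothetical):

-- Hypothetical index supporting the per-row lookup.
CREATE NONCLUSTERED INDEX IX_Customers_ExternalKey
    ON dbo.Customers (ExternalKey)
    INCLUDE (CustomerID);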