I have a table where one of the columns is a path to an image and I need to create a directory for the record being inserted.
Example:
Id | PicPath
1  | /Pics/1/0.jpg
2  | /Pics/2/0.jpg
This way I can be sure that the folder name is always valid and it is unique (no clash between two records).
Question is: how can I safely refer to the current id of the record being inserted? Keep in mind that this is a highly concurrent environment, and I would like to avoid multiple trips to the DB if possible.
I have tried the following:
insert into Dummy values(CONCAT('a', (select IDENT_CURRENT('Dummy'))))
and
insert into Dummy values(CONCAT('a', (select SCOPE_IDENTITY() + 1)))
The first query is not safe: when running 1000 concurrent inserts I got 58 'duplicate key' exceptions.
The second query didn't work because SCOPE_IDENTITY() returned the same value for all queries, as I suspected.
What are my alternatives here?
Try a temp table to track your inserted ids using the OUTPUT clause (the temp table must already exist with a matching column):
INSERT INTO Dummy (someval) OUTPUT inserted.identity_column INTO #temp_ids VALUES ('a')
This captures the ids generated by your insert. The inserted pseudo-table is scoped to the statement, so it is safe under concurrency.
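A minimal sketch applying this to the PicPath scenario (assuming Dummy has an identity column Id and PicPath accepts NULL; OUTPUT cannot feed the identity back into the inserted value itself, so the path is filled in by a second statement in the same batch):

DECLARE @new TABLE (Id int)

INSERT INTO Dummy (PicPath)
OUTPUT inserted.Id INTO @new
VALUES (NULL)

UPDATE d
SET d.PicPath = CONCAT('/Pics/', n.Id, '/0.jpg')
FROM Dummy d
JOIN @new n ON n.Id = d.Id

Both statements can live in one batch or stored procedure, so this is still a single round trip to the DB.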
I am trying to build a stored procedure that retrieves information from a few tables in my database. I often use a table variable to hold data, since I have to return it in a result set and also reuse it in subsequent queries instead of querying the base table multiple times.
Is this a good and common way to do that?
I started having performance issues when testing the stored procedure. (By the way, is there an efficient way to test it without having to change the parameters each time? If I don't change the parameter values, the query takes only a few milliseconds to run; I assume some sort of cache is used.)
Everything had been working well the day before, so I reworked my queries, checked that all indexes were being used correctly, etc. Then I switched the table variable to a temp table just for testing purposes, and bingo: the next 2 or 3 tests ran like a charm, and then the performance issues appeared again. So I am a bit clueless about what is happening here and why.
I am running my tests on the production DB, since the procedure doesn't update or insert anything. Here is a piece of code to give you an idea of my test case:
-- Stuff going on to get values into a table variable for the next query
DECLARE @ApplicationIDs TABLE(ID INT)
-- This table has over 110,000,000 rows and this query uses one of its indexes. It inserts between 1 and 10-20k rows
INSERT INTO @ApplicationIDs(ID)
SELECT ApplicationID
FROM Schema.Application
WHERE Columna = value
AND Columnb = value
AND Columnc = value
-- I query the table again, joined with other tables, to build the final result set. No performance issues here; ApplicationID is the clustered primary key
SELECT Columns
FROM Schema.Application
INNER JOIN SomeTable ON Columna = Columnb
WHERE ApplicationID IN (SELECT ID FROM @ApplicationIDs)
-- Here is where it starts happening. This table has around 200,000,000 rows and about 50 columns, and yes, the ApplicationID column is indexed (nonclustered). I use that index the same way in a few other contexts and it works well, just not in this one
SELECT Columns
FROM Schema.SubApplication
WHERE ApplicationID IN (SELECT ID FROM @ApplicationIDs)
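For reference, the temp-table swap I mentioned looks roughly like this (a sketch with the same placeholder names; unlike a table variable, a temp table gets its own statistics):

-- Temp-table variant of the same step
CREATE TABLE #ApplicationIDs (ID INT PRIMARY KEY)

INSERT INTO #ApplicationIDs(ID)
SELECT ApplicationID
FROM Schema.Application
WHERE Columna = value
AND Columnb = value
AND Columnc = value

-- ...same SELECTs as above, with @ApplicationIDs replaced by #ApplicationIDs

DROP TABLE #ApplicationIDs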
The server is a VM with 64 GB of RAM, and SQL Server has 56 GB allocated.
Let me know if you need further details.
Hi, I am a bit confused about the handling of indexes in Postgres. I use version 9.6. From my understanding, after reading the Postgres docs and answers from Stack Overflow, I want to verify the following:
Postgres does not support clustered indexes in the classic sense
all indexes in Postgres are non-clustered indexes
indexes do not allocate any new space but apply a sort on the table
that's why after CREATE INDEX a CLUSTER command shall follow
the docs state that after updates/inserts on a table the index is updated automatically
So I created a table with col1, col2, col3, col4 and then an index on (col2, col3). Selects involving col2 and col3 became 15 times faster.
When I execute select * from table, the results are displayed sorted first by col2 and then by col3.
When I add a new row to the table (with a col2 value (test_value) that already existed), the row went to the end of the table (checked with select * from table).
1) Did the index get updated with this new entry automatically, even though the select all showed the row at the end?
2) If I execute a query for all the rows that have test_value in col2, what will happen? Will I get all the results through the index?
There are some wrong assumptions here.
The most important one: the order of the rows in a select is indeterminate unless you include ORDER BY. You get whatever order the database engine decides is the fastest way to return the data. So if select * from table returns the last inserted row at the end, that tells you nothing about the index.
How the rows are stored and the index contents are separate things.
1) Yes, the index was updated automatically on insert.
2) Yes, because the index was already updated.
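You can check both points yourself with EXPLAIN (a sketch; the table name is a placeholder for yours):

EXPLAIN ANALYZE
SELECT * FROM mytable WHERE col2 = 'test_value';
-- An "Index Scan" or "Bitmap Index Scan" node on the (col2, col3) index
-- confirms the query goes through the index, including for rows
-- inserted after CREATE INDEX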
In SQL Server, I am inserting multiple records into a table using a batch update. How do I get back the IDs (the unique primary keys) that are generated by the batch update?
If I insert one record, I can get the last inserted id using IDENT_CURRENT('tableName'). I am not sure how to get the ids for a batch update. Please help.
For example, I have a student table with ROLE NO and NAME. ROLE NO is auto-incremented by 1 as I insert the names into the DB from my Java program. I add 3 rows at a time using a batch update from my Java code. In the DB, they get added with ROLE NO 2, 3 and 4. How do I get these newly generated ids in my Java program?
I tried getting the ids using the getGeneratedKeys method after executeBatch, but I get an exception. Is batch update + getGeneratedKeys supported?
In SQL Server, when you do an insert there is an extra option for your query: OUTPUT. This lets you capture back the data you inserted into the table, including your ids. You have to direct it into a table variable or temp table; something like this (with your table/column names, and the column types adjusted to match your schema) will get you there:

declare @MyNewRoles table (Name varchar(50), RoleNo int)

insert into tblMyTable
    (Name)
output
    inserted.Name, inserted.RoleNo
    into @MyNewRoles
select
    Name
from tblMyTableOfNames

select * from @MyNewRoles
If you don't mind adding a field to your table, you could generate a unique ID for each batch transaction (for example, a random UUID), and store that in the table as well. Then, to find the IDs associated with a given transaction you would just need something like
select my_id from my_table where batch_id = ?
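A sketch of that approach in T-SQL, using the student table from the question (column names are assumptions):

-- hypothetical schema: student(role_no int identity, name varchar(50), batch_id uniqueidentifier)
declare @batch uniqueidentifier = NEWID()

insert into student (name, batch_id)
values ('A', @batch), ('B', @batch), ('C', @batch)

-- fetch the ids generated for this batch
select role_no from student where batch_id = @batch

From Java, you would generate the UUID once (java.util.UUID.randomUUID()) and bind it as a parameter in each statement you add to the batch.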
I am very new to SQL and SQL Server, and would appreciate any help with the following problem.
I am trying to update a share price table with new prices.
The table has three columns: share code, date, price.
The share code + date = PK
As you can imagine, if you have thousands of share codes and 10 years' data for each, the table can get very big. So I have created a separate table called a share ID table, and use a share ID instead in the first table (I was reliably informed this would speed up the query, as searching by integer is faster than by string).
So, to summarise, I have two tables as follows:
Table 1 = Share_code_ID (int), Date, Price
Table 2 = Share_code_ID (int), Share_name (string)
So let's say I want to update the table/s with today's price for share ZZZ. I need to:
Look for the Share_code_ID corresponding to 'ZZZ' in table 2
If it is found, update table 1 with the new price for that date, using the Share_code_ID I just found
If the Share_code_ID is not found, update both tables
Let's ignore for now how the Share_code_ID is generated for a new code, I'll worry about that later.
I'm trying to use a merge query loosely based on the following structure, but have no idea what I am doing:
MERGE INTO [Table 1]
USING (VALUES (1, '23-May-2013', 1000)) AS SOURCE (Share_code_ID, Date, Price)
{ SEEMS LIKE THERE SHOULD BE AN INNER JOIN HERE OR SOMETHING }
ON Table 2 = 'ZZZ'
WHEN MATCHED THEN UPDATE SET Table 1.Price = 1000
WHEN NOT MATCHED THEN INSERT { TO BOTH TABLES }
Any help would be appreciated.
http://msdn.microsoft.com/library/bb510625(v=sql.100).aspx
You use Table 1 as the target table and Table 2 as the source table.
You want to take an action when a given ID is not found in Table 2, i.e. in the source table.
In the documentation that you have already read, that corresponds to the clause
WHEN NOT MATCHED BY SOURCE ... THEN <merge_matched>
and the latter corresponds to
<merge_matched>::=
{ UPDATE SET <set_clause> | DELETE }
Ergo, you cannot insert into the source table there.
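What a single MERGE can do is the update-or-insert part against Table 1 alone, once the Share_code_ID has been resolved. A minimal sketch, assuming hypothetical names Prices for Table 1 and Shares for Table 2:

DECLARE @id int = (SELECT Share_code_ID FROM Shares WHERE Share_name = 'ZZZ')

MERGE INTO Prices AS t
USING (VALUES (@id, '2013-05-23', 1000)) AS s (Share_code_ID, [Date], Price)
    ON t.Share_code_ID = s.Share_code_ID AND t.[Date] = s.[Date]
WHEN MATCHED THEN
    UPDATE SET Price = s.Price
WHEN NOT MATCHED THEN
    INSERT (Share_code_ID, [Date], Price)
    VALUES (s.Share_code_ID, s.[Date], s.Price);
-- If @id is NULL (the share is not in Shares yet), the INSERT branch fails
-- on the primary key, so the not-found case still needs the separate
-- handling discussed below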
You could use a trigger for auto-insertion, so that inserting into Table 1 also inserts into Table 2, but the trigger would not be able to fill in the proper Share_name - it just won't know it.
So you have two options, I guess.
1) Make a T-SQL code block - look at stored procedures. I think there is also a construct for executing an anonymous code block in MS SQL, like the EXECUTE BLOCK command in Firebird, but I don't know that for sure.
2) Create an updatable SQL VIEW joining Table 1 and Table 2 to show the most current data, so that when you insert a row into this view, the view's on-insert trigger actually inserts rows into both tables, and when you update data in the view, the on-update trigger modifies the underlying data.
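A sketch of option 2, reusing the hypothetical Prices/Shares names and assuming Share_code_ID is an identity column (the question left its generation open):

CREATE VIEW SharePrices AS
SELECT s.Share_name, p.[Date], p.Price
FROM Prices p
JOIN Shares s ON s.Share_code_ID = p.Share_code_ID;
GO
CREATE TRIGGER tr_SharePrices_insert ON SharePrices
INSTEAD OF INSERT AS
BEGIN
    -- add share names we have not seen before
    INSERT INTO Shares (Share_name)
    SELECT DISTINCT i.Share_name FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM Shares s WHERE s.Share_name = i.Share_name);

    -- then the prices, resolving ids by name
    INSERT INTO Prices (Share_code_ID, [Date], Price)
    SELECT s.Share_code_ID, i.[Date], i.Price
    FROM inserted i
    JOIN Shares s ON s.Share_name = i.Share_name;
END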
I have a table in MS SQL 2005 and would like to do:
update Table
set ID = ID + 1
where ID > 5
The problem is that ID is the primary key, so when I run this I get an error: when the query reaches the row with ID 8 it tries to change the value to 9, but there is already a row with value 9, so there is a constraint violation.
Therefore I would like to control the update so that it is executed in descending order.
So not ID = 1, 2, 3, 4 and so on, but rather ID = 98574 (or whatever the highest is), then 98573, 98572 and so on. In that order there would be no constraint violation.
So how do I control the order of update execution? Is there a simple way to accomplish this programmatically?
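One simple programmatic version of the descending idea is a loop from the highest ID down, one row per statement (a sketch, keeping the placeholder name Table from above):

DECLARE @id int
SELECT @id = MAX(ID) FROM [Table]
WHILE @id > 5
BEGIN
    UPDATE [Table] SET ID = @id + 1 WHERE ID = @id
    SET @id = @id - 1
END

Each single-row UPDATE frees the slot below it before the next one runs, so no step collides with an existing ID.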
Transact SQL defers constraint checking until the statement finishes.
That's why this query:
UPDATE mytable
SET id = CASE WHEN id = 7 THEN 8 ELSE 7 END
WHERE id IN (7, 8)
will not fail, even though it swaps ids 7 and 8.
Since checking happens at the end of the statement, your error suggests that some duplicate values would be left after your query finishes.
Try this:
update Table
set ID = (ID + 1) * 100000
where ID > 5

update Table
set ID = ID / 100000
where ID > 500000
Don't forget the parentheses - without them, ID + 1 * 100000 just adds 100000 instead of incrementing:
update Table
set ID = (ID + 1) * 100000
where ID > 5
If the IDs get too big here (with a 32-bit int, the multiplication overflows once ID exceeds about 21,000), you can always use a loop.
Personally, I would not update an id field this way. I would create a work table that maps old ids to new ids, storing both, and then drive all the updates from it, as sketched below. If you are not using cascade delete (which could incidentally lock your tables for a long time), then start with the child tables and work up; otherwise start with the PK table. Do not do this unless you are in single-user mode, or you can get some nasty data integrity problems if other users are changing things while the tables are not consistent with each other.
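A minimal sketch of that work table, with hypothetical table and column names:

-- map every old id to its new value
SELECT ID AS OldID, ID + 1 AS NewID
INTO IdMap
FROM MyTable
WHERE ID > 5

-- then drive each update from the map (order per the caveats above), e.g.:
UPDATE c
SET c.ParentID = m.NewID
FROM ChildTable c
JOIN IdMap m ON c.ParentID = m.OldID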
PKs are nothing to fool around with; if at all possible they should not be changed.
Before you do any changes to production data in this way, make sure to take a full backup. Messing this up can cost you your job if you can't recover.