I have two old tables and want to "sync" (or rather, populate) two new tables, like this:
tbl_old_event_categories (id, title)
tbl_old_events (id, title, cat_id)
tbl_new_event_categories (id, category)
tbl_new_events (event_id, event, category_id)
The problem is that the new tables might already contain values, so the IDs may change. Because of that I can't use ON DUPLICATE KEY UPDATE. :( I need to check each record separately. The tables have no unique constraints (and I can't change that). :/
I created a JOIN to get:
tbl_old_fullevents (event, category) //no IDs (integer) only the NAMEs (string)
But how do I create an INSERT INTO [tbl_new_events] that checks for an existing [event] and [category] value? It would be something like:
IF (tbl_old_fullevents.event NOT IN (tbl_new_events.event)) {
    INSERT INTO tbl_new_events VALUES (
        NULL, // ID
        tbl_old_fullevents.event,
        IF (tbl_old_fullevents.category IN (tbl_new_event_categories.category)) {
            tbl_new_event_categories.id // matched
        } ELSE {
            INSERT INTO tbl_new_event_categories VALUES (
                NULL, // ID
                tbl_old_fullevents.category
            );
            tbl_new_event_categories.id // last INSERT id
        }
    );
}
Use the MERGE syntax. See http://technet.microsoft.com/en-us/library/bb510625.aspx for examples.
You can't use ON DUPLICATE KEY UPDATE because that's MySQL syntax.
I'm not very experienced with SQLite, and I want to perform some operations based on a local database. My data consists of many entries sharing one ID, plus some additional IDs that qualify the data. The combination of these IDs should be unique for each entry in the table.
I would like to select (if it exists) a row based on a combination of IDs, and either insert or update some columns in that row.
I haven't really tried anything because I can't find where to start.
But to illustrate what I mean, I would think of something like this:
UPDATE OR REPLACE INTO my_table (ID,Part_ID,Location_ID,Torque) VALUES (2,6,4,100) WHERE (ID,Part_ID,Location_ID) = (2,6,4)
First, you need a unique constraint for the combination of the columns ID, Part_ID and Location_ID, which you can define with a unique index:
CREATE UNIQUE INDEX idx_my_table ON my_table (ID, Part_ID, Location_ID);
Then use UPSERT:
INSERT INTO my_table (ID, Part_ID, Location_ID, Torque) VALUES (2, 6, 4, 100)
ON CONFLICT(ID, Part_ID, Location_ID) DO UPDATE
SET Torque = EXCLUDED.Torque;
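The whole pattern can be sketched end-to-end with Python's bundled sqlite3 module (ON CONFLICT … DO UPDATE requires SQLite 3.24+, which ships with all recent Python versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE my_table (ID INT, Part_ID INT, Location_ID INT, Torque INT)")
cur.execute("CREATE UNIQUE INDEX idx_my_table ON my_table (ID, Part_ID, Location_ID)")

upsert = """
INSERT INTO my_table (ID, Part_ID, Location_ID, Torque) VALUES (?, ?, ?, ?)
ON CONFLICT(ID, Part_ID, Location_ID) DO UPDATE SET Torque = excluded.Torque
"""
cur.execute(upsert, (2, 6, 4, 100))   # first run inserts the row
cur.execute(upsert, (2, 6, 4, 150))   # second run hits the index and updates Torque
print(cur.execute("SELECT * FROM my_table").fetchall())  # → [(2, 6, 4, 150)]
```

The `excluded` pseudo-table refers to the row that failed to insert, which is what lets the same statement serve as both insert and update.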
I am new to using GraphQL, and I am trying to set up a unique key for one of my tables.
For context, the table's key will be determined on the combination of part_number + organization_id. So each organization ID can only have one part_number, but different organizations can have the same part_number.
The issue I am running into is that organization_id is a nullable field. When it is null, the row represents global data, but I still essentially want NULL to behave like any other organization_id value.
I.e., if I have the part_number ABC123, I want to enforce that only one such row exists per organization_id AND that only one such row exists with no organization_id.
Currently, I have a unique key set to product_pn_organization_id, and everything works fine for products with an organization ID. But as soon as the organization ID is null, GraphQL completely ignores the unique key constraint: when I run an insert mutation with the product_pn_organization_id constraint on part_number: ABC123, organization_id: null (assuming this row already exists), instead of updating the row it creates a new one.
If I run the same insert with an organization_id (part_number: ABC123, organization_id: 1, again assuming this row already exists) it will update the columns instead of creating a new row.
Right now, the only solution I can think of is creating an organization that represents 'global' and having that as the default organization_id so that organization_id is never actually null. However, I would rather avoid that if possible.
Hoping someone has some advice on how to move forward here. Thanks!
Per request, here are the mutations:
This mutation inserts a new row with organization_id set to null.
mutation MyMutation {
insert_products(objects: {pn: "ABC123", manufacturer_pn: "MANABC123"}, on_conflict: {constraint: products_pn_organization_id_key, update_columns: manufacturer_pn}) {
returning {
id
}
}
}
Ideally, this query would update the row from the first query, but instead creates a new row.
mutation MyMutation {
insert_products(objects: {pn: "ABC123", manufacturer_pn: "MANABC124"}, on_conflict: {constraint: products_pn_organization_id_key, update_columns: manufacturer_pn}) {
returning {
id
}
}
}
This query inserts the same PN but with an organization_id.
mutation MyMutation {
insert_products(objects: {pn: "ABC123", manufacturer_pn: "MANABC123", organization_id: "00000000-0000-0000-0000-000000000000"}, on_conflict: {constraint: products_pn_organization_id_key, update_columns: manufacturer_pn}) {
returning {
id
}
}
}
Unlike the second query, this query actually updates the row belonging to the organization_id/pn combination instead of creating a new row.
mutation MyMutation {
insert_products(objects: {pn: "ABC123", manufacturer_pn: "MANABC124", organization_id: "00000000-0000-0000-0000-000000000000"}, on_conflict: {constraint: products_pn_organization_id_key, update_columns: manufacturer_pn}) {
returning {
id
}
}
}
You are probably running on top of a Postgres database, and you need to upgrade to version 15, which adds UNIQUE NULLS NOT DISTINCT, to get support for this. More info here, and an excerpt:
In Postgres 14 and older versions unique constraints always treat NULL
values as not equal to other NULL values. If you're inserting a NULL
value into a table and you have a unique constraint, the NULL value is
considered to be distinct on its own. NULL is always different from
another NULL. When you're inserting five records into the
"old_null_style" table where "val1" is just always the same value
"Hello" and then "val2" is always NULL.
Even though you have a unique constraint that actually supports you
inserting that five times or as many times as you'd like, because you
have that NULL value that makes each row distinct from another and
because the unique constraint includes both "val1" and "val2", all the
rows are unique.
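SQLite treats NULLs in a unique index the same way Postgres 14 and older do, so the behavior described in the question is easy to reproduce; a minimal sketch with Python's sqlite3, borrowing the question's table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE products (pn TEXT, organization_id TEXT)")
cur.execute("CREATE UNIQUE INDEX products_pn_org ON products (pn, organization_id)")

# With organization_id present, a second identical insert violates the index.
cur.execute("INSERT INTO products VALUES ('ABC123', 'org1')")
try:
    cur.execute("INSERT INTO products VALUES ('ABC123', 'org1')")
except sqlite3.IntegrityError:
    print("duplicate rejected")

# With organization_id NULL, each NULL counts as distinct: both rows land.
cur.execute("INSERT INTO products VALUES ('ABC123', None and None)" if False else
            "INSERT INTO products VALUES ('ABC123', NULL)")
cur.execute("INSERT INTO products VALUES ('ABC123', NULL)")
print(cur.execute(
    "SELECT COUNT(*) FROM products WHERE organization_id IS NULL").fetchone()[0])  # → 2
```

This is exactly why the on_conflict clause in the mutation never fires for the NULL rows: as far as the constraint is concerned, there is no conflict.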
I have the following table:
create table movie(
movie_id integer primary key,
title varchar(500) not null,
kind varchar(30),
year integer not null
);
I want to create a function:
addMovie(title, kind, year)
The function must insert a new row into the movie table with a unique movie_id, and then return the movie_id.
This is the first time I'm using PL/SQL, so I'm kind of lost at the moment - couldn't find any (good) examples so far.
Thanks!
Your function needs to do 3 things
Generate the unique movie id
Insert into the table
Return the generated id
Let's take it one step at a time
Generate the unique movie id
The best way to do it is to use a sequence, which will generate an id for you. Read up on sequences.
Insert into the table
This is done by a straightforward insert. Since the movie id is generated by the sequence, we use sequence_name.nextval in the insert statement. Thus the insert statement looks like
INSERT INTO movie(movie_id, title, kind, year) values (movie_id_seq.nextval, title, kind, year)
Return the generated id back
You can make use of the Returning clause in a DML to return the generated id back into a variable. And then use the RETURN statement to return the value back.
So this is how your function will look like
FUNCTION addmovie(p_title movie.title%TYPE,
                  p_kind  movie.kind%TYPE,
                  p_year  movie.year%TYPE)
  RETURN NUMBER
IS
  v_id movie.movie_id%TYPE;
BEGIN
  INSERT INTO movie
         (movie_id,
          title,
          kind,
          year)
  VALUES (movie_id_seq.NEXTVAL,
          p_title,
          p_kind,
          p_year)
  RETURNING movie_id
  INTO v_id;

  RETURN v_id;
END;
Note that this is a fairly basic function, with no error checking, exception handling - I'll leave it up to you.
Note that max(movie_id)+1 isn't the best way forward, but if, for the purposes of the assignment, you can't use a sequence, you'll need
SELECT max(movie_id)+1 INTO v_id FROM movie;
before the insert statement.
Also, because of the DML, you can't use this function as part of a query.
I have two tables with identical structure which I need to copy several columns from one to the other. Something like this:
UPDATE:
Apparently I need to update the new table: I want to copy the data from the old table into the new table for each record where the name matches. I'm not sure what command to use. Here is an approximation:
copy from OLDTABLE columns category, key into NEWTABLE when the names match
Any help or suggestions would be appreciated. Thanks in advance!
Your query does not have proper syntax. It has two select clauses, the second of which does not have a from:
insert into newdata
select category, key
from olddata
select category, key
where olddata.name = newdata.name
I am guessing that you want something like:
insert into newdata(category, key)
select olddata.category, olddata.key
from olddata
where olddata.name in (select name from newdata)
Do you really want an insert, or are you looking for an update?
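If an update is what's wanted, one way is UPDATE … FROM (supported in SQLite 3.33+; Postgres and SQL Server have similar forms). A sketch with Python's sqlite3, with the table and column names guessed from the question ("key" is quoted because it is a reserved word in some dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE olddata (name TEXT, category TEXT, "key" TEXT);
CREATE TABLE newdata (name TEXT, category TEXT, "key" TEXT);
INSERT INTO olddata VALUES ('a', 'cat1', 'k1'), ('b', 'cat2', 'k2');
INSERT INTO newdata VALUES ('a', NULL, NULL), ('c', NULL, NULL);
""")

# Copy category and key from olddata into the matching newdata rows only.
cur.execute("""
UPDATE newdata
SET category = olddata.category, "key" = olddata."key"
FROM olddata
WHERE olddata.name = newdata.name
""")
print(cur.execute("SELECT * FROM newdata ORDER BY name").fetchall())
# → [('a', 'cat1', 'k1'), ('c', None, None)]
```

Rows in newdata with no matching name ('c' here) are left untouched, which matches the "when the names match" requirement.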
I have a table Messages with columns ID (primary key, autoincrement) and Content (text).
I have a table Users with columns username (primary key, text) and Hash.
A message is sent by one Sender (user) to many recipients (user) and a recipient (user) can have many messages.
I created a table Messages_Recipients with two columns: MessageID (referring to the ID column of the Messages table and Recipient (referring to the username column in the Users table). This table represents the many to many relation between recipients and messages.
So, the question I have is this. The ID of a new message will be created after it has been stored in the database. But how can I hold a reference to the MessageRow I just added in order to retrieve this new MessageID? I can always search the database for the last row added of course, but that could possibly return a different row in a multithreaded environment?
EDIT: As I understand it for SQLite you can use the SELECT last_insert_rowid(). But how do I call this statement from ADO.Net?
My Persistence code (messages and messagesRecipients are DataTables):
public void Persist(Message message)
{
pm_databaseDataSet.MessagesRow messagerow;
messagerow=messages.AddMessagesRow(message.Sender,
message.TimeSent.ToFileTime(),
message.Content,
message.TimeCreated.ToFileTime());
UpdateMessages();
var x = messagerow;//I hoped the messagerow would hold a
//reference to the new row in the Messages table, but it does not.
foreach (var recipient in message.Recipients)
{
var row = messagesRecipients.NewMessages_RecipientsRow();
row.Recipient = recipient;
//row.MessageID= How do I find this??
messagesRecipients.AddMessages_RecipientsRow(row);
UpdateMessagesRecipients();//method not shown
}
}
private void UpdateMessages()
{
messagesAdapter.Update(messages);
messagesAdapter.Fill(messages);
}
One other option is to look at the system table sqlite_sequence. Your SQLite database will have that table automatically if you created any table with an AUTOINCREMENT primary key. SQLite uses it to keep track of the autoincrement counter so that it won't reuse a primary key even after you delete rows or after an insert fails (read more about this here: http://www.sqlite.org/autoinc.html).
So with this table there is the added benefit that you can find out your newly inserted item's primary key even after you've inserted something else (into other tables, of course!). After making sure that your insert succeeded (otherwise you will get a wrong number), you simply need to do:
select seq from sqlite_sequence where name = 'table_name'
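A minimal sqlite3 sketch of this (table name made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# sqlite_sequence only exists once a table is declared with AUTOINCREMENT.
cur.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY AUTOINCREMENT, content TEXT)")
cur.execute("INSERT INTO messages (content) VALUES ('hello')")
cur.execute("INSERT INTO messages (content) VALUES ('world')")
seq = cur.execute(
    "SELECT seq FROM sqlite_sequence WHERE name = 'messages'").fetchone()[0]
print(seq)  # → 2
```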
With SQL Server you'd SELECT SCOPE_IDENTITY() to get the last identity value for the current process.
With SQlite, it looks like for an autoincrement you would do
SELECT last_insert_rowid()
immediately after your insert.
http://www.mail-archive.com/sqlite-users#sqlite.org/msg09429.html
In answer to your comment, to get this value you would use System.Data.SQLite code like:
using (SQLiteConnection conn = new SQLiteConnection(connString))
{
    string sql = "SELECT last_insert_rowid()";
    SQLiteCommand cmd = new SQLiteCommand(sql, conn);
    conn.Open();
    long lastID = (long)cmd.ExecuteScalar();
}
Note that last_insert_rowid() returns a 64-bit integer, so read the scalar as a long (or use Convert.ToInt32) rather than casting the boxed value to Int32.
I've had issues using SELECT last_insert_rowid() in a multithreaded environment. If another thread inserts into a different table that has an autoincrement column, last_insert_rowid will return the autoincrement value from that other table.
Here's where they state that in the doco:
If a separate thread performs a new INSERT on the same database connection while the sqlite3_last_insert_rowid() function is running and thus changes the last insert rowid, then the value returned by sqlite3_last_insert_rowid() is unpredictable and might not equal either the old or the new last insert rowid.
That's from sqlite.org doco
According to Android Sqlite get last insert row id there is another query:
SELECT rowid from your_table_name order by ROWID DESC limit 1
Sample code from #polyglot's solution:
SQLiteCommand sql_cmd = conn.CreateCommand();
sql_cmd.CommandText = "select seq from sqlite_sequence where name = 'myTable'";
int newId = Convert.ToInt32(sql_cmd.ExecuteScalar());
sqlite3_last_insert_rowid() is unsafe in a multithreaded environment (and is documented as such by SQLite).
However, the good news is that you can play the odds; see below.
ID reservation is NOT implemented in SQLite. You can also avoid the internal PK entirely by using your own UNIQUE primary key, if you know something is always distinct in your data.
Note:
See if the RETURNING clause solves your issue:
https://www.sqlite.org/lang_returning.html
As this is only available in recent versions of SQLite and may have some overhead, consider relying on the fact that it is really bad luck if another insertion lands in between your requests to SQLite.
Also consider whether you absolutely need to fetch SQLite's internal PK; perhaps you can design your own predictable PK:
https://sqlite.org/withoutrowid.html
If you need a traditional AUTOINCREMENT PK, then yes, there is a small risk that the id you fetch belongs to another insertion. Small, but unacceptable.
A workaround is to call sqlite3_last_insert_rowid() twice:
#1 BEFORE the insert, then #2 AFTER the insert,
as in:
sqlite3_int64 IdLast = sqlite3_last_insert_rowid(m_db); // before the insert (this id is already used)
const int rc = sqlite3_exec(m_db, sql, NULL, NULL, &m_zErrMsg);
sqlite3_int64 IdEnd = sqlite3_last_insert_rowid(m_db);  // after the insert: most probably the right one
In the vast majority of cases IdEnd == IdLast + 1. This is the "happy path", and you can rely on IdEnd being the ID you are looking for.
Otherwise you need to do an extra SELECT, using criteria based on the IdLast..IdEnd range (any additional criteria in the WHERE clause are good to add if you have them).
Use ROWID (which is an SQLite keyword) to SELECT the relevant id range:
"SELECT my_pk_id FROM Symbols WHERE ROWID > %lld AND ROWID <= %lld;", IdLast, IdEnd);
// notice the > in ROWID > %lld: we already know IdLast is NOT the one we are looking for.
As the second call to sqlite3_last_insert_rowid() is made right after the INSERT, this SELECT generally returns only 2 or 3 rows at most.
Then search the SELECT results for the data you inserted to find the proper id.
Performance improvement: since the call to sqlite3_last_insert_rowid() is far faster than the INSERT (even if a mutex can occasionally make this wrong, it is statistically true), bet on IdEnd being the right one and walk the SELECT results from the end; in nearly every case we tested, the last row did contain the ID we were looking for.
Performance improvement: if you have an additional UNIQUE key, add it to the WHERE clause to get exactly one row.
I experimented with 3 threads doing heavy insertions, and it worked as expected: the preparation and DB handling take the vast majority of CPU cycles, and the odds of a mixed-up ID were in the range of 1 in 1000 insertions (situations where IdEnd > IdLast + 1).
So the penalty of the additional SELECT to resolve this is rather low.
In other words, the benefit of using sqlite3_last_insert_rowid() is great in the vast majority of insertions, and with some care it can even be used safely in a multithreaded program.
Caveat: the situation is slightly more awkward in transactional mode.
Also, SQLite doesn't explicitly guarantee that IDs will be contiguous and increasing (unless AUTOINCREMENT is used); at least I didn't find documentation saying so, although a look at the SQLite source code suggests it holds in practice.
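The before/after bracketing can be sketched in Python's sqlite3 as well; single-threaded it always hits the happy path, but the fallback SELECT is shown for completeness (table and column names made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE symbols (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
cur.execute("INSERT INTO symbols (name) VALUES ('seed')")

id_last = cur.execute("SELECT last_insert_rowid()").fetchone()[0]  # before: already used
cur.execute("INSERT INTO symbols (name) VALUES ('mine')")
id_end = cur.execute("SELECT last_insert_rowid()").fetchone()[0]   # after: most probably ours

if id_end == id_last + 1:
    my_id = id_end  # happy path: no interleaved insert
else:
    # fall back to scanning the bracketed range for the row we inserted,
    # narrowed by the UNIQUE name column so exactly one row comes back
    my_id = cur.execute(
        "SELECT id FROM symbols WHERE ROWID > ? AND ROWID <= ? AND name = ?",
        (id_last, id_end, 'mine')).fetchone()[0]
print(my_id)  # → 2
```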
The simplest method would be using:
SELECT MAX(id) FROM yourTableName LIMIT 1;
If you are trying to grab this last id in order to affect another table — for example, if an invoice is added, then add its item list under the invoice ID — use something like:
var cmd_result = cmd.ExecuteNonQuery(); // returns the number of affected rows
Then use cmd_result to determine whether the previous query executed successfully, e.g. if (cmd_result > 0), followed by your SELECT MAX(id) FROM yourTableName LIMIT 1; query, just to make sure you are not targeting the wrong row id in case the previous command did not add any rows.
In fact, the cmd_result > 0 check is essential in case anything fails, especially if you are developing a serious application; you don't want your users waking up to find random items added to their invoices.
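A sketch of this check with Python's sqlite3, where cur.rowcount plays the role of ExecuteNonQuery's return value (table name made up; note MAX(id) is only safe if nothing else inserts concurrently):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)")

cur.execute("INSERT INTO invoices (total) VALUES (9.99)")
# rowcount is the number of affected rows, like ExecuteNonQuery's return value.
if cur.rowcount > 0:
    last_id = cur.execute("SELECT MAX(id) FROM invoices LIMIT 1").fetchone()[0]
    print(last_id)  # → 1
else:
    last_id = None  # insert failed: don't go attach items to the wrong invoice
```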
I recently came up with a solution to this problem that sacrifices some performance overhead to ensure you get the correct last inserted ID.
Let's say you have a table people. Add a column called random_bigint:
create table people (
id int primary key,
name text,
random_bigint int not null
);
Add a unique index on random_bigint:
create unique index people_random_bigint_idx
ON people(random_bigint);
In your application, generate a random bigint whenever you insert a record. I guess there is a trivial possibility that a collision will occur, so you should handle that error.
My app is in Go and the code that generates a random bigint looks like this:
func RandomPositiveBigInt() (int64, error) {
nBig, err := rand.Int(rand.Reader, big.NewInt(9223372036854775807))
if err != nil {
return 0, err
}
return nBig.Int64(), nil
}
After you've inserted the record, query the table with a where filter on the random bigint value:
select id from people where random_bigint = <put random bigint here>
The unique index will add a small amount of overhead on the insertion. The id lookup, while very fast because of the index, will also add a little overhead.
However, this method will guarantee a correct last inserted ID.
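The same flow can be sketched with Python's sqlite3, with secrets.randbelow standing in for the Go helper (schema and names follow the answer's example):

```python
import sqlite3
import secrets

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE people (
    id INTEGER PRIMARY KEY,
    name TEXT,
    random_bigint INTEGER NOT NULL)""")
cur.execute("CREATE UNIQUE INDEX people_random_bigint_idx ON people(random_bigint)")

token = secrets.randbelow(2**63)  # random positive 63-bit int, like the Go helper
cur.execute("INSERT INTO people (name, random_bigint) VALUES (?, ?)", ("Ada", token))
# The unique token pins down exactly the row we just inserted, regardless of
# what other connections or threads have done in the meantime.
inserted_id = cur.execute(
    "SELECT id FROM people WHERE random_bigint = ?", (token,)).fetchone()[0]
print(inserted_id)  # → 1
```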