So I currently have a primary Postgres database that handles users from multiple different apps. One of the concurrency issues I'm facing is when, say, AppOne and AppTwo want to add users at the same time.
Currently, AppOne generates a random number (it must be 10 digits long) and then checks whether the value already exists in the database; if it doesn't, it inserts the user with that value in a column called user_url (used for their URL).
Now, as you can imagine, if AppTwo makes a request to add a user between AppOne's generation, check, and insertion, we can end up with duplicate "unique" values (it's happened). I want to solve that issue, potentially using Postgres triggers.
I know that I could use transactions, but I don't want to hold up the database; I'd rather the unique number were created by a function and trigger on the database side, so that as I scale I don't have to worry about race conditions. Once the trigger does its thing, I can then fetch the newly added user with all of its data, including the unique id.
So, ideally:
CREATE OR REPLACE FUNCTION set_unique_number() RETURNS trigger AS $$
DECLARE
BEGIN
....something here
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER insert_unique_url_id BEFORE INSERT ON ...
FOR EACH ROW EXECUTE PROCEDURE set_unique_number();
It would be a function that generates the number and assigns it to the row, run by a BEFORE INSERT trigger. I may be wrong.
Any help/suggestions would be appreciated.
EDIT: I want there to be no sequence to the numbers, so that people cannot guess the next user's URL.
Thanks
9,000,000,000 is a small enough number that the birthday problem will guarantee collisions fairly soon: after roughly √9,000,000,000 ≈ 95,000 rows, a duplicate candidate becomes likely.
I think you can work around this problem while still allowing concurrent inserts by using advisory locking. Your procedure might look like this (in pseudocode):
while (true) {
    start transaction;
    bigint new_id = floor(random() * 9000000000) + 1000000000;  // 10-digit range
    if (select pg_try_advisory_xact_lock(new_id)) {
        if (select not exists (select 1 from url where id = new_id)) {
            insert into url (id, ...) values (new_id, ...);
            commit;
            break;
        }
    }
    commit;  // releases the advisory xact lock; try another number
}
This procedure would never terminate once you had 9,000,000,000 rows in the database. You'd also have to implement the loop externally, as a PostgreSQL function cannot span multiple transactions (procedures with transaction control only arrived in PostgreSQL 11). It might be possible to work around that using exceptions, but it would be rather complicated.
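For what it's worth, a single attempt of that loop fits in one plpgsql function, so only the retry loop has to live in the application. A minimal sketch, assuming a table url with a bigint primary key id (the function name try_insert_url is made up for illustration):

CREATE OR REPLACE FUNCTION try_insert_url(candidate bigint) RETURNS boolean AS $$
BEGIN
    -- Transaction-scoped advisory lock on the candidate number, so two
    -- concurrent sessions can't both pass the existence check below.
    IF NOT pg_try_advisory_xact_lock(candidate) THEN
        RETURN false;  -- another session is trying this number right now
    END IF;
    IF EXISTS (SELECT 1 FROM url WHERE id = candidate) THEN
        RETURN false;  -- number already taken; caller should retry
    END IF;
    INSERT INTO url (id) VALUES (candidate);
    RETURN true;
END;
$$ LANGUAGE plpgsql;

The application would then repeat SELECT try_insert_url((floor(random() * 9000000000) + 1000000000)::bigint); until it returns true.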
Why don't you use the uuid-ossp extension? It will allow you to generate UUIDs from within Postgres itself.
Here's a good tutorial on how to use them even as primary keys.
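A minimal sketch of that approach (the table and column names here are placeholders):

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE users (
    id       uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    user_url uuid NOT NULL DEFAULT uuid_generate_v4()
);

A random version-4 UUID is effectively unguessable, which also covers the "no sequence to the numbers" requirement from the question, at the cost of a longer URL token than 10 digits.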
I have an API that I'm trying to read that gives me just the updated fields. I'm trying to take those and update my tables using a stored procedure. So far the only way I have been able to figure out how to do this is with dynamic SQL, but I would prefer not to do that if there is a way to avoid it.
If it were just a couple of columns, I'd just write a proc for each, but we are talking about 100 fields, and any of them could be updated together. One ticket might just need a timestamp updated, while the next might need a timestamp and who modified it, and the one after that just a note.
Everything I've read and been taught has told me that dynamic SQL is bad, and while I'll write it if I have to, I'd prefer a proc.
You can perhaps do something like this:
IF EXISTS (SELECT * FROM NEWTABLE WHERE PRIMARYKEY NOT IN (SELECT PRIMARYKEY FROM OLDTABLE))
BEGIN
    UPDATE OLDTABLE
    SET OLDTABLE.OLDRECORDS = NEWTABLE.NEWRECORDS
    FROM OLDTABLE
    JOIN NEWTABLE ON OLDTABLE.PRIMARYKEY = NEWTABLE.PRIMARYKEY
END
The best way to solve your problem is using MERGE:
Performs insert, update, or delete operations on a target table based on the results of a join with a source table. For example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table.
As you can see, your update could be more complex, but more efficient as well. Using MERGE requires some proficiency, but once you start using it, you'll use it with pleasure again and again.
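A hedged sketch of what that might look like here (all table and column names are assumptions, since the actual schema isn't shown):

MERGE INTO Tickets AS target
USING ApiUpdates AS source
    ON target.TicketId = source.TicketId
WHEN MATCHED THEN
    UPDATE SET target.ModifiedAt = source.ModifiedAt,
               target.ModifiedBy = source.ModifiedBy,
               target.Note       = source.Note
WHEN NOT MATCHED THEN
    INSERT (TicketId, ModifiedAt, ModifiedBy, Note)
    VALUES (source.TicketId, source.ModifiedAt, source.ModifiedBy, source.Note);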
I am not sure how your business logic determines which columns are updated at what time. If there are separate business functions that require updating different but consistent sets of columns per function, you will probably want individual update statements for each function. This ensures that each process updates only the columns it needs to.
On the other hand, if your API is such that you really don't know ahead of time what needs to be updated, then building a dynamic SQL query is a good idea.
Another option is to build a save proc that sets every user-configurable field. As long as the calling process has all of that data, it can call the save procedure and pass every updateable column. There is no harm in an UPDATE MyTable SET MyCol = @MyCol with the same value on each side.
Note that even if all of the values are the same, any rowversion (or timestamp) columns will still be updated, if present.
With our software, the tables that users can edit have a widely varying range of columns. We chose to create a single save procedure for each table that has all of the update-able columns as parameters. The calling processes (our web servers) have all the required columns in memory. They pass all of the columns on every call. This performs fine for our purposes.
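A minimal sketch of such a save procedure, with made-up table and parameter names:

CREATE PROCEDURE dbo.SaveTicket
    @TicketId   INT,
    @ModifiedAt DATETIME2,
    @ModifiedBy NVARCHAR(100),
    @Note       NVARCHAR(MAX)
AS
BEGIN
    -- Every updateable column is set on every call; unchanged values
    -- are simply written back over themselves.
    UPDATE dbo.Tickets
    SET ModifiedAt = @ModifiedAt,
        ModifiedBy = @ModifiedBy,
        Note       = @Note
    WHERE TicketId = @TicketId;
END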
I have a Java EE Web Application and a SQL Server Database.
I intend to cluster my database later.
Now, I have two tables:
- Users
- Places
But I don't want to use SQL Server's auto id.
I want to generate my own ids because of the cluster.
So I've created a new table, Parameter. The Parameter table has two columns, TableName and LastId, and it stores the last id used. When I add a new user, my addUser method does this:
- Queries the last id from the Parameter table and increments it by 1;
- Inserts the new user;
- Updates the last id to the new value.
It's working. But it's a web application, so what happens when 1,000 people use it simultaneously? Some of them might get the same last id. How can I solve this? I've tried synchronized, but it's not working.
What do you suggest? Yes, I have to avoid auto-increment.
I know that the user has to wait.
Automatic ID may work better in a cluster, but if you want to be database-portable or implement the allocator yourself, the basic approach is to work in an optimistic loop.
I prefer 'Next ID', since it makes the logic cleaner, so I'm going to use that in this example.
SELECT the NextID from your allocator table.
UPDATE the allocator: SET NextID = NextID + Increment WHERE NextID = the value you just read.
Loop while RowsAffected != 1.
Of course, you'll also use the TableName condition when selecting/updating to pick the appropriate allocator row.
You should also look at allocating in blocks (Increment = 200, say) and caching them in the app server. This will give better concurrency and be a lot faster than hitting the DB for every id.
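One iteration of that loop, sketched against the kind of allocator table described above (the table and column names are assumptions, and @justRead stands for the value the application read in step 1):

-- Step 1: read the current value of this table's allocator row
SELECT NextID FROM Allocator WHERE TableName = 'Users';

-- Step 2: conditionally advance it; this succeeds only if nobody raced us
UPDATE Allocator
SET NextID = NextID + 1
WHERE TableName = 'Users'
  AND NextID = @justRead;

-- Step 3: if this affected 1 row (@@ROWCOUNT = 1 on SQL Server), the id
-- @justRead is yours; otherwise go back to step 1 and try again.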
I have a table called ticket, and it has a field called number and a foreign key called client that needs to work much like an auto-field (incrementing by 1 for each new record), except that the client chain needs to be able to specify the starting number. This isn't a unique field, because multiple clients will undoubtedly use the same numbers (e.g. starting at 1001). In my application I'm fetching the row with the highest number and using that number + 1 to make the next record's number. This all takes place inside a single transaction (the fetching and the saving of the new record). Is it true that I won't have to worry about a ticket ever getting an incorrect (duplicate) number under high load, or will the transaction protect against that possibility? (Note: I'm using PostgreSQL 9.x.)
Without locking the whole table on every insert/update, no. The way transactions work in PostgreSQL means that new rows created by concurrent transactions never conflict with each other, and that's exactly what would be happening here.
You need to make sure that updates actually cause the same rows to conflict. You basically need to implement something similar to the mechanism used by PostgreSQL's native sequences.
What I would do is add another column to the table referenced by your client column to hold the last_val of the per-client sequence you'll be using. Each transaction would then look something like this:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Bump the per-client counter first, so that two concurrent
-- transactions for the same client conflict on this row.
UPDATE clients
SET footable_last_val = footable_last_val + 1
WHERE clients.id = :client_id;
INSERT INTO footable (somecol, client_id, number)
VALUES (:somevalue,
        :client_id,
        (SELECT footable_last_val
         FROM clients
         WHERE clients.id = :client_id));
COMMIT;
This way, when two transactions race for the same client, one of them fails on the UPDATE of the clients row with a serialization error before it ever reaches the INSERT, and can simply be retried.
You do have to worry about duplicate numbers.
The typical problematic scenario is: transaction T1 reads N, and creates a new row with N+1. But before T1 commits, another transaction T2 sees N as the max for this client and creates another new row with N+1 => conflict.
There are many ways to avoid this. Here is a simple piece of plpgsql code that implements one of them, assuming a unique index on (client, number). The solution is to let the inserts run concurrently, but in the event of a unique-index violation, retry with refreshed values until the insert is accepted. (It's not a busy loop, though, since concurrent inserts are blocked until the other transactions commit.)
do
$$
begin
  loop
    BEGIN
      -- client number is assumed to be 1234 for the sake of simplicity
      insert into the_table(client, number)
      select 1234, 1 + coalesce(max(number), 0)
      from the_table where client = 1234;
      exit;  -- success: leave the loop
    EXCEPTION
      when unique_violation then
        null;  -- another transaction took that number; keep looping
    END;
  end loop;
end$$;
This example is a bit similar to the UPSERT implementation from the PG documentation.
It's easily transferable into a plpgsql function taking the client id as input, as sketched below.
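A hedged sketch of that function form (the function name and minimal column list are assumptions; the unique index on (client, number) is still required):

create or replace function next_ticket_number(p_client int) returns int as $$
declare
  v_number int;
begin
  loop
    begin
      insert into the_table(client, number)
      select p_client, 1 + coalesce(max(number), 0)
      from the_table
      where client = p_client
      returning number into v_number;
      return v_number;  -- insert accepted: this number is ours
    exception
      when unique_violation then
        null;  -- a concurrent insert won the race; recompute and retry
    end;
  end loop;
end;
$$ language plpgsql;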
I'm using LINQ, but my database tables do not have an IDENTITY column (although they do use a surrogate primary key ID column).
Can this work?
To get the identity values for a table, there is a stored procedure called GetIDValueForOrangeTable(), which looks at a SystemValues table and increments the ID therein.
Is there any way I can get LINQ to get the ID value from this SystemValues table on an insert, rather than the built in IDENTITY?
As an aside, I don't think this is a very good idea, especially not for a web application. I imagine there will be a lot of concurrency conflicts because of this SystemValues lookup. Am I justified in my concern?
Cheers
Duncan
Sure, you can make this work with LINQ, and safely, too:
- Wrap the access to the underlying SystemValues table in the GetIDValue.....() function in a TRANSACTION (and not with the READ UNCOMMITTED isolation level!). Then one and only one user can access that table at any given time, and you should be able to safely hand out IDs.
- Call that stored proc from LINQ just before saving your entity, and store the ID if you're dealing with a new entity (if the ID hasn't been set yet).
- Store your entity in the database.
That should work. I'm not sure it's any faster or more efficient than letting the database handle the work, but it will work, and safely.
Marc
UPDATE:
Something like this (adapt to your needs) will work safely:
CREATE PROCEDURE dbo.GetNextTableID(@TableID INT OUTPUT)
AS BEGIN
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    BEGIN TRANSACTION

    -- The UPDATE takes a lock on the allocator row and holds it
    -- until COMMIT, serializing concurrent callers.
    UPDATE dbo.SystemTables
    SET MaxTableID = MaxTableID + 1
    WHERE ........

    SELECT
        @TableID = MaxTableID
    FROM
        dbo.SystemTables
    WHERE ........   -- same filter as the UPDATE above

    COMMIT TRANSACTION
END
As for performance: as long as you have a reasonable number (fewer than 50, maybe) of concurrent users, and as long as this SystemTables table isn't used for much else, it should perform OK.
You are very justified in your concern. If two users try to insert at the same time, both might be given the same number unless you do as described by marc_s and put the thing in a transaction. However, if the transaction doesn't wrap around your whole insert as well as the table that contains the id values, you may still have gaps if the outer insert fails (it got a value but then, for some other reason, didn't insert a record). Since most people do this to avoid gaps (in most cases an unnecessary requirement), it makes life more complicated and still may not achieve the result. Using an identity field is almost always a better choice.
A few months back, I started using a CRUD script generator for SQL Server. The default insert statement that this generator produces SELECTs the inserted row at the end of the stored procedure. It does the same for the UPDATE, too.
The previous way (and the only other way I have seen online) is to just return the newly inserted Id back to the business object, and then have the business object update the Id of the record.
Having an extra SELECT is obviously an additional database call, and more data is being returned to the application. However, it allows additional flexibility within the stored procedure, and allows the application to reflect the actual data in the table.
The additional SELECT also increases the complexity when wanting to wrap the insert/update statements in a transaction.
I am wondering what people think is the better way to do it, and I don't mean the implementation of either method. Just which is better: return only the Id, or return the whole row?
We always return the whole row on both an Insert and Update. We always want to make sure our client apps have a fresh copy of the row that was just inserted or updated. Since triggers and other processes might modify values in columns outside of the actual insert/update statement, and since the client usually needs the new primary key value (assuming it was auto generated), we've found it's best to return the whole row.
The select statement will have some sort of advantage only if the data is generated in the procedure. Otherwise, the data you inserted is generally available to you already, so there's no point in selecting and returning it again, IMHO. If it's just for the id, you can get it with SCOPE_IDENTITY(), which returns the last identity value created by an insert in the current scope.
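A minimal sketch (table and column names assumed):

INSERT INTO mytable (col1, col2)
VALUES ('value1', 'value2');

-- Returns the identity generated by the INSERT above; unlike @@IDENTITY,
-- SCOPE_IDENTITY() ignores ids created by triggers in other scopes.
SELECT SCOPE_IDENTITY() AS NewId;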
Based on my prior experience, my knee-jerk reaction is to just return the freshly generated identity value. Everything else the application is inserting, it already knows: names, dollars, whatever. But a few minutes' reflection, and reading the prior 6 (hmm, make that 5) replies, leads to a number of “it depends” situations:
At the most basic level, what you inserted is what you’d get – you pass in values, they get written to a row in the table, and you’re done.
Slightly more complex than that is when there are simple default values assigned during an insert statement. “DateCreated” columns that default to the current datetime, or “CreatedBy” columns that default to the current SQL login, are a prime example. I’d include identity columns here, since not every table will (or should) contain them. These values are generated by the database upon table insertion, so the calling application cannot know what they are. (It is not unknown for web server clocks to be out of sync with database server clocks. Fun times…) If the application needs to know the values just generated, then yes, you’d need to pass those back.
And then there are situations where additional processing is done within the database before data is inserted into the table. Such work might be done within stored procedures or triggers. Once again, if the application needs to know the results of such calculations, then the data would need to be returned.
With that said, it seems to me the main issue underlying your decision is: how much control/understanding do you have over the database? You say you are using a tool to automatically generate your CRUD procedures. OK, that means you do not have any elaborate processing going on within them; you’re just taking data and loading it in. Next question: are there triggers (of any kind) present that might modify the data as it is being written to the tables? Extend that to: do you know whether or not such triggers exist? If they’re there and they matter, plan accordingly; if you do not or cannot know, then you might need to “follow up” on the insert to see if changes occurred. Lastly: does the application care? Does it need to be informed of the results of the insert action it just requested, and if so, how much does it need to know? (The new identity value, the datetime it was added, whether or not something changed the Name from “Widget” to “Widget_201001270901”.)
If you have complete understanding and control over the system you are building, I would only put in as much as you need, as extra code that performs no useful function impacts performance and maintainability. On the flip side, if I were writing a tool to be used by others, I’d try to build something that did everything (so as to increase my market share). And if you are building code where you don't really know how and why it will be used (application purpose), or what it will in turn be working with (database design), then I guess you'd have to be paranoid and try to program for everything. (I strongly recommend not doing that. Pare down to do only what needs to be done.)
Quite often the database will have a property that gives you the ID of the last inserted item without having to do an additional select. For example, MS SQL Server has the @@IDENTITY property (though SCOPE_IDENTITY() is usually the safer choice, since @@IDENTITY can pick up identity values generated by triggers). You can pass this back to your application as an output parameter of your stored procedure and use it to update your data with the new ID. MySQL has something similar.
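A sketch of that output-parameter pattern (the procedure and table names are made up; SCOPE_IDENTITY() is used rather than @@IDENTITY to avoid picking up trigger-generated ids):

CREATE PROCEDURE dbo.InsertWidget
    @Name  NVARCHAR(100),
    @NewId INT OUTPUT
AS
BEGIN
    INSERT INTO dbo.Widgets (Name) VALUES (@Name);

    -- Hand the freshly generated id back to the caller.
    SET @NewId = SCOPE_IDENTITY();
END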
INSERT
INTO mytable (col1, col2)
OUTPUT INSERTED.*
VALUES ('value1', 'value2')
With this clause, returning the whole row does not require an extra SELECT, and performance-wise it is the same as returning only the id.
"Which is better" totally depends on your application needs. If you need the whole row, return the whole row, if you need only the id, return only the id.
You may add an extra setting to your business object which can trigger this option and return the whole row only if the object needs it:
IF @return_whole_row = 1
    INSERT
    INTO mytable (col1, col2)
    OUTPUT INSERTED.*
    VALUES ('value1', 'value2')
ELSE
    INSERT
    INTO mytable (col1, col2)
    OUTPUT INSERTED.id
    VALUES ('value1', 'value2')
I don't think I would in general return an entire row, but it could be a useful technique.
If you are code-generating, you could generate two procs (one which calls the other, perhaps) or parameterize a single proc to determine whether to return the row over the wire or not. I doubt the DB overhead is significant (single row, with a PK lookup), but the data on the wire from DB to client could be significant when all added up, and if it's just discarded in 99% of cases, I see little value. Having an SP which returns different things with different parameters is a potential problem for clients, of course.
I can see where it would be useful if you have logic in triggers or calculated columns which are managed by the database, in which case, a SELECT is really the only way to get that data back without duplicating the logic in your client or the SP itself. Of course, the place to put any logic should be well thought out.
Putting ANY logic in the database is usually a carefully-thought-out tradeoff, which starts with the minimally invasive and maximally useful things like constraints, unique constraints, and referential integrity, and grows toward the more invasive and marginally useful tools like triggers.
Typically, I like logic in the database when you have multi-modal access to the database itself, and you can't force people through your client assemblies, say. In this case, I would still try to force people through views or SPs which minimize the chance of errors, duplication, logic sync issues or misinterpretation of data, thereby providing as clean, consistent and coherent a perimeter as possible.