Multiple TSQL sequences in one sequence - sql-server

I need to maintain a running sequence number for each of my customers in my SQL Server database.
I have about 1000 customers, and each one of them needs its own sequence.
How can that be done in T-SQL?
Is it possible to make one sequence serve all of them?
It doesn't sound reasonable to create 1000 sequences.

A sequence will give you only one number series. To have one per customer (and I would assume new customers can also appear), the only way I can think of is to create a table for the sequences. You could just have a column for the customer code and an int column for the current sequence value.
If your database needs these sequences really often (on the scale of several per second), this could easily become a bottleneck, but I assume here that's not the case.
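A minimal sketch of such a table and a procedure that draws the next number from it, assuming SQL Server; the table, procedure, and column names are illustrative, not from the question:
CREATE TABLE dbo.CustomerSequence
(
    CustomerID int NOT NULL PRIMARY KEY,
    LastValue  int NOT NULL
);
GO

CREATE PROCEDURE dbo.GetNextCustomerNumber
    @CustomerID int,
    @NextValue  int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Increment and read in a single atomic statement so two sessions
    -- asking for the same customer cannot receive the same number.
    UPDATE dbo.CustomerSequence
    SET @NextValue = LastValue = LastValue + 1
    WHERE CustomerID = @CustomerID;

    -- First request for a brand-new customer: seed the row and start at 1.
    -- (Two sessions doing this at once could still collide on the primary
    -- key; a retry or MERGE would harden this sketch.)
    IF @@ROWCOUNT = 0
    BEGIN
        INSERT INTO dbo.CustomerSequence (CustomerID, LastValue)
        VALUES (@CustomerID, 1);
        SET @NextValue = 1;
    END
END
The UPDATE takes a row lock only on that customer's row, so different customers do not block each other.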

Related

PostgreSQL increase sequence even if error due to unique - is this normal behavior? [duplicate]

I have to create unique IDs for invoices. I have a table id and another column for this unique number. I use the serializable isolation level. Using
var seq = @"SELECT invoice_serial + 1 FROM invoice WHERE ""type""=@type ORDER BY invoice_serial DESC LIMIT 1";
doesn't help, because even with FOR UPDATE it won't read the correct value at the serializable isolation level.
The only solution seems to be adding some retry code.
Sequences do not generate gap-free sets of numbers, and there's really no way of making them do that because a rollback or error will "use" the sequence number.
I wrote up an article on this a while ago. It's directed at Oracle but is really about the fundamental principles of gap-free numbers, and I think the same applies here.
Well, it’s happened again. Someone has asked how to implement a requirement to generate a gap-free series of numbers, and a swarm of nay-sayers have descended on them to say (and here I paraphrase slightly) that this will kill system performance, that it’s rarely a valid requirement, that whoever wrote the requirement is an idiot, blah blah blah.
As I point out on the thread, it is sometimes a genuine legal requirement to generate gap-free series of numbers. Invoice numbers for the 2,000,000+ organisations in the UK that are VAT (sales tax) registered have such a requirement, and the reason for this is rather obvious: it makes it more difficult to hide the generation of revenue from tax authorities. I’ve seen comments that it is a requirement in Spain and Portugal, and I’d not be surprised if it were a requirement in many other countries.
So, if we accept that it is a valid requirement, under what circumstances are gap-free series of numbers a problem? Group-think would often have you believe that it always is, but in fact it is only a potential problem under very particular circumstances:
The series of numbers must have no gaps.
Multiple processes create the entities to which the number is associated (eg. invoices).
The numbers must be generated at the time that the entity is created.
If all of these requirements must be met then you have a point of serialisation in your application, and we’ll discuss that in a moment.
First let’s talk about methods of implementing a series-of-numbers requirement if you can drop any one of those requirements.
If your series of numbers can have gaps (and you have multiple processes requiring instant generation of the number) then use an Oracle Sequence object. They are very high performance, and the situations in which gaps can be expected have been very well discussed. It is not too challenging to minimise the number of values skipped by making design efforts to minimise the chance of a process failure between generation of the number and committing the transaction, if that is important.
If you do not have multiple processes creating the entities (and you need a gap-free series of numbers that must be instantly generated), as might be the case with the batch generation of invoices, then you already have a point of serialisation. That in itself may not be a problem, and may be an efficient way of performing the required operation. Generating the gap-free numbers is rather trivial in this case. You can read the current maximum value and apply an incrementing value to every entity with a number of techniques. For example if you are inserting a new batch of invoices into your invoice table from a temporary working table you might:
insert into invoices
    (invoice#,
     ...)
with curr as (
    -- Coalesce to 0 so the first ever batch starts at 1
    select Coalesce(Max(invoice#), 0) max_invoice#
    from   invoices)
select
    -- rownum numbers the new rows 1, 2, 3, ... within this insert
    curr.max_invoice# + rownum,
    ...
from
    curr,          -- cross join to the single-row CTE holding the current maximum
    tmp_invoice
...
Of course you would protect your process so that only one instance can run at a time (probably with DBMS_Lock if you're using Oracle), protect the invoice# with a unique key constraint, and probably check for missing values with separate code if you really, really care.
If you do not need instant generation of the numbers (but you need them gap-free and multiple processes generate the entities) then you can allow the entities to be generated and the transaction committed, and then leave generation of the number to a single batch job: an update on the entity table, or an insert into a separate table.
So what if we need the trifecta of instant generation of a gap-free series of numbers by multiple processes? All we can do is try to minimise the period of serialisation in the process, and I offer the following advice (a sketch follows the list), and welcome any additional advice (or counter-advice, of course).
Store your current values in a dedicated table. DO NOT use a sequence.
Ensure that all processes use the same code to generate new numbers by encapsulating it in a function or procedure.
Serialise access to the number generator with DBMS_Lock, making sure that each series has its own dedicated lock.
Hold the lock in the series generator until your entity creation transaction is complete, by releasing the lock on commit.
Delay the generation of the number until the last possible moment.
Consider the impact of an unexpected error after generating the number and before the commit is completed — will the application rollback gracefully and release the lock, or will it hold the lock on the series generator until the session disconnects later? Whatever method is used, if the transaction fails then the series number(s) must be “returned to the pool”.
Can you encapsulate the whole thing in a trigger on the entity’s table? Can you encapsulate it in a table or other API call that inserts the row and commits the insert automatically?
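A rough PL/SQL sketch of the first four points, assuming an illustrative series_numbers table; it uses a hashed DBMS_Lock user-lock id rather than DBMS_LOCK.ALLOCATE_UNIQUE (which issues a commit of its own), and none of these names come from the article:
CREATE TABLE series_numbers
(
    series_name   VARCHAR2(30) PRIMARY KEY,
    current_value NUMBER       NOT NULL
);

CREATE OR REPLACE FUNCTION next_series_value (p_series IN VARCHAR2)
    RETURN NUMBER
IS
    l_lock_id INTEGER;
    l_status  INTEGER;
    l_value   NUMBER;
BEGIN
    -- One lock id per series name, so different series do not serialise
    -- each other (a hash collision merely adds harmless extra waiting).
    SELECT ORA_HASH(p_series, 1073741823) INTO l_lock_id FROM dual;

    -- release_on_commit => TRUE holds the lock until the calling
    -- transaction commits or rolls back (points 3 and 4 above).
    l_status := DBMS_LOCK.REQUEST(id                => l_lock_id,
                                  lockmode          => DBMS_LOCK.X_MODE,
                                  timeout           => 10,
                                  release_on_commit => TRUE);
    IF l_status NOT IN (0, 4) THEN
        RAISE_APPLICATION_ERROR(-20001, 'Could not serialise series ' || p_series);
    END IF;

    -- The row for each series is assumed to have been seeded beforehand.
    UPDATE series_numbers
       SET current_value = current_value + 1
     WHERE series_name   = p_series
    RETURNING current_value INTO l_value;

    RETURN l_value;
END next_series_value;
/
Called as late as possible inside the invoice-creation transaction (point 5), the generated number and the lock are both released by the same commit or rollback.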
Original article
You could create a sequence with no cache, then get the next value from the sequence and use that as your counter.
CREATE SEQUENCE invoice_serial_seq START 101 CACHE 1;
SELECT nextval('invoice_serial_seq');
More info here
You either lock the table against inserts, or you need retry code (or both). There's no other option available. If you stop to think about what can happen with:
parallel processes rolling back
locks timing out
you'll see why.
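A rough sketch of the table-locking option in PostgreSQL, reusing the invoice table from the question (the 'A' value for "type" is just an example):
BEGIN;

-- EXCLUSIVE mode blocks other INSERT/UPDATE/DELETE statements (but not reads)
-- until this transaction commits or rolls back.
LOCK TABLE invoice IN EXCLUSIVE MODE;

INSERT INTO invoice ("type", invoice_serial)
SELECT 'A', COALESCE(MAX(invoice_serial), 0) + 1
FROM invoice
WHERE "type" = 'A';

COMMIT;
The price is that all writers to invoice queue behind this transaction, which is exactly the serialisation discussed above.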
In 2006, someone posted a gapless-sequence solution to the PostgreSQL mailing list: http://www.postgresql.org/message-id/44E376F6.7010802@seaworthysys.com

How do Oracle sequences work internally?

I want to know how a sequence for a table works internally in an Oracle database, especially how the value of the sequence is incremented.
Do they use a trigger to increment the value of the sequence, or something else?
Oracle does not handle sequences like other objects, such as tables. If you insert 100 records using NEXTVAL and issue a ROLLBACK, the sequence does not get rolled back; those 100 records will still have incremented it, and the next insert will get the 101st value. This leads to gaps in the sequence, but it is what allows multiple people to use sequences safely without the risk of duplicates: if two users grab NEXTVAL simultaneously, they are assigned distinct numbers. Oracle also caches sequence values in memory; the init.ora parameter SEQUENCE_CACHE_ENTRIES defines the cache size.
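A small demonstration of that behaviour (demo_seq and demo_tab are illustrative names, not from the question):
CREATE SEQUENCE demo_seq START WITH 1 INCREMENT BY 1 CACHE 20;
CREATE TABLE demo_tab (id NUMBER);

INSERT INTO demo_tab (id) VALUES (demo_seq.NEXTVAL);  -- consumes value 1
ROLLBACK;                                             -- the insert is undone...

SELECT demo_seq.NEXTVAL FROM dual;                    -- ...but this returns 2, leaving a gap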

Is there a way to increase field limit in CRM?

I work on CRM Online a lot and keep running into this issue, and then I have to create an entity relationship to get through it.
So,
Is there a way I can increase the field limit on a CRM form while doing customization?
Version: Any Online Version
No, as noted in the comments this is a limit in SQL.
Maximum Capacity Specifications for SQL Server
Quite a good discussion here, Best Practices - Maximum number of fields on an entity?
(Though why you would want to put 1000 fields on a single entity is beyond me. I would be interested to know your solution for curiosity's sake.)
This field limit is not specific to SQL Server; it affects other databases too, so if we need to reuse the same data in another database there may be similar issues.
So the limit now makes sense to me, though I'm still wondering whether all columns are stored in a single row piece or span multiple row pieces like a chain of columns, the way Oracle does it.
I know Oracle supports 255 columns in a single row piece and up to 1000 per table; if you have more than 255, the remaining columns spill over into additional row pieces accordingly.

What is the faster way of counting? Using count query in the database or counting retrieved rows programmatically?

I was wondering which would be faster as far as counting rows from a database is concerned: is a count query in the DB (server) going to be faster than some sort of count in a program (client)?
I think the things to be considered are:
Size of the database
Programming language used
Are the rows that are to be counted required by the client?
Any suggestions?
The counting by itself cannot be made any faster than it would be in a compiled language like C++, C#, or Java; counting through a stream of rows is next to no work.
Unfortunately, copying the rows from the server to the client is enormously more expensive than counting them. So you are always better off letting the server count them.
I literally cannot think of a scenario in which one should count on the client (if the only thing required is the count, not the rows themselves).
If the rows are required on the client:
If all rows are required: Count on the client
If only a single page is required: Count on the server! There might be hundreds or thousands of pages and you don't want to download all of them. The RDBMS can often optimize counting because no column data is required to count.
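For the single-page case, a sketch of what that looks like in T-SQL, using an illustrative Orders table (none of these names come from the question):
-- Let the server count; only the count crosses the network.
SELECT COUNT(*) AS total_rows
FROM dbo.Orders
WHERE CustomerID = 42;

-- Fetch just the page being displayed (page 3 with a page size of 10).
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
ORDER BY OrderDate DESC
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;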
It depends on what the requirement is. I would rather do it at the database level, since counting the rows at the client would require transferring the rows to the client. Some questions to be looked at are:
Is the overhead of passing rows to the client needed?
How frequently does the application access the database?
How big are the rows that have to be sent (size of the rows)?
Considering all these, I would go with counting at the DB level in general, but again, as I said at the beginning, it depends on the scenario where we would like to implement it.

Is it ok to set the sequence of a table to very large value like 10 million?

Is there any performance impact or any kind of issues?
The reason I am doing this is that we are doing some synchronization between two sets of DBs with similar tables, and we want to avoid duplicate PK errors when synchronizing data.
Yes, it's okay.
Note: If you have performance concerns you could use the "CACHE" option on "CREATE SEQUENCE":
"Specify how many values of the sequence the database preallocates and keeps in memory for faster access. This integer value can have 28 or fewer digits. The minimum value for this parameter is 2. For sequences that cycle, this value must be less than the number of values in the cycle. You cannot cache more values than will fit in a given cycle of sequence numbers. Therefore, the maximum value allowed for CACHE must be less than the value determined by the following formula:"
(CEIL (MAXVALUE - MINVALUE)) / ABS (INCREMENT)
"If a system failure occurs, all cached sequence values that have not been used in committed DML statements are lost. The potential number of lost values is equal to the value of the CACHE parameter."
Sure. What you plan on doing is actually a rather common practice. Just make sure the variables in your client code which you use to hold IDs are big enough (i.e., use longs instead of ints).
The only problem we recently had with creating tables with really large seeds was when we tried to interface with a system we did not control. That system was apparently reading our IDs as a char(6) field, so when we sent row 10000000 it would fail to write.
Performance-wise we have seen no issues on our side with using large ID numbers.
No performance impact that we've seen. I routinely bump sequences up by a large amount. The gaps come in handy if you need to "backfill" data into the table.
The only time we had a problem was when a really large sequence exceeded MAXINT on a particular client program. The sequence was fine, but the conversion to an integer in the client app started failing! In our case it was easy to refactor the ID column in the table and get things running again, but in retrospect this could have been a messy situation if the tables had been arranged differently!
If you are synching two tables why not change the PK seed/increment amount so that everything takes care of itself when a new PK is added?
Let's say you had to synch the data from 10 patient tables in 10 different databases.
Let's also say that eventually all databases had to be synched into a Patient table at headquarters.
Increment the PK by ten for each row, but ensure the last digit is different for each database.
DB0 10,20,30..
DB1 11,21,31..
.....
DB9 19,29,39..
When everything is merged there is guaranteed to be no conflicts.
This is easily scaled to n database tables. Just make sure your PK data type will not overflow. I think BigInt should be big enough for you...
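A sketch of that scheme in T-SQL using IDENTITY seed/increment values; the Patient table and database labels are illustrative:
-- In database DB0: keys 10, 20, 30, ...
CREATE TABLE dbo.Patient
(
    PatientID   bigint IDENTITY(10, 10) PRIMARY KEY,
    PatientName nvarchar(100) NOT NULL
);

-- In database DB1: keys 11, 21, 31, ...
CREATE TABLE dbo.Patient
(
    PatientID   bigint IDENTITY(11, 10) PRIMARY KEY,
    PatientName nvarchar(100) NOT NULL
);

-- ...and so on through DB9 with IDENTITY(19, 10), so the merged rows can never collide.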
