I'm using a sequence to generate ids for me, but I use that sequence from my Java application. Say, for example, my last id is 200.
If I add an entry with plain SQL, using 201 as the id instead of calling seq.nextval, what happens when my Java application next calls seq.nextval? Is the sequence smart enough to check the maximum number already used, or will it just return 201?
It will just return 201, as the sequence has no idea what the numbers are used for.
Note: it may return, say, 220 if you have specified that the sequence should cache values (see the Oracle manual on CREATE SEQUENCE for more details).
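For illustration, such a sequence might be created like this (a sketch; the name and cache size are arbitrary). Values still sitting in the cache are skipped if the instance restarts:
CREATE SEQUENCE seq START WITH 1 INCREMENT BY 1 CACHE 20;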
Sequences just provide a way to "select" numbers that auto-increment.
You will get 201. Sequences don't check anything; they just store the last value retrieved, and when you query one again it returns the next value in the sequence.
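A quick sketch of that behaviour (Oracle syntax; assumes the sequence last handed out 200):
SELECT seq.nextval FROM dual;  -- 201, even if a row with id 201 was inserted manually
SELECT seq.nextval FROM dual;  -- 202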
It will return 201.
You could also use nextval from JDBC, and then use that value to do the insert:
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select seq.nextval from dual");
rs.next();
int yourId = rs.getInt(1);
rs.close();
st.close();
// then use yourId to do the insert
This way you can insert using a number, and also keep the sequence the way it should be.
What nextval returns on the next call from your Java app depends on a number of factors:
If you run in a clustered environment, which node you next speak to. Each node will preallocate a pool of sequence values;
Whether or not the node you're talking to has been restarted. If this happens the pool of sequence values will tend to be "lost" (meaning skipped);
The step value of the sequence; and
Whether other transactions have called nextval on the sequence.
Sequences are loosely ordered, not absolutely ordered.
But Oracle has no idea what you do with sequence values, so if you manually insert a 201 into the database, the sequence will happily return 201 on its next call, completely oblivious to the inserted value; the two are basically unrelated.
It is never a good idea to mix sequence-generated values with manual inserts because then everything gets mixed up.
Not sure if it helps in your situation, but remember that you can ask for the current value of the sequence (with seq.currval or similar), so you can check whether it already exists in the table due to a manual insert and, if necessary, ask for another nextval.
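A minimal sketch of that check (Oracle syntax; your_table and the bind variable are hypothetical, and currval is only defined after nextval has been called in the same session):
SELECT seq.currval FROM dual;
-- then, from the application, test whether that id is already taken:
SELECT COUNT(*) FROM your_table WHERE id = :candidate_id;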
CREATE TABLE [sql_table1] ([c0] varbinary(25) NOT NULL primary key)
go
insert into sql_table1 values (0x3200),(0x32);
go
I get
Cannot insert duplicate key in object 'dbo.sql_table'. The duplicate
key value is (0x32).
Why? 0x32 does not equal 0x3200
It gets right-padded. BINARY data gets tricky when you try to specify hex values that should normally be numerically equivalent. If you try this, it will work:
insert into sql_table1 values (0x32),(CAST(50 as VARBINARY(25)));
-- inserts 0x32
-- and 0x00000032
But these are numerically equivalent. Generally speaking, it's a bad idea to have a BINARY column of any sort be a primary key, or to put a unique index on it (more so than a CHAR/VARCHAR/NVARCHAR column). Any application that inserts into it is almost certainly CASTing from some native format/representation to binary, but there's no guarantee that that CAST actually works in a unique manner in either direction: did the application insert the value 50 (= 0x32), or did it try to insert the literal 0x32, or did it try to insert the ASCII character '2' (= 0x32), or did it insert the first byte(s) of something else...? If one app submits 0x32 and another 0x0032, are they the same or different (SQL Server says different - see below)?
The bottom line is that SQL Server is going to behave unusually if you try to compare binary columns flat out, without some context. What will work is comparing the binary data using BINARY_CHECKSUM:
SELECT BINARY_CHECKSUM(0x32) -- 50
SELECT BINARY_CHECKSUM(0x320) -- 16! -- it's treating this as having a different number or ordering of bytes
SELECT BINARY_CHECKSUM(0x3200) -- 50
SELECT BINARY_CHECKSUM(0x32000) -- 16
SELECT BINARY_CHECKSUM(0x0032) -- 50
SELECT BINARY_CHECKSUM(0x00032) -- 50
SELECT BINARY_CHECKSUM(0x000000000032) -- 50
but again, this only helps you see that the hexadecimal representation of the binary data isn't going to work exactly the way it would seem. The point is, your primary key is going to be based on the BINARY_CHECKSUMs of the data instead of any particular format/representation of the data. Normally that's a good thing, but with binary data (and padding) it becomes a lot trickier. Even then, in my example above the BINARY_CHECKSUM of both rows is exactly the same (SELECT BINARY_CHECKSUM(c0) FROM sql_table1 outputs 50 for both rows). Weirder still, a little further testing shows that any different number of leading 0s that fits into the column length will bypass the unique check even though the checksum is the same (e.g. VALUES (0x32), (0x032), (0x0032), etc.).
This only gets worse if you start throwing different versions of SQL Server into the mix (per MSDN documentation).
What you should do for PK/Unique design on a table is figure out what context will make sense of this data - an order number, a file reference, a timestamp, a device ID, some other business or logical identifier, etc.... If nothing else, pseudokey it with an IDENTITY column.
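A sketch of that pseudokey approach (names are illustrative): the binary data becomes an ordinary column and an IDENTITY surrogate serves as the primary key:
CREATE TABLE sql_table2 ([id] int IDENTITY(1,1) NOT NULL PRIMARY KEY, [c0] varbinary(25) NOT NULL)
go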
We wrote a function get_timestamp() defined as
CREATE OR REPLACE FUNCTION get_timestamp()
RETURNS integer AS
$$
SELECT (FLOOR(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int;
$$
LANGUAGE SQL;
This was used on INSERT and UPDATE to populate created and modified fields on database records. However, we found that when adding or updating records consecutively it was returning the same value.
On inspecting the function in pgAdmin III we noted that, on running the SQL to build the function, the keyword IMMUTABLE had been injected after the LANGUAGE SQL statement. The documentation states that the default is VOLATILE ("If none of these appear, VOLATILE is the default assumption"), so I am not sure why IMMUTABLE was injected; however, changing it to STABLE has solved the issue of repeated values.
NOTE: As stated in the accepted answer, IMMUTABLE is never added to a function by pgAdmin or Postgres and must have been added during development.
I am guessing what was happening is that the function's result was being cached for optimization, as it was marked IMMUTABLE, indicating to the Postgres engine that the return value should not change given the same (empty) parameter list. However, when not used within a trigger but used directly in the INSERT statement, the function would return a distinct value FIVE times before sticking on the same value from then on. Is this due to some optimisation algorithm that says something like "if an IMMUTABLE function is used more than 5 times in a session, cache the result for future calls"?
Any clarification on how these keywords should be used in Postgres functions would be appreciated. Is STABLE the correct option for us given that we use this function in triggers, or is there something more to consider, for example the docs say:
(It is inappropriate for AFTER triggers that wish to query rows
modified by the current command.)
But I am not altogether clear on why.
The key word IMMUTABLE is never added automatically by pgAdmin or Postgres. Whoever created or replaced the function did that.
The correct volatility for the given function is VOLATILE (also the default), not STABLE - or it wouldn't make sense to use clock_timestamp() which is VOLATILE in contrast to now() or CURRENT_TIMESTAMP which are STABLE: those return the same timestamp within the same transaction. The manual:
clock_timestamp() returns the actual current time, and therefore its
value changes even within a single SQL command.
The manual warns that function volatility STABLE ...
is inappropriate for AFTER triggers that wish to query rows modified
by the current command.
... because repeated evaluation of the trigger function can return different results for the same row. So, not STABLE.
You ask:
Do you have an idea as to why the function returned correctly five
times before sticking on the fifth value when set as IMMUTABLE?
The Postgres Wiki:
With 9.2, the planner will use specific plans regarding to the
parameters sent (the query will be planned at execution), except if
the query is executed several times and the planner decides that the
generic plan is not too much more expensive than the specific plans.
Bold emphasis mine. That doesn't seem to make sense for an IMMUTABLE function without input parameters. But the false IMMUTABLE label is overridden by the VOLATILE function in the body (which also voids function inlining): a different query plan can still make sense.
Related:
PostgreSQL Stored Procedure Performance
Aside
trunc() is slightly faster than floor() and does the same here, since positive numbers are guaranteed:
SELECT (trunc(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int
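Putting the two together, a corrected definition might look like this (a sketch combining the explicit VOLATILE label with the trunc() variant):
CREATE OR REPLACE FUNCTION get_timestamp()
RETURNS integer AS
$$
SELECT (trunc(EXTRACT(EPOCH FROM clock_timestamp()) * 10) - 13885344000)::int;
$$
LANGUAGE sql VOLATILE;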
How can I make Sybase's database engine return an unsorted list of records in non-numeric order?
~~~
I have an issue where I need to reproduce an error in the application where I select from a table where the ID is generated in sequence, but the ID is not the last one in the selection.
Let me explain.
ID STATUS
_____________
1234 C
1235 C
1236 O
Above are 3 IDs. I had code where these would be the results of a
select @p_id = ID from table where (conditions).
However, there wasn't a clause to check for status = 'O' (open). Remember Sybase saves the last returned record into a variable.
~~~~~
I'm being asked to give the testing team something that will make the results come out wrong. If Sybase returns the above as an unordered list, it could happen to appear in ascending order; or, if the database engine needs to shuffle blocks of stored data around (or some other technical magic), the order could be messed up. The original error was that the procedure would return, say, 1234 instead of 1236.
Is there a way that I can have a 100% guarantee that Sybase will search over a block of data and have to double back, effectively 'breaking' the ascending search, and returning not the last record, but any other one? (all records except the maximum will end up erroring, because they are all 'Closed')
I want some sort of magical SQL code that will make sure things don't scan the table in exact numeric order. Ideally I'd like to not have to change the procedure, as the test team want to see the exact same procedure breaking (something as easy as plonking an order by id desc on the end would fudge the results).
If you don't specify an order, there is no way to guarantee the return order of the results. It will be however the index is built - and can depend on the order of insertion, the type of index, and the content of index keys.
It's generally a bad idea to do those sorts of singleton SELECTs. You should always specify a specific record with the WHERE clause, or use a cursor, or TOP n, or similar. The problem comes when someone tries to understand your code, because some databases, when they see multiple hits, take the first value, some take the last value, some take a random value (they call that "implementation-defined"), and some throw an error.
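For completeness, a deterministic version of the assignment from the question might look like this (a sketch; the table name is hypothetical, and the status filter is the one the original code was missing):
-- pick exactly one row instead of relying on scan order
select @p_id = max(ID)
from the_table
where status = 'O'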
Is this by any chance related to 1156837? :)
Again, MSDN does not really explain in plain English the exact difference between the two, or when to choose one over the other.
CHECKSUM
Returns the checksum value computed over a row of a table, or over a list of expressions. CHECKSUM is intended for use in building hash indexes.
BINARY_CHECKSUM
Returns the binary checksum value computed over a row of a table or over a list of expressions. BINARY_CHECKSUM can be used to detect changes to a row of a table.
It does hint that binary checksum should be used to detect row changes, but not why.
Check out the following blog post that highlights the differences.
http://decipherinfosys.wordpress.com/2007/05/18/checksum-functions-in-sql-server-2005/
Adding info from this link:
The key intent of the CHECKSUM functions is to build a hash index based on an expression or a column list. If, say, you use one to compute and store a column at the table level to denote the checksum over the columns that make a record unique in a table, then this can be helpful in determining whether a row has changed or not. This mechanism can then be used instead of joining on all the columns that make the record unique to see whether the record has been updated or not. SQL Server Books Online has a lot of examples of this piece of functionality.
A couple of things to watch out for when using these functions:
You need to make sure that the column(s) or expression order is the same between the two checksums that are being compared, or else the values will differ and lead to issues (see the sketch after this list).
We would not recommend using checksum(*) since the value that will get generated that way will be based on the column order of the table definition at run time which can easily change over a period of time. So, explicitly define the column listing.
Be careful when you include datetime data-type columns, since the granularity is 1/300th of a second and even a small variation will result in a different checksum value. So, if you have to use a datetime data-type column, then make sure that you get the exact date + hour/min, i.e. the level of granularity that you want.
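To illustrate the first point about column order, a quick sketch:
SELECT CHECKSUM('a', 'b')  -- one value
SELECT CHECKSUM('b', 'a')  -- a different value: argument order matters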
There are three checksum functions available to you:
CHECKSUM: This was described above.
CHECKSUM_AGG: This returns the checksum of the values in a group and Null values are ignored in this case. This also works with the new analytic function’s OVER clause in SQL Server 2005.
BINARY_CHECKSUM: As the name states, this returns the binary checksum value computed over a row or a list of expressions. The difference between CHECKSUM and BINARY_CHECKSUM is in the value generated for the string data-types. An example of such a difference is the values generated for “DECIPHER” and “decipher” will be different in the case of a BINARY_CHECKSUM but will be the same for the CHECKSUM function (assuming that we have a case insensitive installation of the instance).
Another difference is in the comparison of expressions. BINARY_CHECKSUM() returns the same value if the elements of two expressions have the same type and byte representation. So "2Volvo Director 20" and "3Volvo Director 30" will yield the same value; the CHECKSUM() function, however, evaluates the type as well as comparing the two strings, and only if they are equal is the same value returned.
Example:
STRING BINARY_CHECKSUM_USAGE CHECKSUM_USAGE
------------------- ---------------------- -----------
2Volvo Director 20 -1356512636 -341465450
3Volvo Director 30 -1356512636 -341453853
4Volvo Director 40 -1356512636 -341455363
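The case-sensitivity difference described above can be checked the same way (a sketch, assuming a case-insensitive instance):
SELECT BINARY_CHECKSUM('DECIPHER'), BINARY_CHECKSUM('decipher')  -- two different values
SELECT CHECKSUM('DECIPHER'), CHECKSUM('decipher')                -- the same value twice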
HASHBYTES with MD5 is 5 times slower than CHECKSUM, I've tested this on a table with over 1 million rows, and ran each test 5 times to get an average.
Interestingly CHECKSUM takes exactly the same time as BINARY_CHECKSUM.
Here is my post with the full results published:
http://networkprogramming.wordpress.com/2011/01/14/binary_checksum-vs-hashbytes-in-sql/
I've found that checksum collisions (i.e. two different values returning the same checksum) are more common than most people seem to think. We have a table of currencies, using the ISO currency code as the PK. And in a table of less than 200 rows, there are three pairs of currency codes that return the same Binary_Checksum():
"ETB" and "EUR" (Ethiopian Birr and Euro) both return 16386.
"LTL" and "MDL" (Lithuanian Litas and Moldovan leu) both return 18700.
"TJS" and "UZS" (Somoni and Uzbekistan Som) both return 20723.
The same happens with ISO culture codes: "de" and "eu" (German and Basque) both return 1573.
Changing Binary_Checksum() to Checksum() fixes the problem in these cases...but in other cases it may not help. So my advice is to test thoroughly before relying too heavily on the uniqueness of these functions.
Be careful when using CHECKSUM - you may get unexpected results. The following statements produce the same checksum value:
SELECT CHECKSUM (N'这么便宜怎么办?廉价iPhone售价再曝光', 5, 4102)
SELECT CHECKSUM (N'PlayStation Now – Sony startet Spiele-Streaming im Sommer 2014', 238, 13096)
It's easy to get collisions from CHECKSUM(). HASHBYTES() was added in SQL 2005 to enhance SQL Server's system hash functionality, so I suggest you also look into it as an alternative.
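A sketch of the HASHBYTES() alternative, reusing the colliding strings from the example above (MD5 chosen because it is available from SQL Server 2005 onwards):
SELECT HASHBYTES('MD5', '2Volvo Director 20')  -- differs from...
SELECT HASHBYTES('MD5', '3Volvo Director 30')  -- ...this, unlike their BINARY_CHECKSUMs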
You can get the checksum value through this query:
SELECT
checksum(definition) as Checksum_Value,
definition
FROM sys.sql_modules
WHERE object_id = object_id('RefCRMCustomer_GetCustomerAdditionModificationDetail');
Replace the procedure name in the brackets with your own.
I have a bunch of records in several tables in a database that have a "process number" field. That's basically a number, but I have to store it as a string, both because of some legacy data that has stuff like "89a" as a number and because of a numbering system that requires process numbers to be represented as number/year.
The problem arises when I try to order the processes by number. I get stuff like:
1
10
11
12
And the other problem is when I need to add a new process. The new process' number should be the biggest existing number incremented by one, and for that I would need a way to order the existing records by number.
Any suggestions?
Maybe this will help.
Essentially:
SELECT process_order FROM your_table ORDER BY process_order + 0 ASC
Can you store the numbers as zero padded values? That is, 01, 10, 11, 12?
I would suggest creating a new numeric field used only for ordering and updating it from a trigger.
Can you split the data into two fields?
Store the 'process number' as an int and the 'process subtype' as a string.
That way:
• you can easily get the MAX processNumber and increment it when you need to generate a new number
• you can ORDER BY processNumber ASC, processSubtype ASC to get the correct order, even if multiple records have the same base number with different years/letters appended
• when you need the 'full' number, you can just concatenate the two fields
Would that do what you need?
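A minimal sketch of that split (all names hypothetical):
CREATE TABLE processes (
    processNumber  int         NOT NULL,
    processSubtype varchar(10) NOT NULL DEFAULT ''
);
-- next available number:
SELECT MAX(processNumber) + 1 FROM processes;
-- correctly ordered listing:
SELECT * FROM processes ORDER BY processNumber ASC, processSubtype ASC;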
Given that your process numbers don't seem to follow any fixed patterns (from your question and comments), can you construct/maintain a process number table that has two fields:
create table process_ordering ( processNumber varchar(N), processOrder int )
Then select all the process numbers from your tables and insert into the process number table. Set the ordering however you want based on the (varying) process number formats. Join on this table, order by processOrder and select all fields from the other table. Index this table on processNumber to make the join fast.
select my_processes.*
from my_processes
inner join process_ordering on my_processes.processNumber = process_ordering.processNumber
order by process_ordering.processOrder
It seems to me that you have two tasks here.
• Convert the strings to numbers by legacy format / strip off the junk
• Order the numbers
If you have a practical way of introducing string-parsing regular expressions into your process (and your issue has enough volume to be worth the effort), then I'd
• Create a reference table such as
CREATE TABLE tblLegacyFormatRegularExpressionMaster(
LegacyFormatId int,
LegacyFormatName varchar(50),
RegularExpression varchar(max)
)
• Then, with a way of invoking the regular expressions, such as the CLR integration in SQL Server 2005 and above (the .NET Common Language Runtime integration that allows calls to compiled .NET methods from within SQL Server as ordinary, Microsoft-extended T-SQL), you should be able to solve your problem.
• See
http://www.codeproject.com/KB/string/SqlRegEx.aspx
I apologize if this is way too much overhead for your problem at hand.
Suggestion:
• Make your column a fixed width text (i.e. CHAR rather than VARCHAR).
• Pad the existing values with enough leading zeros to fill each column and a trailing space(s) where the values do not end in 'a' (or whatever).
• Add a CHECK constraint (or equivalent) to ensure new values conform to the pattern e.g. something like
CHECK (process_number LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][ab ]')
• In your insert/update stored procedures (or equivalent), pad any incoming values to fit the pattern (see the sketch below).
• Remove the leading/trailing zeros/spaces as appropriate when displaying the values to humans.
Another advantage of this approach is that the incoming values '1', '01', '001', etc would all be considered to be the same value and could be covered by a simple unique constraint in the DBMS.
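For example, the padding step might look like this (a MySQL-flavoured sketch, matching the six-digit CHECK pattern above):
SELECT CONCAT(LPAD('89', 6, '0'), ' ') AS padded_plain,   -- '000089 '
       CONCAT(LPAD('89', 6, '0'), 'a') AS padded_letter;  -- '000089a'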
BTW I like the idea of splitting the trailing 'a' (or whatever) into a separate column; however, I got the impression the data element in question is an identifier, and therefore it would not be appropriate to split it.
You need to cast your field as you're selecting. I'm basing this syntax on MySQL - but the idea's the same:
select * from table order by cast(field AS UNSIGNED);
Of course UNSIGNED could be SIGNED if required.