I'm trying to create a trigger for my database table so that users can only enter a postcode that is 6-8 characters long. However, this doesn't seem to work even though the trigger doesn't show any errors.
Here is the code:
create or replace trigger loc_postcode
before insert or update of postcode
on location
for each row
begin
if ( LENGTH(:new.postcode) > 8) or ( LENGTH(:new.postcode) < 6)
then raise_application_error(0001,
'The postcode must be between 6 and 8 characters long');
end if;
end;
and the error:
ORA-04098: trigger 'C3392387.LOC_ID' is invalid and failed re-validation
As others have mentioned the trouble with the previous version of your trigger was that you were comparing a string to a number when you needed to compare the length of the string to a value. I won't go into any further details on this.
The reason for your current error is that you're not using a valid error code for a user-defined error. Per the documentation the RAISE_APPLICATION_ERROR procedure takes error codes in the range -20000 to -20999. Change the error code to -20001 and the trigger will work.
I'm a little surprised that you were getting the error that you are. I would have expected you to get "ORA-21000: error number argument to raise_application_error of 1 is out of range", as can be demonstrated in this SQL Fiddle. It's possibly because you have a slightly dodgy character after your final semi-colon. It displays as a space in hex when I look at it in a text editor, but judging by how it appears when I copy it into SQL Fiddle it might not be. It's also possible it's an artefact of Stack Exchange's rendering engine.
Incidentally, 0001 is not a valid Oracle error code; 00001 is a unique constraint violation and would be declared as -00001 (note the minus sign).
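Putting that together, a minimal sketch of the corrected trigger (same table and column names as the question, with the error code moved into the valid -20000 to -20999 range) would be:

```sql
create or replace trigger loc_postcode
before insert or update of postcode
on location
for each row
begin
  if length(:new.postcode) > 8 or length(:new.postcode) < 6
  then
    raise_application_error(-20001,
      'The postcode must be between 6 and 8 characters long');
  end if;
end;
/
```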
However, this is not how I would go about doing this. Triggers incur additional overhead when used and obfuscate constraints that could be declared in the database. There's also always the danger of having cascading triggers, which can make your data-model extremely complex.
The simpler method of doing this would be to declare your POSTCODE column to be at most 8 characters/bytes (up to you) and to add a check constraint on the column to ensure that the length of the postcode is 6 characters (or bytes) or greater. This embeds the logic you need in the structure of the table (and thus in Oracle's metadata), making it a lot easier to see what's going on.
If you were to declare your table DDL as something like the below (obviously massively simplified):
create table location (
id number
, postcode varchar2(8)
, constraint pk_location primary key (id)
, constraint ck_location_postcode check (length(postcode) between 6 and 8)
)
Then you can achieve the same result (working SQL Fiddle). Note that the maximum length of the column POSTCODE is 8, which takes care of the upper bound, and there's a further check constraint limiting it. I've defined the check constraint to cover both the upper and lower bounds so that you can tell in the future that you intended 8 to be the upper bound; a change to the size of the column will not, therefore, break your constraint. It's a safety feature, nothing more, and the constraint could be declared as follows without changing the functionality:
, constraint ck_location_postcode check (length(postcode) >= 6)
Presumably, you want the length not the value:
create or replace trigger loc_id
before insert or update of postcode
on location
for each row
begin
if (length(:new.postcode) > 8) or (length(:new.postcode) < 6)
then raise_application_error(-20001,
'The postcode must be between 6 and 8 characters long');
end if;
end;
Your code doesn't generate an error because Oracle allows you to attempt to compare strings and numbers. The failure occurs when the string is not in a numeric format.
It looks like POSTCODE is a string. Since you are trying to check its length, you need to use the LENGTH function.
if ( LENGTH(:new.postcode) > 8) or ( LENGTH(:new.postcode) < 6)
or personally I would prefer:
if NOT LENGTH(:new.postcode) BETWEEN 6 AND 8 THEN
In your version, you are trying to compare the actual value of POSTCODE to the number 6 and 8, which results in an error when the string value can't be converted to a number.
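You can see the same implicit conversion fail outside the trigger; a quick sketch (any non-numeric string will do):

```sql
-- Oracle implicitly applies TO_NUMBER to the string side
-- of a string-to-number comparison:
select * from dual where 'AB1 2CD' > 8;
-- ORA-01722: invalid number
```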
I implemented a data masking policy on two view columns, First_Name and Last_Name, in the Customer table, with sha2(val) based on the current role. E.g.:
alter view .<SCHEMA_NAME>.<TABLE_NAME> modify
column <COLUMN_NAME> set masking policy public.pii_allowed;
When executing the view definition by concatenating both columns it runs fine, but it gives an error with the view.
That is: "String 689z3z73z8z32zz46z24zz916z15zzz6z4z45z26zz887zzz98765432zz2312z5 yy3y9y24y61yy0y910y63y6yy384y277y670y283746y2y2y960y25y6y85yy03 is too long and would be truncated in 'CONCAT'". The result value length is 129 including the space.
I tried writing a CASE statement to avoid printing the value, e.g. case when length(First_name||''||Last_name) > 64 then First_Name else length(First_name||''||Last_name) end Name. But it still gives the above error with complex views.
Please suggest how to resolve this error.
CREATE TABLE [sql_table1] ([c0] varbinary(25) NOT NULL primary key)
go
insert into sql_table1 values (0x3200),(0x32);
go
I get
Cannot insert duplicate key in object 'dbo.sql_table'. The duplicate
key value is (0x32).
Why? 0x32 does not equal 0x3200
It gets right padded. BINARY data gets tricky when you try to specify what should normally be equivalent numerically hex values. If you try this it will work:
insert into sql_table1 values (0x32),(CAST(50 as VARBINARY(25)));
-- inserts 0x32
-- and 0x00000032
But these are numerically equivalent. Generally speaking, it's a bad idea to have a BINARY column of any sort be a primary key, or to put a unique index on it (more so than a CHAR/VARCHAR/NVARCHAR column). Any application that inserts into it is almost certainly going to be CASTing from some native format/representation to binary, but there's no guarantee that that CAST actually works in a unique manner in either direction: did the application insert the value 50 (= 0x32), did it try to insert the literal 0x32, did it try to insert the ASCII value of '2' (= 0x32), or did it insert the first byte(s) of something else? If one app submits 0x32 and another 0x0032, are they the same or different (SQL Server says different - see below)?
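You can check the trailing-padding behaviour directly; a sketch assuming a SQL Server session:

```sql
-- SQL Server right-pads the shorter binary operand with 0x00
-- before comparing, so these two values compare as equal:
SELECT CASE WHEN 0x32 = 0x3200 THEN 'equal' ELSE 'not equal' END;
-- returns 'equal', which is why the unique key sees
-- 0x32 and 0x3200 as duplicates
```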
The bottom line is that SQL Server is going to behave unusually if you try to compare binary columns flat out without some context. What will work is comparing binary data using BINARY_CHECKSUM
SELECT BINARY_CHECKSUM(0x32) -- 50
SELECT BINARY_CHECKSUM(0x320) -- 16! -- it's treating this as having a different number or ordering of bytes
SELECT BINARY_CHECKSUM(0x3200) -- 50
SELECT BINARY_CHECKSUM(0x32000) -- 16
SELECT BINARY_CHECKSUM(0x0032) -- 50
SELECT BINARY_CHECKSUM(0x00032) -- 50
SELECT BINARY_CHECKSUM(0x000000000032) -- 50
but again, this only helps you see that the hexadecimal representation of the binary data isn't going to work exactly the way it would seem. The point is, your primary key is going to be based on the BINARY_CHECKSUMs of the data instead of any particular format/representation of the data. Normally that's a good thing, but with binary data (and padding) it becomes a lot trickier. Even then, in my example above the BINARY_CHECKSUM of both rows will be exactly the same (SELECT BINARY_CHECKSUM(c0) FROM sql_table1 will output 50 for both rows). Weird - a little further testing shows that any different number of leading 0s that fits into the column length will bypass the unique check even though the checksum is the same (e.g. VALUES (0x32), (0x032), (0x0032) etc.).
This only gets worse if you start throwing different versions of SQL Server into the mix (per MSDN documentation).
What you should do for PK/Unique design on a table is figure out what context will make sense of this data - an order number, a file reference, a timestamp, a device ID, some other business or logical identifier, etc.... If nothing else, pseudokey it with an IDENTITY column.
I need to store the id of a person in the database, but the id should contain one alpha character at the beginning. To do that, I set the default value for the id column like this:
create table alphanumeric (id int default ('f'||nextval('seq_test'))::int) ;
So the table was created with the default
default (('f'::text || nextval('seq_test'::regclass)))::integer
After creating the table, inserting values shows this error:
INSERT INTO alpha VALUES (default) ;
ERROR: invalid input syntax for integer: "f50"
I understand the error, but I need this type of storage.
Note: I don't want to use functions or triggers.
Just to add a couple more cents to @muistooshort's answer. If you are certain the IDs you want will always conform to a certain regular expression, you can enforce that with a CHECK constraint:
CREATE TABLE alphanumeric (
id VARCHAR DEFAULT ('f' || nextval('seq_test')) PRIMARY KEY,
...
CHECK (id ~ '^[A-Za-z][0-9]+$')
);
Of course, I'm making a gross assumption about the nature of your identifiers, you will have to apply your own judgement about whether or not your identifiers constitute a regular language.
Secondly, the sort order @muistooshort is talking about is sometimes (confusingly) called 'natural sort' and you can get a PostgreSQL function to assist with this.
You want to use a string for your ids so use a text column for your id:
create table alphanumeric (
id text default ('f' || nextval('seq_test'))
)
If you only use seq_test for that column then you probably want it to be owned by that column:
alter sequence seq_test owned by alphanumeric.id
That way the sequence will be dropped if you drop the table and you won't have an unused sequence cluttering up your database.
One thing you might want to note about this id scheme is that they won't sort the way a human would sort them; 'f100' < 'f2', for example, will be true and that might have side effects that you'll need to work around.
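A quick sketch of the problem and one possible workaround (PostgreSQL; the column name follows the example above, and the workaround assumes the prefix is always a single letter):

```sql
-- Text comparison is character by character, so 'f100' sorts before 'f2':
SELECT 'f100' < 'f2';   -- true

-- To order numerically, strip the prefix and cast the rest:
SELECT id FROM alphanumeric
ORDER BY (substring(id from 2))::int;
```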
How are NULL and empty varchar values stored in SQL Server? And in case I have no user entry for a string field on my UI, should I store a NULL or a ''?
There's a nice article here which discusses this point. Key things to take away are that there is no difference in table size; however, some users prefer to use an empty string as it can make queries easier: there is no NULL check to do, you just check if the string is empty. Another thing to note is what NULL means in the context of a relational database. It means that the pointer to the character field is set to 0x00 in the row's header, so there is no data to access.
Update
There's a detailed article here which talks about what is actually happening on a row basis
Each row has a null bitmap for columns that allow nulls. If the row in
that column is null then a bit in the bitmap is 1 else it's 0.
For variable size datatypes the actual size is 0 bytes.
For fixed size datatypes the actual size is the default datatype size in
bytes, set to the default value (0 for numbers, '' for chars).
The result of DBCC PAGE shows that both NULL and empty strings take up zero bytes.
Be careful with NULLs and checking for inequality in SQL Server.
For example
select * from foo where bla <> 'something'
will NOT return records where bla is null. Even though logically it should.
So the right way to check would be
select * from foo where isnull(bla,'') <> 'something'
Which of course people often forget and then get weird bugs.
The conceptual differences between NULL and "empty-string" are real and very important in database design, but often misunderstood and improperly applied - here's a short description of the two:
NULL - means that we do NOT know what the value is, it may exist, but it may not exist, we just don't know.
Empty-String - means we know what the value is and that it is nothing.
Here's a simple example:
Suppose you have a table with people's names including separate columns for first_name, middle_name, and last_name. In the scenario where first_name = 'John', last_name = 'Doe', and middle_name IS NULL, it means that we do not know what the middle name is, or if it even exists. Change that scenario such that middle_name = '' (i.e. empty-string), and it now means that we know that there is no middle name.
I once heard a SQL Server instructor promote making every character type column in a database required, and then assigning a DEFAULT VALUE to each of either '' (empty-string), or 'unknown'. In stating this, the instructor demonstrated he did not have a clear understanding of the difference between NULLs and empty-strings. Admittedly, the differences can seem confusing, but for me the above example helps to clarify the difference. Also, it is important to understand the difference when writing SQL code, and properly handle for NULLs as well as empty-strings.
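The distinction also matters when querying; a sketch against a hypothetical people table like the one described above:

```sql
-- Rows where we know there is no middle name:
SELECT first_name, last_name FROM people WHERE middle_name = '';

-- Rows where the middle name is unknown. Note that
-- middle_name = NULL is never true; NULL requires IS NULL:
SELECT first_name, last_name FROM people WHERE middle_name IS NULL;
```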
An empty string is a string with zero length or no character.
Null is absence of data.
NULL values are stored separately in a special bitmap space for all the columns.
If you do not distinguish between NULL and '' in your application, then I would recommend you to store '' in your tables (unless the string column is a foreign key, in which case it would probably be better to prohibit the column from storing empty strings and allow the NULLs, if that is compatible with the logic of your application).
NULL is a non-value, like undefined. '' is an empty string with 0 characters.
The value of a string in the database depends on the value from your UI, but generally it's an empty string ('') if you specify the parameter in your query or stored procedure.
If it's not a foreign key field, not using empty strings could save you some trouble. Only allow NULLs if you'll take NULL to mean something different from an empty string. For example, if you have a password field, a NULL value could indicate that a new user has not created his password yet, while an empty varchar could indicate a blank password. For a field like "address2", allowing NULLs can only make life difficult. Things to watch out for include null references and the unexpected results of the = and <> operators mentioned by Vagif Verdi, and watching out for these things is often unnecessary programmer overhead.
Edit: if performance is an issue, see this related question: Nullable vs. non-null varchar data types - which is faster for queries?
In terms of having something tell you, whether a value in a VARCHAR column has something or nothing, I've written a function which I use to decide for me.
CREATE FUNCTION [dbo].[ISNULLEMPTY](@X VARCHAR(MAX))
RETURNS BIT AS
BEGIN
  DECLARE @result AS BIT
  IF @X IS NOT NULL AND LEN(@X) > 0
    SET @result = 0
  ELSE
    SET @result = 1
  RETURN @result
END
Now there is no doubt.
How are the "NULL" and "empty varchar" values stored in SQL Server?
Why would you want to know that? Or in other words, if you knew the answer, how would you use that information?
And in case I have no user entry for a string field on my UI, should I store a NULL or a ''?
It depends on the nature of your field. Ask yourself whether the empty string is a valid value for your field.
If it is (for example, house name in an address) then that might be what you want to store (depending on whether or not you know that the address has no house name).
If it's not (for example, a person's name), then you should store a null, because people don't have blank names (in any culture, so far as I know).
I have a bunch of records in several tables in a database that have a "process number" field, that's basically a number, but I have to store it as a string both because of some legacy data that has stuff like "89a" as a number and some numbering system that requires that process numbers be represented as number/year.
The problem arises when I try to order the processes by number. I get stuff like:
1
10
11
12
And the other problem is when I need to add a new process. The new process' number should be the biggest existing number incremented by one, and for that I would need a way to order the existing records by number.
Any suggestions?
Maybe this will help.
Essentially:
SELECT process_order FROM your_table ORDER BY process_order + 0 ASC
Can you store the numbers as zero padded values? That is, 01, 10, 11, 12?
I would suggest to create a new numeric field used only for ordering and update it from a trigger.
Can you split the data into two fields?
Store the 'process number' as an int and the 'process subtype' as a string.
That way:
• you can easily get the MAX processNumber and increment it when you need to generate a new number
• you can ORDER BY processNumber ASC, processSubtype ASC to get the correct order, even if multiple records have the same base number with different years/letters appended
• when you need the 'full' number you can just concatenate the two fields
Would that do what you need?
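A minimal sketch of that split (the table and column names are made up):

```sql
-- Hypothetical two-field layout for the legacy "process number":
CREATE TABLE process (
  processNumber  int,
  processSubtype varchar(10)  -- the 'a' in '89a', the year in '12/2008', etc.
);

-- Generating the next number:
SELECT MAX(processNumber) + 1 FROM process;

-- Correct ordering:
SELECT processNumber, processSubtype
FROM process
ORDER BY processNumber ASC, processSubtype ASC;
```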
Given that your process numbers don't seem to follow any fixed patterns (from your question and comments), can you construct/maintain a process number table that has two fields:
create table process_ordering ( processNumber varchar(N), processOrder int )
Then select all the process numbers from your tables and insert into the process number table. Set the ordering however you want based on the (varying) process number formats. Join on this table, order by processOrder and select all fields from the other table. Index this table on processNumber to make the join fast.
select my_processes.*
from my_processes
inner join process_ordering on my_processes.processNumber = process_ordering.processNumber
order by process_ordering.processOrder
It seems to me that you have two tasks here.
• Convert the strings to numbers by legacy format / strip off the junk
• Order the numbers
If you have a practical way of introducing string-parsing regular expressions into your process (and your issue has enough volume to be worth the effort), then I'd
• Create a reference table such as
CREATE TABLE tblLegacyFormatRegularExpressionMaster(
LegacyFormatId int,
LegacyFormatName varchar(50),
RegularExpression varchar(max)
)
• Then, with a way of invoking the regular expressions, such as the CLR integration in SQL Server 2005 and above (the .NET Common Language Runtime integration, which allows calls to compiled .NET methods from within SQL Server as ordinary (Microsoft-extended) T-SQL), you should be able to solve your problem.
• See
http://www.codeproject.com/KB/string/SqlRegEx.aspx
I apologize if this is way too much overhead for your problem at hand.
Suggestion:
• Make your column a fixed width text (i.e. CHAR rather than VARCHAR).
• Pad the existing values with enough leading zeros to fill each column and a trailing space(s) where the values do not end in 'a' (or whatever).
• Add a CHECK constraint (or equivalent) to ensure new values conform to the pattern e.g. something like
CHECK (process_number LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][ab ]')
• In your insert/update stored procedures (or equivalent), pad any incoming values to fit the pattern.
• Remove the leading/trailing zeros/spaces as appropriate when displaying the values to humans.
Another advantage of this approach is that the incoming values '1', '01', '001', etc would all be considered to be the same value and could be covered by a simple unique constraint in the DBMS.
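The padding step might be sketched like this (T-SQL; the width of 6 matches the CHECK pattern above):

```sql
-- Zero-pad the numeric part to six digits; values without a trailing
-- letter get a trailing space, so '1', '01' and '001' all end up as
-- the same stored value:
SELECT RIGHT(REPLICATE('0', 6) + '89', 6) + 'a';   -- '000089a'
SELECT RIGHT(REPLICATE('0', 6) + '1', 6) + ' ';    -- '000001 '
```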
BTW I like the idea of splitting the trailing 'a' (or whatever) into a separate column; however, I got the impression the data element in question is an identifier, in which case it would not be appropriate to split it.
You need to cast your field as you're selecting. I'm basing this syntax on MySQL - but the idea's the same:
select * from table order by cast(field AS UNSIGNED);
Of course UNSIGNED could be SIGNED if required.