How to store an alphanumeric value in an integer type in Postgres?

I need to store a person's ID in the database, but the ID should begin with a single letter. To get that, I set the default value for the id column like this:
create table alphanumeric (id int default ('f'||nextval('seq_test'))::int) ;
The table was created with this default:
default (('f'::text || nextval('seq_test'::regclass)))::integer
After creating the table, inserting a row shows this error:
INSERT INTO alpha VALUES (default) ;
ERROR: invalid input syntax for integer: "f50"
I understand the error, but I still need to store IDs in this form.
Note: I don't want to use functions or triggers.

Just to add a couple more cents to #muistooshort's answer. If you are certain the IDs you want will always conform to a certain regular expression, you can enforce that with a CHECK constraint:
CREATE TABLE alphanumeric (
id VARCHAR DEFAULT ('f' || nextval('seq_test')) PRIMARY KEY,
...
CHECK (id ~ '^[A-Za-z][0-9]+')
);
Of course, I'm making a gross assumption about the nature of your identifiers; you will have to apply your own judgement about whether or not they constitute a regular language.
Secondly, the sort order #muistooshort is talking about is sometimes (confusingly) called 'natural sort' and you can get a PostgreSQL function to assist with this.
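For example, with that table in place the constraint behaves roughly like this (a quick sketch; seq_test is the sequence from the question and the '...' columns are ignored):
INSERT INTO alphanumeric (id) VALUES (DEFAULT);  -- ok: generates something like 'f51'
INSERT INTO alphanumeric (id) VALUES ('x123');   -- ok: still matches the pattern
INSERT INTO alphanumeric (id) VALUES ('123');    -- rejected: violates the CHECK, no leading letter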

You want to use a string for your IDs, so use a text column for your id:
create table alphanumeric (
id text default ('f' || nextval('seq_test'))
)
If you're only using seq_test for that column, then you probably want it to be owned by that column:
alter sequence seq_test owned by alphanumeric.id
That way the sequence will be dropped if you drop the table and you won't have an unused sequence cluttering up your database.
One thing you might want to note about this id scheme is that they won't sort the way a human would sort them; 'f100' < 'f2', for example, will be true and that might have side effects that you'll need to work around.
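If that ordering ever matters, one workaround is to sort on the numeric part of the id. A minimal sketch, assuming every id is a single letter followed by digits (which is what the default above produces):
SELECT id
FROM alphanumeric
ORDER BY substring(id from 2)::int;  -- drops the leading letter, so 'f2' sorts before 'f100'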

Related

Adding all table fields into the SE11 structure?

I'm learning SAP and the ABAP language. I need to create a structure with all the fields of the SFLIGHT database table and a few more. Do I have to enter all the fields of the SFLIGHT table manually, or is there a way to add all the fields of a given table at once?
I have to create a DDIC structure like that:
Do I have to fill in these component names manually?
The code solution is given by Jonas, but if you are asking about the SE11 way, then:
Edit => scroll down to Include => click on Insert
https://techazmaan.com/ddic-include-structure/
With the TYPES ... LINE OF statement one can declare a type which represents a structure for one line of a table:
TYPES flight TYPE LINE OF sflight.
" the structure type can then be used in the program
DATA(scheduled_flight) = VALUE flight(
" ...
).
INSERT sflight FROM scheduled_flight. " insert the work area into the database table
Thus usually there is no need to declare such a structure in the dictionary, as it already exists implicitly through the table creation.

How to solve the performance difference while querying for records when numeric data is stored in varchar columns?

I am querying my MSSQL database with the JPA Querydsl library (com.querydsl.jpa.impl.JPAQuery) and found a performance problem while running the query. I am using the Java API to execute the Querydsl predicate.
My table has a column called point_id whose type is Varchar(20) and is used to store numeric values, i.e. numbers stored as strings.
When I run the query (which is also what Querydsl generates)
select
testperfor0_.serv_code as hm_serv_8_5_,
testperfor0_.version as version9_5_
from
TestPerformanceObject testperfor0_
where
(
testperfor0_.point_id in (
1, 2
)
);
the performance is much worse than with the query
select
testperfor0_.serv_code as hm_serv_8_5_,
testperfor0_.version as version9_5_
from
TestPerformanceObject testperfor0_
where
(
testperfor0_.point_id in (
'1', '2'
)
);
The difference between the second query and the one generated by the DSL is that the values are provided in single quotes. This means there is a conversion (to_char()) while performing the first query, and this performance problem is also discussed here.
Is there any solution for this ?
Edit: The column is of type Varchar(20) because it can also hold non-numeric values.
Your real problem actually begins with your topic sentence:
My table has a column called point_id whose type is Varchar(20) and is used to store numeric values ie number values as string.
If you are trying to store numeric values, then you should be using some kind of number column, not a varchar or other text column.
That being said, the performance difference appears to be due to an implicit conversion which is happening with this version of your query:
where testperfor0_.point_id in (1, 2)
If you must stick with your current data model, then you should be comparing point_id against text values. So, from your JPA code make sure that you are binding Java strings to the IN clause.
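For illustration, this is roughly what the two shapes look like at the SQL level (table and column names are taken from the question's query; the actual plans depend on your indexes and statistics):
-- With numeric literals, SQL Server implicitly converts the column side,
-- effectively CONVERT(int, point_id) IN (1, 2); that blocks an index seek on
-- point_id and can even raise conversion errors for the non-numeric rows.
SELECT serv_code, version
FROM TestPerformanceObject
WHERE point_id IN (1, 2);

-- With string literals (or bound Java strings), the comparison stays on the
-- varchar column, so an index on point_id can be used.
SELECT serv_code, version
FROM TestPerformanceObject
WHERE point_id IN ('1', '2');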

data type character varying has no default operator class for access method "gist"

I tried to execute the following command (Postgresql):
ALTER TABLE authentication ADD CONSTRAINT overlapping_times EXCLUDE USING GIST
(method with =,
authenticator with =,
box(point(extract(epoch FROM validfrom at time zone 'UTC'),extract(epoch FROM validfrom at time zone 'UTC') ),
point(extract(epoch FROM validuntil at time zone 'UTC'), extract(epoch FROM validuntil at time zone 'UTC'))) WITH &&
)
and I got the following error message:
ERROR: data type character varying has no default operator class for access method "gist"
HINT: You must specify an operator class for the index or define a default operator class for the data type.
I did quite extensive googling, but I am still unable to translate this into plain English. What should I do to execute the command above?
The type of "method" is character varying, "authenticator" is text, "validfrom", "validuntil" are dates.
For authenticator and method, use a plain unique constraint. text and varchar() are identical for indexing purposes.
This means three alter table statements instead of one, but it should save you these problems. Box should support GiST properly so you should be good there.
In plain English, the error is telling you that the data type does not support the index operations expected by the index type. Text strings can't be searched via a GiST index for whether they overlap, for example; in other words, the three columns cannot be put in the same constraint.
Additionally, keep in mind that UNIQUE constraints are faster than exclude constraints, so are preferred where they work.
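A rough sketch of those three statements (the constraint names are invented, and whether plain uniqueness of method and authenticator actually fits your data is for you to judge):
ALTER TABLE authentication ADD CONSTRAINT authentication_method_key UNIQUE (method);
ALTER TABLE authentication ADD CONSTRAINT authentication_authenticator_key UNIQUE (authenticator);
-- plus the EXCLUDE USING GIST constraint from the question, kept only over the
-- box(point(...), point(...)) expression, with the two text columns removed from it.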

Ordering numbers that are stored as strings in the database

I have a bunch of records in several tables in a database that have a "process number" field, that's basically a number, but I have to store it as a string both because of some legacy data that has stuff like "89a" as a number and some numbering system that requires that process numbers be represented as number/year.
The problem arises when I try to order the processes by number. I get stuff like:
1
10
11
12
And the other problem is when I need to add a new process. The new process' number should be the biggest existing number incremented by one, and for that I would need a way to order the existing records by number.
Any suggestions?
Maybe this will help.
Essentially:
SELECT process_order FROM your_table ORDER BY process_order + 0 ASC
Can you store the numbers as zero padded values? That is, 01, 10, 11, 12?
I would suggest creating a new numeric field used only for ordering, and updating it from a trigger.
Can you split the data into two fields?
Store the 'process number' as an int and the 'process subtype' as a string.
That way:
you can easily get the MAX processNumber and increment it when you need to generate a new number
you can ORDER BY processNumber ASC, processSubtype ASC to get the correct order, even if multiple records have the same base number with different years/letters appended
when you need the 'full' number you can just concatenate the two fields
Would that do what you need?
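A sketch of that layout (the table name and exact types are invented for illustration; the column names follow the answer above):
CREATE TABLE process (
processNumber int not null,          -- e.g. 89
processSubtype varchar(10) not null, -- e.g. 'a', '/2008', or '' when there is none
primary key (processNumber, processSubtype)
);

-- next base number
SELECT MAX(processNumber) + 1 FROM process;

-- correct ordering
SELECT processNumber, processSubtype FROM process
ORDER BY processNumber ASC, processSubtype ASC;

-- the 'full' number (standard SQL concatenation; use + or CONCAT() on SQL Server)
SELECT CAST(processNumber AS varchar(10)) || processSubtype FROM process;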
Given that your process numbers don't seem to follow any fixed patterns (from your question and comments), can you construct/maintain a process number table that has two fields:
create table process_ordering ( processNumber varchar(N), processOrder int )
Then select all the process numbers from your tables and insert into the process number table. Set the ordering however you want based on the (varying) process number formats. Join on this table, order by processOrder and select all fields from the other table. Index this table on processNumber to make the join fast.
select my_processes.*
from my_processes
inner join process_ordering on my_processes.processNumber = process_ordering.processNumber
order by process_ordering.processOrder
It seems to me that you have two tasks here.
• Convert the strings to numbers by legacy format / strip off the junk
• Order the numbers
If you have a practical way of introducing string-parsing regular expressions into your process (and your issue has enough volume to be worth the effort), then I'd
• Create a reference table such as
CREATE TABLE tblLegacyFormatRegularExpressionMaster(
LegacyFormatId int,
LegacyFormatName varchar(50),
RegularExpression varchar(max)
)
• Then, with a way of invoking the regular expressions, such as the CLR integration in SQL Server 2005 and above (the .NET Common Language Runtime integration that allows calls to compiled .NET methods from within ordinary (Microsoft-extended) T-SQL), you should be able to solve your problem.
• See
http://www.codeproject.com/KB/string/SqlRegEx.aspx
I apologize if this is way too much overhead for your problem at hand.
Suggestion:
• Make your column a fixed width text (i.e. CHAR rather than VARCHAR).
• Pad the existing values with enough leading zeros to fill the column, and a trailing space where the values do not end in 'a' (or whatever).
• Add a CHECK constraint (or equivalent) to ensure new values conform to the pattern e.g. something like
CHECK (process_number LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][ab ]')
• In your insert/update stored procedures (or equivalent), pad any incoming values to fit the pattern (see the sketch below).
• Remove the leading/trailing zeros/spaces as appropriate when displaying the values to humans.
Another advantage of this approach is that the incoming values '1', '01', '001', etc would all be considered to be the same value and could be covered by a simple unique constraint in the DBMS.
BTW I like the idea of splitting the trailing 'a' (or whatever) into a separate column, however I got the impression the data element in question is an identifier and therefore would not be appropriate to split it.
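A sketch of the padding step for the six-digits-plus-suffix pattern in the CHECK above (T-SQL, since the bracket wildcards in that LIKE are SQL Server syntax; the variable names are invented):
DECLARE @incoming varchar(10) = '89a';
DECLARE @digits varchar(10) =
    CASE WHEN @incoming LIKE '%[ab]' THEN LEFT(@incoming, LEN(@incoming) - 1) ELSE @incoming END;
DECLARE @suffix char(1) =
    CASE WHEN @incoming LIKE '%[ab]' THEN RIGHT(@incoming, 1) ELSE ' ' END;
SELECT RIGHT('000000' + @digits, 6) + @suffix AS process_number;  -- '000089a'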
You need to cast your field as you're selecting. I'm basing this syntax on MySQL - but the idea's the same:
select * from table order by cast(field AS UNSIGNED);
Of course UNSIGNED could be SIGNED if required.

Which database systems support an ENUM data type, which don't?

Following up this question: "Database enums - pros and cons", I'd like to know which database systems support enumeration data types, and a bit of detail on how they do it (e.g. what is stored internally, what are the limits, query syntax implications, indexing implications, ...).
Discussion of use cases or the pros and cons should take place in the other questions.
I know that MySQL does support ENUM:
the data type is implemented as integer value with associated strings
you can have a maximum of 65,535 elements for a single enumeration
each string has a numerical equivalent, counting from 1, in the order of definition
the numerical value of the field is accessible via "SELECT enum_col+0"
in non-strict SQL mode, assigning not-in-list values does not necessarily result in an error, but rather a special error value is assigned instead, having the numerical value 0
sorting occurs in numerical order (i.e. order of definition), not in alphabetical order of the string equivalents
assignment either works via the value string or the index number
this: ENUM('0','1','2') should be avoided, because '0' would have integer value 1
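A quick sketch of those points (the table and column names are invented):
CREATE TABLE shirts (
name varchar(40),
size ENUM('small', 'medium', 'large')  -- stored internally as 1, 2, 3
);

-- assignment works via the value string or the index number
INSERT INTO shirts (name, size) VALUES ('dress shirt', 'large'), ('polo', 2);

-- size+0 exposes the numerical equivalent
SELECT name, size, size+0 AS size_index FROM shirts;

-- sorts in definition order (small, medium, large), not alphabetically
SELECT name FROM shirts ORDER BY size;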
PostgreSQL supports ENUM from 8.3 onwards. For older versions, you can simulate an ENUM by doing something like this:
CREATE TABLE persons (
person_id int not null primary key,
favourite_colour varchar(255) NOT NULL,
CHECK (favourite_colour IN ('red', 'blue', 'yellow', 'purple'))
);
You could also have:
CREATE TABLE colours (
colour_id int not null primary key,
colour varchar(255) not null
);
CREATE TABLE persons (
person_id int not null primary key,
favourite_colour_id integer NOT NULL references colours(colour_id)
);
which requires a join when you want to know the favourite colour, but has the advantage that you can add colours simply by adding a row to the colours table, without having to change the schema each time. You could also add attributes to a colour, such as its HTML code or RGB values.
You could also create your own type which acts as an enum, but I don't think it would be any faster than the varchar and the CHECK.
Oracle doesn't support ENUM at all.
AFAIK, neither IBM DB2 nor IBM Informix Dynamic Server support ENUM types.
Unlike what mat said, PostgreSQL does support ENUM (since version 8.3, the latest release at the time of writing):
essais=> CREATE TYPE rcount AS ENUM (
essais(> 'one',
essais(> 'two',
essais(> 'three'
essais(> );
CREATE TYPE
essais=>
essais=> CREATE TABLE dummy (id SERIAL, num rcount);
NOTICE: CREATE TABLE will create implicit sequence "dummy_id_seq" for serial column "dummy.id"
CREATE TABLE
essais=> INSERT INTO dummy (num) VALUES ('one');
INSERT 0 1
essais=> INSERT INTO dummy (num) VALUES ('three');
INSERT 0 1
essais=> INSERT INTO dummy (num) VALUES ('four');
ERROR: invalid input value for enum rcount: "four"
essais=>
essais=> SELECT * FROM dummy WHERE num='three';
id | num
----+-------
2 | three
4 | three
There are functions which work specifically on enums.
Indexing works fine on enum types.
According to the manual, implementation is as follows:
An enum value occupies four bytes on disk. The length of an enum value's textual label is limited by the NAMEDATALEN setting compiled into PostgreSQL; in standard builds this means at most 63 bytes.
Enum labels are case sensitive, so 'happy' is not the same as 'HAPPY'. Spaces in the labels are significant, too.
MSSQL doesn't support ENUM.
When you use Entity Framework 5, you can use enums (look at: Enumeration Support in Entity Framework and EF5 Enum Types Walkthrough), but even then the values are stored as int in the database.
