I want to use different sequences depending on which type of number I want to add.
NumberPool

 id | name      | description      | sequence_name
----+-----------+------------------+--------------
  1 | item      | items            | seq_items
  2 | documents | stored documents | seq_docs

NumberTable

 Number_Type_id | number | more_metadata
----------------+--------+--------------
              1 | 1      | foo
              1 | 2      | bar
              1 | 3      | foobar
              2 | 1      | barfoo
Normally, I would first get the sequence for a given number type:
SELECT * FROM NumberPool
WHERE NumberPool.name LIKE 'item'
-> NumberPool.id = 1, sequence_name = 'seq_items'
INSERT INTO NumberTable (Number_Type_id, number, more_metadata)
OUTPUT Inserted.number
VALUES (1, NEXT VALUE FOR dbo.seq_items, 'foo')
Is there a way to map the numberpool.id?
INSERT INTO NumberTable (Number_Type_id, number, more_metadata)
OUTPUT Inserted.number, Inserted.more_metadata
VALUES (1, 'foo')
or numberpool.name to a sequence?
INSERT INTO NumberTable (Number_Type_id, number, more_metadata)
OUTPUT Inserted.number, Inserted.more_metadata
VALUES ('item', 'foo')
In that way I would not need to know the ids on the client side. I am unsure if this is a good idea or if my database structure needs some work.
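One way to keep the ids server-side is to resolve the name inside the database at insert time. A minimal sketch in Python/SQLite (not the asker's exact schema: SQLite has no CREATE SEQUENCE, so a per-type counter column in NumberPool stands in for the T-SQL sequence; that substitution is an assumption):

```python
import sqlite3

# Sketch of the idea: resolve NumberPool.name to its id inside the database,
# so the client only ever passes the name. SQLite stands in for SQL Server
# here and has no CREATE SEQUENCE, so a per-type counter column plays the
# role of the sequence (that substitution is an assumption).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE NumberPool (
    id            INTEGER PRIMARY KEY,
    name          TEXT UNIQUE,
    next_number   INTEGER NOT NULL DEFAULT 1   -- stands in for the sequence
);
CREATE TABLE NumberTable (
    Number_Type_id INTEGER,
    number         INTEGER,
    more_metadata  TEXT
);
INSERT INTO NumberPool (id, name) VALUES (1, 'item'), (2, 'documents');
""")

def add_number(name, metadata):
    # Look up the pool row by name, take its next number, bump the counter.
    type_id, number = conn.execute(
        "SELECT id, next_number FROM NumberPool WHERE name = ?", (name,)
    ).fetchone()
    conn.execute(
        "UPDATE NumberPool SET next_number = next_number + 1 WHERE id = ?",
        (type_id,))
    conn.execute(
        "INSERT INTO NumberTable (Number_Type_id, number, more_metadata) "
        "VALUES (?, ?, ?)", (type_id, number, metadata))
    return number

print(add_number("item", "foo"))        # 1
print(add_number("item", "bar"))        # 2
print(add_number("documents", "baz"))   # 1
```

In SQL Server the same idea could be expressed as an INSERT ... SELECT against NumberPool combined with NEXT VALUE FOR, or wrapped in a stored procedure that takes the pool name as its parameter.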
In my table I've got a column facebook where I store Facebook data (comment count, share count, etc.), and it's an array. For example:
{{total_count,14},{comment_count,0},{comment_plugin_count,0},{share_count,12},{reaction_count,2}}
Now I'm trying to SELECT rows whose facebook total_count is between 5 and 10. I've tried this:
SELECT * FROM pl where regexp_matches(array_to_string(facebook, ' '), '(\d+).*')::numeric[] BETWEEN 5 and 10;
But I'm getting an error:
ERROR: operator does not exist: numeric[] >= integer
Any ideas?
There is no need to convert the array to text and use regexp. You can access a particular element of the array, e.g.:
with pl(facebook) as (
values ('{{total_count,14},{comment_count,0},{comment_plugin_count,0},{share_count,12},{reaction_count,2}}'::text[])
)
select facebook[1][2] as total_count
from pl;
total_count
-------------
14
(1 row)
Your query may look like this:
select *
from pl
where facebook[1][2]::numeric between 5 and 10
Update. You could avoid the troubles described in the comments by using the word null instead of empty strings ('').
with pl(id, facebook) as (
values
(1, '{{total_count,14},{comment_count,0}}'::text[]),
(2, '{{total_count,null},{comment_count,null}}'::text[]),
(3, '{{total_count,7},{comment_count,10}}'::text[])
)
select *
from pl
where facebook[1][2]::numeric between 5 and 10
id | facebook
----+--------------------------------------
3 | {{total_count,7},{comment_count,10}}
(1 row)
However, it would be unfair to leave your problems without an additional comment. This case is suitable as an example for the lecture "How not to use arrays in Postgres". You have at least a few better options. The most performant and natural is to simply use regular integer columns:
create table pl (
...
facebook_total_count integer,
facebook_comment_count integer,
...
);
If for some reason you need to separate this data from others in the table, create a new secondary table with a foreign key to the main table.
If for some mysterious reason you have to store the data in a single column, use the jsonb type, example:
with pl(id, facebook) as (
values
(1, '{"total_count": 14, "comment_count": 0}'::jsonb),
(2, '{"total_count": null, "comment_count": null}'::jsonb),
(3, '{"total_count": 7, "comment_count": 10}'::jsonb)
)
select *
from pl
where (facebook->>'total_count')::integer between 5 and 10
hstore can be an alternative to jsonb.
All these ways are much easier to maintain and much more efficient than your current model. Time to move to the bright side of power.
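For what it's worth, the jsonb recommendation ports to other engines as well. A quick sketch in Python/SQLite, whose built-in json_extract() plays the role of the jsonb ->> operator (table name and data mirror the example above; this is a portability sketch, not the original answer's code):

```python
import json
import sqlite3

# The same jsonb idea in SQLite: json_extract() stands in for Postgres's
# ->> operator. Table name and data mirror the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pl (id INTEGER, facebook TEXT)")
conn.executemany("INSERT INTO pl VALUES (?, ?)", [
    (1, json.dumps({"total_count": 14, "comment_count": 0})),
    (2, json.dumps({"total_count": None, "comment_count": None})),
    (3, json.dumps({"total_count": 7, "comment_count": 10})),
])

# (facebook->>'total_count')::integer ~ json_extract(facebook, '$.total_count')
hits = conn.execute(
    "SELECT id FROM pl "
    "WHERE json_extract(facebook, '$.total_count') BETWEEN 5 AND 10"
).fetchall()
print(hits)  # [(3,)]
```

The JSON null in row 2 comes back as SQL NULL, so BETWEEN quietly excludes it, exactly as in the Postgres example.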
Found an interesting issue with quick search: if you do a single-character search, 25 letters work fine, but one doesn't: "n".
A quick search select is of the form:
SELECT * FROM some_full_text_table WHERE CONTAINS(the_full_text_field, ' "n*" ')
This should search the full text field in the table for anything beginning with “n”.
Example data rows:
“name1 foo bar us cardiac”
“aname foo2 us echo”
“some name foo3 ct”
“the requested letter”
The query should return rows 1 and 3, as those are the only ones that have words beginning with "n".
The actual result is that all rows are returned, even row 4, which has no n's at all.
I truly believe this is a SQL bug. "na*" works as expected and returns 1 and 3.
Looking for a solution or workaround.
Here is an example on SQL Server 2014:
create table TestFullTextSearch (
Id int not null,
AllText nvarchar(400)
)
create unique index test_tfts on TestFullTextSearch(Id);
create fulltext catalog ftcat_tfts;
create fulltext index on TestFullTextSearch(AllText)
key index test_tfts on ftcat_tfts
with change_tracking auto, stoplist off
go
/* No n's in the data */
insert into TestFullTextSearch values (1, 'legacyreport Report Legacy 23049823490 20150713 Cardiac US ')
insert into TestFullTextSearch values (2, '123-45-678 foo bar 19450712 20020723 Exercise Stress US ')
insert into TestFullTextSearch values (3, '2048 jj goodguy xy2000 19490328 20150721 Cardiac US ')
insert into TestFullTextSearch values (4, '12345678 4.0 ALLCALCS 19650409 20031103 Cardiac Difficult US ')
select * from TestFullTextSearch where contains(AllText, '"n*"')
/* the result of the select */
1 legacyreport Report Legacy 23049823490 20150713 Cardiac US
2 123-45-678 foo bar 19450712 20020723 Exercise Stress US
3 2048 jj goodguy xy2000 19490328 20150721 Cardiac US
4 12345678 4.0 ALLCALCS 19650409 20031103 Cardiac Difficult US
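Until the single-letter prefix behavior is fixed, one possible workaround is to fall back to a plain LIKE with a word boundary: prepending a space to the text lets '% n%' mean "some word starts with n". A sketch in Python/SQLite against the question's example rows (the assumption is that a table scan is acceptable here, since LIKE cannot use the full-text index):

```python
import sqlite3

# Fallback for the single-letter prefix: a plain LIKE with a word boundary.
# Prepending one space to the text makes '% n%' mean "some word starts
# with n". Checked against the question's example rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestFullTextSearch (Id INTEGER, AllText TEXT)")
conn.executemany("INSERT INTO TestFullTextSearch VALUES (?, ?)", [
    (1, "name1 foo bar us cardiac"),
    (2, "aname foo2 us echo"),
    (3, "some name foo3 ct"),
    (4, "the requested letter"),
])
hits = conn.execute(
    "SELECT Id FROM TestFullTextSearch WHERE ' ' || AllText LIKE '% n%'"
).fetchall()
print(hits)  # [(1,), (3,)]
```

The same predicate in T-SQL would be `' ' + AllText LIKE '% n%'`; row 2's "aname" correctly fails to match because its n is not at a word start.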
Say I have a SQL Server table with these values:
ID test
-----------------
1 '1,11,X1'
2 'Q22,11,111,51'
3 '1'
4 '5,Q22,1'
If I want to find out which rows contain the comma-separated value '1', I can do the following, and it works, but I'd like a better or less wordy way of doing so if one exists. Unfortunately I cannot use regex; \b1\b would be awesome here.
Select test
FROM ...
WHERE
test LIKE '1,%'
OR test = '1'
OR test LIKE '%,1'
OR test LIKE '%,1,%'
Something like...
WHERE
test LIKE '%[,{NULL}]1[,{NULL}]%'
I know this line isn't correct but you get what I'm after... hopefully ;)
EDITED based on comments below
You shouldn't use comma-delimited values to store lists. You should use a junction table. But, if you have to, the following logic might help:
Select test
FROM ...
WHERE ',' + test + ',' like '%,' + '1' + ',%' ;
This assumes that what you are looking for is "1" as the entire item in the list.
Note: You can/should write the like pattern as '%,1,%'. I just put it in three pieces to separate out the pattern you are looking for.
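The delimiter-wrapping trick can be checked quickly against the question's rows. A sketch in Python/SQLite ('||' is its concatenation operator, where T-SQL uses '+'):

```python
import sqlite3

# The delimiter-wrapping trick, checked against the question's rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INTEGER, test TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "1,11,X1"),
    (2, "Q22,11,111,51"),
    (3, "1"),
    (4, "5,Q22,1"),
])
# Wrap both the column and the pattern in commas so '1' only matches as a
# whole list item, never as a prefix of '11' or a suffix of '51'.
hits = conn.execute(
    "SELECT ID FROM t WHERE ',' || test || ',' LIKE '%,1,%'"
).fetchall()
print(hits)  # [(1,), (3,), (4,)]
```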
There are plenty of SplitString functions available if you Google around (many here on Stack Overflow) that take a comma-delimited string like yours and split it into multiple rows. You can CROSS APPLY that table-valued function to your query, and then just select the rows that have '1'.
For example, using this splitstring function here (just one of many):
T-SQL split string
You can write this code to get exactly what you want (note, the declare and insert are just to set up test data so you can see it in action):
DECLARE @test TABLE (ID int, test varchar(400));
INSERT INTO @test (ID, test)
VALUES (1, '1,11,X1'),
       (2, 'Q22,11,111,51'),
       (3, '1'),
       (4, '5,Q22,1');

SELECT *
FROM @test
CROSS APPLY splitstring(test)
WHERE [Name] = '1'
This query returns this:
1 1,11,X1 1
3 1 1
4 5,Q22,1 1
select *
from table
where ',' + test + ',' like '%,1,%'
You have to "normalize" your database. If you have multiple attributes in one row, it's a problem!
Add a "one to many" relation with your attributes.
You can do like that:
ID, test
1, 1
1, 11
1, X1
2, Q22
2, 11
[...]
SELECT test FROM ...
WHERE ID IN (SELECT ID FROM ... WHERE test = '1')
Your primary key is (ID, test) now.
You need something like:
SELECT test
FROM _tableName_
WHERE (test LIKE '1,%'
OR test LIKE '%,1'
OR test LIKE '%,1,%'
OR test LIKE '1')
This will return rows that match, in order:
1 starts a list
1 ends a list
1 is in the middle of a list
1 is its own list
As per the module requirement, the file name length has to be 8 characters. To implement that, the first 4 characters are DDMM, and for the remaining 4 characters I am trying to fetch random characters from the database using a function and a view. What I am using in the database is pasted below:
Function:
CREATE FUNCTION [dbo].[GenerateRandomNumbersLetters]
(
    @NumberOfCharacters TINYINT
)
RETURNS VARCHAR(32)
AS
BEGIN
    RETURN
    (
        SELECT LEFT(REPLACE([NewID], '-', ''), @NumberOfCharacters)
        FROM dbo.RetrieveNewID
    );
END
View:
CREATE VIEW [dbo].[RetrieveNewID]
AS
SELECT [NewID] = NEWID();
My query:
select
SUBSTRING(replace(convert(varchar(10), getdate(), 3), '/', ''), 1, 4) +
dbo.GenerateRandomNumbersLetters(4) as FileNamerandomNUM
Ex: 0907CCE7
For every row it provides a random number, but in one scenario recently the random value generated duplicates. How can I avoid such scenarios? Kindly advise.
There is a risk of repeating values with random numbers, especially if you take only the first four characters of a random value.
Instead, generate sequence numbers. To implement this you can create a table with the structure:
file_date | seq_no
Ex: 0907 | 1000
0907 | 1001
Then each time you want to get a file name, query against this table for the next sequence number:
select max(seq_no)+1 from <table>
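One caveat: SELECT MAX(seq_no)+1 can hand the same number to two concurrent callers. Bumping a per-date counter row with an UPDATE keeps the increment itself atomic. A sketch in Python/SQLite (the table and column names are illustrative only):

```python
import sqlite3
from datetime import date

# SELECT MAX(seq_no)+1 can give two concurrent callers the same number.
# Bumping a per-date counter row with an UPDATE keeps the increment atomic.
# Table and column names here are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE file_seq (file_date TEXT PRIMARY KEY, seq_no INTEGER)")

def next_file_name(d):
    prefix = d.strftime("%d%m")                  # DDMM, e.g. '0907'
    # Seed the counter for this date if it is missing, then bump it.
    # In production the bump and the read-back below should share one
    # transaction (or one statement) to stay safe under concurrency.
    conn.execute(
        "INSERT INTO file_seq VALUES (?, 999) "
        "ON CONFLICT(file_date) DO NOTHING", (prefix,))
    conn.execute(
        "UPDATE file_seq SET seq_no = seq_no + 1 WHERE file_date = ?",
        (prefix,))
    seq = conn.execute(
        "SELECT seq_no FROM file_seq WHERE file_date = ?",
        (prefix,)).fetchone()[0]
    return prefix + str(seq)                     # 8 chars: DDMM + 4 digits

print(next_file_name(date(2016, 7, 9)))   # 09071000
print(next_file_name(date(2016, 7, 9)))   # 09071001
```

In T-SQL the bump and read can be a single statement, e.g. `UPDATE file_seq SET @seq = seq_no = seq_no + 1 WHERE file_date = @d`.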
I am working with VBA in Excel and a database in Access. The Access database is a table that contains 3 columns: OrderID, a column of numbers saying which order the particular item was in; OrderDescription, a column that contains the description of the item; and Item #, a column that gives a number to each particular item (if the item is the same as another, they both share the same item number).
I need to build a 2-dimensional array in excel using VBA holding which items were purchased in which orders. The rows will be the Order ID and the columns will be the Item ID. The elements of this array will contain an indicator (like True or a “1”) that indicates that this order contains certain items. For example, row 6 (representing order ID 6) will have “True” in columns 1, 5, and 26 if that order purchased item IDs 1, 5, and 26. All other columns for that order will be blank.
In order to do this, I think I will have to determine the max order number (39) and the max item number (33). This information is available in the database, which I can connect to using a .Connection and .Recordset. Some order numbers and some item numbers may not appear.
Note also that this will likely be a sparse array (not many entries) as most orders contain only a few items. We do not care how many of an item a customer purchased, only that the item was purchased on this order.
MY QUESTION is how can I set up this array? I tried a loop that would assign the values of the order numbers to an array and the item numbers to an array and then dimensioning the array to those sizes, but it won't work.
is there a way to make an element of an array return a value of True if it exists?
Thanks for your help
It seems to me that the best bet may be a crosstab query run on an Access connection. You can create your array with the ADO method GetRows: http://www.w3schools.com/ado/met_rs_getrows.asp.
TRANSFORM Nz([Item #],0)>0 AS Val
SELECT OrderNo
FROM Table
GROUP BY OrderNo
PIVOT [Item #]
With a Counter table containing integers from 1 to the maximum number of items, in a column (field) Num:
TRANSFORM First(q.Val) AS FirstOfVal
SELECT q.OrderNo
FROM (SELECT t.OrderNo, c.Num, Nz([Item #],0)>0 AS Val
FROM TableX t RIGHT JOIN [Counter] c ON t.[Item #] = c.Num
WHERE c.Num<12) q
GROUP BY q.OrderNo
PIVOT q.Num
Output:
OrderNo 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0
1 -1 -1 -1 -1
2 -1 -1 -1 -1
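If the crosstab route gets awkward, the grid can also be built client-side from the raw (OrderID, Item #) rows. A sketch in Python rather than VBA, with made-up row data; the same loop translates directly to a VBA 2-D array filled from a Recordset:

```python
# Build the sparse True/False grid from raw (order_id, item_id) rows.
# The row data below is made up for illustration.
rows = [(1, 1), (1, 5), (1, 26), (2, 3), (2, 5)]   # (order_id, item_id)

max_order = max(o for o, _ in rows)   # 39 in the question's data
max_item = max(i for _, i in rows)    # 33 in the question's data

# Index as grid[order][item]; True means the order contains that item.
grid = [[False] * (max_item + 1) for _ in range(max_order + 1)]
for order_id, item_id in rows:
    grid[order_id][item_id] = True

print(grid[1][5], grid[1][26], grid[2][1])   # True True False
```

Missing order or item numbers simply leave their row or column entirely False, which matches the sparse-array requirement in the question.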