I have a 160-character bit string and I need an integer array that stores the positions of the bits that have a value of 1.
Example:
bitstring = '00110101'
array = [3,4,6,8]
Is it possible to do this just with SQL or do I need to define a PL/SQL function or something like that?
It's assuredly possible to write it in SQL. Here's a starting point:
select array(
  select i
  from generate_series(1, length(str)) as i
  where substring(str from i for 1) = '1'
);
You might want to wrap that in an SQL function regardless, though, to avoid duplicating code all over the place.
Working function:
create or replace function get_bit_positions(varbit) returns int[] as $$
  select array(
    select i
    from generate_series(1, length($1)) as i
    where substring($1 from i for 1) = '1'
  );
$$ language sql immutable;
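A quick sanity check with the sample string from the question:
SELECT get_bit_positions('00110101'::varbit);
-- expected: {3,4,6,8}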
Working version:
WITH x AS (SELECT '00110101'::varbit AS b)
SELECT array_agg(i)
FROM (SELECT b, generate_series(1, length(b)) AS i FROM x) y
WHERE substring(b, i, 1) = '1';
It gets simpler once you convert the varbit to text[]: cast to text and run string_to_array().
Then you can use generate_subscripts() and pick array elements by index:
WITH x AS (SELECT string_to_array('00110101'::varbit::text, NULL) AS b)
SELECT array_agg(i)
FROM (SELECT b, generate_subscripts(b,1) AS i FROM x) y
WHERE b[i] = '1'
Details in this related question on dba.SE.
Related
Goal: write a function in PostgreSQL SQL that takes as input an integer array, each of whose elements is either 0, 1, or -1, and returns an array of the same length, where each element of the output array is the sum of all adjacent nonzero values in the input array at the same or lower index.
Example, this input:
{0,1,1,1,1,0,-1,-1,0}
should produce this result:
{0,1,2,3,4,0,-1,-2,0}
Here is my attempt at such a function:
CREATE FUNCTION runs(input int[], output int[] DEFAULT '{}')
RETURNS int[] AS $$
SELECT
CASE WHEN cardinality(input) = 0 THEN output
ELSE runs(input[2:],
array_append(output, CASE
WHEN input[1] = 0 THEN 0
ELSE output[cardinality(output)] + input[1]
END)
)
END
$$ LANGUAGE SQL;
Which gives unexpected (to me) output:
# select runs('{0,1,1,1,1,0,-1,-1,-1,0}');
runs
----------------------------------------
{0,1,2,3,4,5,6,0,0,0,-1,-2,-3,-4,-5,0}
(1 row)
I'm using PostgreSQL 14.4. I don't know why there are more elements in the output array than in the input, but the cardinality() call in the recursive branch seems to be causing it; using array_length() or array_upper() in the same place does the same.
Question: how can I write a function that gives me the output I want (and why is the function I wrote failing to do that)?
Bonus extra: For context, this input array comes from array_agg() invoked on a table column, and the output will go back into a table using unnest(). I'm converting to/from an array since I see no way to do it directly on the table, in particular because WITH RECURSIVE forbids references to the recursive table in an outer join or subquery. If there's a way around using arrays (especially given the lack of tail-recursion optimization), that will answer the general question (but I am still very curious why I'm seeing the extra elements in the output array).
Everything indicates that you have found a reportable Postgres bug: the function should work properly, yet a slight modification unexpectedly changes its behavior. Add SELECT; right after the opening $$ to get the function to run as expected; see Db<>fiddle.
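For illustration, this is the function from the question with that one-line workaround applied (only the added empty SELECT; statement differs):
CREATE OR REPLACE FUNCTION runs(input int[], output int[] DEFAULT '{}')
RETURNS int[] AS $$
SELECT;  -- workaround: extra empty statement right after the opening $$
SELECT
  CASE WHEN cardinality(input) = 0 THEN output
       ELSE runs(input[2:],
                 array_append(output, CASE
                     WHEN input[1] = 0 THEN 0
                     ELSE output[cardinality(output)] + input[1]
                 END)
            )
  END
$$ LANGUAGE SQL;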
A good alternative to a recursive solution is a simple iterative function. Handling arrays in PL/pgSQL is typically simpler and faster than recursion.
create or replace function loop_function(input int[])
returns int[] language plpgsql as $$
declare
  val int;
  tot int := 0;
  res int[];
begin
  foreach val in array input loop
    if val = 0 then tot := 0;
    else tot := tot + val;
    end if;
    res := res || tot;
  end loop;
  return res;
end $$;
Test it in Db<>fiddle.
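For the input from the question, a call like this should reproduce the desired output:
SELECT loop_function('{0,1,1,1,1,0,-1,-1,0}');
-- expected: {0,1,2,3,4,0,-1,-2,0}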
The OP wrote:
this input array is coming from array_agg() invoked on a table column and the output will go back into a table using unnest().
You can calculate these cumulative sums directly in the table with the help of window functions.
select id, val,
       case when val = 0 then 0
            else sum(val) over w
       end as run_sum
from (
   select id, val,
          case val
             when 0 then 0
             else sum((val = 0)::int) over w
          end as series
   from my_table
   window w as (order by id)
   ) t
window w as (partition by series order by id)
order by id;
Test it in Db<>fiddle.
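The query assumes a table shaped roughly like this; a minimal sketch using the sample values from the question (the name my_table and its columns id, val are assumed, not given by the OP):
CREATE TABLE my_table (id int PRIMARY KEY, val int);
INSERT INTO my_table VALUES
  (1, 0), (2, 1), (3, 1), (4, 1), (5, 1),
  (6, 0), (7, -1), (8, -1), (9, 0);
-- per row, the query should then yield 0, 1, 2, 3, 4, 0, -1, -2, 0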
I have data like below:
id | col1 (int[])
---+-------------
 1 | {1,2,3}
 2 | {3,4,5}
My question is how to use the replace function on arrays.
select array_replace(col1, 1, 100) where id = 1;
but it gives an error like:
function array_replace(integer[], integer, integer) does not exist
Can anyone suggest how to use it?
Your statement (augmented with the missing FROM clause):
SELECT array_replace(col1, 1, 100) FROM tbl WHERE id = 1;
As commented by @mu, array_replace() was introduced with pg 9.3. I see 3 options for older versions:
1. intarray
As long as ...
we are dealing with integer arrays.
elements are unique.
and the order of elements is irrelevant.
A simple and fast option would be to install the additional module intarray, which (among other things) provides operators to subtract and add elements (or whole arrays) from/to integer arrays:
SELECT CASE WHEN col1 && '{1}'::int[] THEN (col1 - 1) + 100 ELSE col1 END AS col1
FROM tbl WHERE id = 1;
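If the module is not installed in the database yet, it can usually be enabled like this (requires the contrib package and appropriate privileges; very old versions before 9.1 use the contrib install scripts instead):
CREATE EXTENSION intarray;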
2. Emulate with SQL functions
A (slower) drop-in replacement for array_replace() using polymorphic types, so it works for any base type:
CREATE OR REPLACE FUNCTION f_array_replace(anyarray, anyelement, anyelement)
RETURNS anyarray LANGUAGE SQL IMMUTABLE AS
'SELECT ARRAY (SELECT CASE WHEN x = $2 THEN $3 ELSE x END FROM unnest($1) x)';
Does not replace NULL values. Related:
Replace NULL values in an array in PostgreSQL
If you need to guarantee order of elements:
PostgreSQL unnest() with element number
3. Apply patch to source and recompile
Get the patch "Add array_remove() and array_replace() functions" from the git repo, apply it to the source of your version and recompile. It may or may not apply cleanly - the older your version, the worse your chances. I have not tried it; I would rather upgrade to the current version.
You can create your own, based on this source:
CREATE TABLE arr(id int, col1 int[]);
INSERT INTO arr VALUES (1, '{1,2,3}');
INSERT INTO arr VALUES (2, '{3,4,5}');
SELECT array(
   SELECT CASE WHEN q = 1 THEN 100 ELSE q END
   FROM UNNEST(col1::int[]) q)
FROM arr;
array
-----------
{100,2,3}
{3,4,5}
You can create your own function and put it in your public schema if you still want to call it as a function, though it will be slightly different from the original.
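For example, a minimal sketch of such a drop-in (same idea as the polymorphic f_array_replace() above, just named array_replace so code written for 9.3+ keeps working; illustrative only):
CREATE OR REPLACE FUNCTION public.array_replace(anyarray, anyelement, anyelement)
  RETURNS anyarray LANGUAGE sql IMMUTABLE AS
'SELECT ARRAY (SELECT CASE WHEN x = $2 THEN $3 ELSE x END FROM unnest($1) x)';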
UPDATE tbl SET col1 = array_replace(col1, 1, 100) WHERE id = 1;
Here is a sample query for a test array:
SELECT test_id,
test_array,
(array (
-- Replace existing value 'int' of an array with given value 'Text'
SELECT CASE WHEN a = '0' THEN 'MyEntry'
WHEN a = '1' THEN 'Apple'
WHEN a = '2' THEN 'Banana'
WHEN a = '3' THEN 'ChErRiEs'
WHEN a = '4' THEN 'Dragon Fruit'
WHEN a = '5' THEN 'Eat a Fruit in a Day'
ELSE 'NONE' END
FROM UNNEST(test_array::TEXT[]) a) ::TEXT
-- UNNEST : Lists out values of my_test_array
) test_result
FROM (
--my_test_array
SELECT 1 test_id, '{0,1,2,3,4,5,6,7,8,9}'::TEXT[][] test_array
) test;
I thought (hoped) this would work, but it doesn't; it evaluates both sides of the AND even if the left side is going to be false:
SELECT PropertyNumber
FROM Properties
WHERE PropertyNumber = '203a'
AND
(
NOT PropertyNumber LIKE '[^0-9]' AND CONVERT(INT,PropertyNumber) > 0
)
So I get:
Conversion failed when converting the nvarchar value '203a' to data type int.
Is there any way to do a conditional AND, or any other way to solve this problem?
You can do this in two steps:
SELECT
OtherField, CAST(PropertyNumber AS INT) AS PropertyNumber
FROM
(SELECT OtherField, PropertyNumber FROM Properties WHERE ISNUMERIC(PropertyNumber) = 1) AS X
This will discard records with non-numeric property numbers. If you want to keep them...
SELECT
P.OtherField, P.PropertyNumber AS PropertyNumberAsText, CAST(X.PropertyNumber AS INT) AS PropertyNumberAsInt
FROM
Properties AS P
LEFT JOIN (SELECT DISTINCT PropertyNumber FROM Properties WHERE ISNUMERIC(PropertyNumber) = 1) AS X ON P.PropertyNumber = X.PropertyNumber
Because ISNUMERIC() approves of some values which cannot be cast to INT, I tend to use something like this:
CASE WHEN Field IN ('-', '.') THEN NULL
WHEN Field LIKE '%,%' THEN NULL
WHEN ISNUMERIC(Field) = 1 THEN CAST(Field AS INT) END
Note that this will reject values like "1,234", which is a valid integer in some regions. Code to identify and handle all possible formats for numeric values is a separate question, and has probably been asked on SO before.
Try this:
SELECT PropertyNumber
FROM #Temp
WHERE PropertyNumber = '203a'
AND
(
case When IsNumeric('-' + PropertyNumber + 'e0') = 1
Then Convert(Int, PropertyNumber)
Else NULL
End > 0
)
The trick here is to use the knowledge that IsNumeric will return 1 only under certain conditions. By putting a '-' sign in front of your data, you are effectively making sure that only unsigned, positive property numbers pass. By putting 'e0' after the column value, you rule out values that already contain an exponent and other oddities ISNUMERIC would otherwise accept.
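A few illustrative probes of that behaviour (sample values are made up; ISNUMERIC has many quirks, so verify against your own data):
SELECT ISNUMERIC('-' + '203'  + 'e0');  -- 1: a plain unsigned integer string passes
SELECT ISNUMERIC('-' + '203a' + 'e0');  -- 0: a trailing letter is rejected
SELECT ISNUMERIC('-' + '-203' + 'e0');  -- 0: an already-signed value is rejected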
How can I declare an array-like variable with two or three values and get one of them randomly during execution?
a := [1, 2, 5] -- sample sake
select random(a) -- returns random value
Any suggestion where to start?
Try this one:
select (array['Yes', 'No', 'Maybe'])[floor(random() * 3 + 1)];
Updated 2023-01-10 to fix the broken array literal. Made it several times faster while being at it:
CREATE OR REPLACE FUNCTION random_pick()
RETURNS int
LANGUAGE sql VOLATILE PARALLEL SAFE AS
$func$
SELECT ('[0:2]={1,2,5}'::int[])[trunc(random() * 3)::int];
$func$;
random() returns a value x where 0.0 <= x < 1.0. Multiply by 3 and truncate it with trunc() (slightly faster than floor()) to get 0, 1, or 2 with exactly equal chance.
Postgres array subscripts are 1-based by default (as per the SQL standard), so this would be off by 1. We could add 1 every time, but for efficiency I declare the array subscript to start with 0 instead. Slightly faster, still. See:
Normalize array subscripts so they start with 1
The manual on mathematical functions.
PARALLEL SAFE for Postgres 9.6 or later. See:
PARALLEL label for a function with SELECT and INSERT
When to mark functions as PARALLEL RESTRICTED vs PARALLEL SAFE?
You can use the plain SELECT statement if you don't want to create a function:
SELECT ('[0:2]={1,2,5}'::int[])[trunc(random() * 3)::int];
Erwin Brandstetter answered the OP's question well enough. However, for others looking to understand how to randomly pick elements from more complex arrays (like me some two months ago), I expanded his function:
CREATE OR REPLACE FUNCTION random_pick( a anyarray, OUT x anyelement )
  RETURNS anyelement AS
$func$
BEGIN
   IF a = '{}' THEN
      x := NULL::TEXT;
   ELSE
      WHILE x IS NULL LOOP
         x := a[floor(array_lower(a, 1) + (random() * (array_upper(a, 1) - array_lower(a, 1) + 1)))::int];
      END LOOP;
   END IF;
END
$func$ LANGUAGE plpgsql VOLATILE RETURNS NULL ON NULL INPUT;
A few assumptions:
this is not only for integer arrays, but for arrays of any type
we ignore NULL data; NULL is returned only if the array is empty or if NULL is passed in (values of other, non-array types produce an error)
the array doesn't need to be formatted as usual - the array index may start and end anywhere, and may have gaps etc.
this is for one-dimensional arrays
Other notes:
without the first IF statement, an empty array would lead to an endless loop
without the loop, gaps and NULLs would make the function return NULL
omit both array_lower calls if you know that your arrays start at zero
with gaps in the index, you will need array_upper instead of array_length; without gaps, it's the same (not sure which is faster, but they shouldn't be much different)
the +1 after the second array_lower serves to get the last value in the array with the same probability as any other; otherwise it would need random()'s output to be exactly 1, which never happens
this is considerably slower than Erwin's solution, and likely to be overkill for your needs; in practice, most people would mix an ideal cocktail from the two
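A couple of hypothetical calls to illustrate the polymorphic behaviour:
SELECT random_pick(ARRAY[1, 2, 5]);               -- one of 1, 2, 5
SELECT random_pick(ARRAY['Yes', 'No', 'Maybe']);  -- any element type works
SELECT random_pick('{}'::int[]);                  -- NULL for an empty array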
Here is another way to do the same thing
WITH arr AS (
SELECT '{1, 2, 5}'::INT[] a
)
SELECT a[1 + floor((random() * array_length(a, 1)))::int] FROM arr;
You can change the array to any type you would like.
CREATE OR REPLACE FUNCTION pick_random( members anyarray )
RETURNS anyelement AS
$$
BEGIN
RETURN members[trunc(random() * array_length(members, 1) + 1)];
END
$$ LANGUAGE plpgsql VOLATILE;
or
CREATE OR REPLACE FUNCTION pick_random( members anyarray )
RETURNS anyelement AS
$$
SELECT (array_agg(m1 order by random()))[1]
FROM unnest(members) m1;
$$ LANGUAGE SQL VOLATILE;
For bigger datasets, see:
http://blog.rhodiumtoad.org.uk/2009/03/08/selecting-random-rows-from-a-table/
http://www.depesz.com/2007/09/16/my-thoughts-on-getting-random-row/
https://blog.2ndquadrant.com/tablesample-and-other-methods-for-getting-random-tuples/
https://www.postgresql.org/docs/current/static/functions-math.html
CREATE FUNCTION random_pick(p_items anyarray)
RETURNS anyelement AS
$$
SELECT unnest(p_items) ORDER BY RANDOM() LIMIT 1;
$$ LANGUAGE SQL;
I have a table:
c1 | c2 | c3 | c4
---+----+----+----
a  | b  | c  | 10
a  | a  | b  | 20
c  | a  | c  | 10
b  | b  | c  | 10
c  | b  | c  | 30
I want to write a function whose inputs are 3 strings / text values, e.g. ('a b c', 'b d', 'c'), compare every element to each other, find whether a row exists with that combination, and sum up the number in the 4th column (c4). But if there is a constellation of b a c or c a b, it would match a b c 10. If there is a row like b c c, then there won't be a row like c b b. Every matchup is unique.
I think the best would be to use string_to_array(text, text).
I put together some pseudo code, but no idea how to write it in SQL. Maybe the logic is wrong too.
function (x,y,z)
    res = 0
    x_array = string_to_array(x, ' ')
    y_array = string_to_array(y, ' ')
    z_array = string_to_array(z, ' ')
    foreach(x_item in x_array)
        foreach(y_item in y_array)
            foreach(z_item in z_array)
                if (c1 = (x_item || y_item || z_item) && c2 = (x_item || y_item || z_item) && c3 = (x_item || y_item || z_item))
                    res++
EDIT
First of all, there was a mistake in the example table: it had both a row a b c and a row c b a. That can't be; a b c = c b a, and each row must be unique.
Example: three text inputs a b c | b c | c
Each element vs each element: a b c, a c c, b b c, b c c, c b c, c c c
a b c = 10;
a c c (is the same as c a c) = 10;
b b c = 10;
b c c (is the same as c b c) = 30;
c b c = 30;
c c c (no match) = 0; result = 90
I think this might be what you want:
Return the sum of column c4 from all rows where a given set of three tokens matches the columns (c1, c2, c3).
Simple version
Much simpler with the contains @> and is contained by <@ operators:
SELECT sum(c4) AS sum_of_matching_c4
FROM tbl
WHERE ARRAY[c1,c2,c3] <@ ARRAY['b', 'a', 'c'] -- strings in arbitrary order
AND   ARRAY[c1,c2,c3] @> ARRAY['b', 'a', 'c'];
Sorry, that would fail for ('b', 'c', 'c') vs. ('c', 'b', 'b').
Slow and sure
WITH i(arr) AS (
SELECT ARRAY(VALUES ('b'), ('c'), ('c') ORDER BY 1) -- input once
) -- in arbitrary order
SELECT sum(c4) AS sum_of_matching_c4
FROM (
SELECT c4, array_agg(x ORDER BY x) AS arr
FROM (
SELECT ctid, c4, unnest(ARRAY[c1,c2,c3]) AS x
FROM tbl t, i
WHERE ARRAY[c1,c2,c3] <@ arr -- optional pre-selection
AND   ARRAY[c1,c2,c3] @> arr -- for better performance?
) a
GROUP BY ctid, c4
) b
JOIN i USING (arr)
-> sqlfiddle demo.
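To reproduce it, a sketch of the table using the sample data from the question (column types assumed to be text and int):
CREATE TABLE tbl (c1 text, c2 text, c3 text, c4 int);
INSERT INTO tbl VALUES
  ('a','b','c',10), ('a','a','b',20), ('c','a','c',10),
  ('b','b','c',10), ('c','b','c',30);
-- with the input ('b','c','c'), only the row (c, b, c) matches, so the sum should be 30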
The major difficulty is to order the values of the columns within the row.
For your input (3 strings) I achieve this with a VALUES expression in the CTE, ordered right away and collected into an array, which the WHERE clause then uses. I use a CTE for convenience, so we have to enter values in one place only.
It's more complicated for the row values. I put the three columns into an array and break that up into rows with unnest(). As you did not provide a primary key, I use the ctid as an ad-hoc surrogate primary key instead - which I need for the GROUP BY to stuff the now sorted (c1, c2, c3) back into an array.
Finally I sum up all c4 of rows where the now sorted arrays match exactly.
Note: I expressly do not use string_agg() because that does not produce distinct results. Consider:
'abc' 'cde' 'fgh'
'ab' 'ccdef' 'gh'
.. resulting in the same string if concatenated.
Index / Performance
You might consider saving pre-ordered data to speed up queries; doing it on the fly is expensive. E.g., you could pre-generate the sorted array and save it as a redundant column, which you can then support with an index. Should be faster by several orders of magnitude, at the cost of redundant data storage.
If you are dealing with long strings, a solution similar to what I outlined in this related answer on dba.SE might be the best course of action.
Alternatively (preferred!) guarantee that (c1, c2, c3) are always stored in ascending order. You could use a trigger BEFORE INSERT OR UPDATE to keep values within the row ordered. No redundant storage and you can simply create a multi-column index on the three columns and compare to them one by one (instead of comparing the array like in my example).
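A minimal sketch of what such a trigger could look like (names are illustrative and the column type is assumed to be text):
CREATE OR REPLACE FUNCTION trg_sort_c1c2c3()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
DECLARE
   sorted text[];
BEGIN
   -- sort the three values ascending and write them back into the row
   sorted := ARRAY(SELECT x FROM unnest(ARRAY[NEW.c1, NEW.c2, NEW.c3]) x ORDER BY x);
   NEW.c1 := sorted[1];
   NEW.c2 := sorted[2];
   NEW.c3 := sorted[3];
   RETURN NEW;
END
$func$;

CREATE TRIGGER tbl_sort_c1c2c3
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_sort_c1c2c3();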
You don't need to write a function for that.
First, there are no "strings" in PostgreSQL (SQL); the types are "text" or "varchar".
Second, what you need is an SQL query like this:
SELECT (c1 || c2 || c3) AS txtcol, SUM(c4) AS rowsum FROM tbl GROUP BY (c1 || c2 || c3);
or
SELECT (c1 || c2 || c3) AS txtcol, SUM(c4) AS numsum FROM tbl GROUP BY txtcol;
I can't recall the exact syntax at the moment, so you may need to adjust it; anyway, the point is that you concatenate the varchar columns with some built-in function like CONCAT or the "||" operator, and then sum/group by the numeric column. All you need is to concatenate the columns and give the resulting combined column a name.
To be exact, you don't even need to show the concatenated column in the resulting table; you could output just the sums and, for example, the number of rows summarized.
Theoretically you could write an SQL or PL/pgSQL function for that, but I'm sure it's just not necessary; your case seems simple enough to achieve the result you want without a function. The built-in summarizing function SUM() is called an "aggregate" function; other examples of aggregate functions are MIN() and MAX().
Note that what you're actually trying to do is group rows by a VARCHAR column produced by per-row concatenation.
EDIT: "Arrays" in SQL or procedural SQL is some internally-handled arrays, do not confuse them with relations ( tables in database, nor with tables as SELECT results ). I think you also don't need SQL arrays for that, the task really isn't so hard as it looks like.