How to change index? - sql-server

I want to change the index (prefix) of a column's values.
For example:
Column name: MemberID
Sample data:
MemberID1: M100045
MemberID2: M100046
MemberID3: M100047
Expected results:
MemberID1: T200045
MemberID2: T200046
MemberID3: T200047
How do I change this prefix with an SQL query?

The index will be updated when the data is updated. Thus, you just need to update the data.
UPDATE YourTable
SET yourColumn = STUFF(yourColumn, 1, 2, 'T2');
STUFF() inserts a string into another string, optionally deleting a specified number of characters first. In the code above, we delete 2 characters starting at position 1 (M1 in this case) and insert T2 in their place.
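To preview the change before running the UPDATE, the same expression can be used in a SELECT (a minimal sketch; the table name Members is hypothetical):
-- Preview the transformation without modifying any data
-- (Members stands in for the question's real table).
SELECT MemberID,
       STUFF(MemberID, 1, 2, 'T2') AS NewMemberID
FROM Members;
-- M100045 -> T200045
-- M100046 -> T200046
-- M100047 -> T200047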

PostgreSQL array not always returning first NULL element

I have a column in a table, with character varying[] type:
select details from x where id = 1;
Results:
[2:10]={wfeweqf,NULL,NULL,wefwf,NULL,NULL,NULL,NULL,wqfewef}
which means that details[1] is null:
select details[1] from x where id = 1;
Results:
[null]
Is there a way to access all elements of this array, to get:
Results:
{NULL,wfeweqf,NULL,NULL,wefwf,NULL,NULL,NULL,NULL,wqfewef}
?
I guess this is not the intended purpose of this mechanism of omitting leading nulls in an array, but unfortunately I need the whole array.
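For context, the [2:10]= prefix in the output means the array's lower bound is 2 rather than the default 1, so details[1] lies outside the array and reads as NULL. The bounds can be inspected directly (a sketch against the question's table x):
-- lo is 2 and hi is 10 for the row shown above
SELECT array_lower(details, 1) AS lo,
       array_upper(details, 1) AS hi
FROM x
WHERE id = 1;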

How to parse a table with a JSON array field in PostgreSQL into rows?

I have a table that contains a JSON array. Here is a sample of the contents of the field, from:
SELECT json_array FROM table LIMIT 5;
Result:
[{"key1":"value1"}, {"key1":"value2"}, ..., {"key2":"value3"}]
[]
[]
[]{"key1":"value1"}
[]
How can I retrieve all the values and count how many of each value was found?
I am using PostgreSQL 9.5.14, and I have tried the solutions here: Querying a JSON array of objects in Postgres
and the ones suggested to me by another generous stackoverflow user in my last question: How can I parse JSON arrays in postgresql?
I tried:
SELECT
value -> 'key1'
FROM
table,
json_array_elements(json_array);
which sadly does not work for me; it fails with the error: cannot call json_array_elements on a scalar
This error occurs because json_array_elements() is being called on a JSON value that is a scalar rather than an array.
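One way to confirm which rows hold scalars instead of arrays is to group by the top-level JSON type (a sketch; my_table stands in for the question's table, and json_typeof() is available from PostgreSQL 9.4):
-- Any 'string', 'number', 'boolean' or 'null' rows here would make
-- json_array_elements() fail with the error above.
SELECT json_typeof(json_array) AS type, COUNT(*)
FROM my_table
GROUP BY 1;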
Another solution I tried was:
SELECT json_array as json, (json_array->0),
coalesce(
case
when (json_array->0) IS NULL then null
else (json_array->0->>'key1')
end,
'No value') AS "Value"
FROM table;
which only returned null values for "Value".
Referencing Querying a JSON array of objects in Postgres I attempted to use this solution as well:
WITH json_test (col) AS (
values (json_arrays)
)
SELECT
y.x->'key1' "key1"
FROM json_test jt,
LATERAL (SELECT json_array_elements(jt.col) x) y;
But I would need to be able to fit all the elements of the json_arrays into json_test
So far I have only attempted to list all the values in the all json arrays, but my ideal end-result for the query resembles this:
Value | Amount
---------------
value1 | 48
value2 | 112
value3 | 93
value4 | 0
Yet again I am grateful for any help with this, thank you in advance.
Step-by-step demo: db<>fiddle
SELECT
each.value,
COUNT(*)
FROM
data,
json_array_elements(json_array) elems, -- 1
json_each_text(elems) each -- 2
GROUP BY each.value -- 3
1. Expand the array into one row per array element.
2. Split each element's key/value pairs into two columns.
3. Group by the new value column and count.
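A self-contained version of the same query that can be run as-is (the inline rows are hypothetical stand-ins for the question's table):
WITH data(json_array) AS (
    VALUES
        ('[{"key1":"value1"}, {"key1":"value2"}, {"key1":"value1"}]'::json),
        ('[]'::json)   -- empty arrays simply contribute no rows
)
SELECT each.value, COUNT(*) AS amount
FROM data,
     json_array_elements(json_array) elems,   -- one row per array element
     json_each_text(elems) each               -- key/value columns per element
GROUP BY each.value;
-- value  | amount
-- value1 | 2
-- value2 | 1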

Access the index of an element in a jsonb array

I would like to access the index of an element in a jsonb array, like this:
SELECT
jsonb_array_elements(data->'Steps') AS Step,
INDEX_OF_STEP
FROM my_process
I don't see any function in the manual for this.
Is this somehow possible?
Use WITH ORDINALITY. You have to call the function in the FROM clause to do this:
with my_process(data) as (
values
('{"Steps": ["first", "second"]}'::jsonb)
)
select value as step, ordinality - 1 as index
from my_process
cross join jsonb_array_elements(data->'Steps') with ordinality
step | index
----------+-------
"first" | 0
"second" | 1
(2 rows)
From the documentation (7.2.1.4. Table Functions):
If the WITH ORDINALITY clause is specified, an additional column of type bigint will be added to the function result columns. This column numbers the rows of the function result set, starting from 1.
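WITH ORDINALITY works with any set-returning function in the FROM clause; a quick illustration with unnest() on a throwaway array:
SELECT elem, idx
FROM unnest(array['a','b','c']) WITH ORDINALITY AS t(elem, idx);
--  elem | idx
--  a    | 1
--  b    | 2
--  c    | 3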
You could try using
jsonb_each_text(jsonb)
which should supply both the key and value.
There is an example in this question:
Extract key, value from json objects in Postgres
except you would use the jsonb version.
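For reference, jsonb_each_text() iterates over an object's key/value pairs rather than an array's elements, so it returns keys, not positional indexes (a small illustration on hypothetical data):
SELECT key, value
FROM jsonb_each_text('{"a": "1", "b": "2"}'::jsonb);
--  key | value
--  a   | 1
--  b   | 2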

What's the meaning of this simple SQL statement?

I am new to T-SQL. What is the meaning of the following statement?
BEGIN
UPDATE table_name
SET a = ISNULL(#f_flag,0)
END
BEGIN, END: The BEGIN and END are not needed here. They delimit a code block, which is useful when there is more than one statement.
UPDATE table_name: Update the data in the table "table_name".
SET: Keyword that starts the comma-delimited list of column = value pairs to update.
a = : a is the column name; the value to the right of the = is what will be assigned.
ISNULL(#f_flag,0): The value to assign. Here ISNULL checks the value of #f_flag and, if it is null, uses 0 instead.
Note that there is no WHERE clause here, therefore all rows in the table will be updated.
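A runnable T-SQL sketch of the same statement (the table and variable names are hypothetical; note that an ordinary T-SQL variable is prefixed with @):
DECLARE @f_flag INT = NULL;

BEGIN
    UPDATE table_name
    SET a = ISNULL(@f_flag, 0);   -- @f_flag is NULL here, so every row's a becomes 0
END

-- Adding a WHERE clause would restrict the update to matching rows:
-- UPDATE table_name SET a = ISNULL(@f_flag, 0) WHERE <condition>;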

Make a new array with items derived from another array

Given a PostgreSQL ARRAY of items of one type, how can I create a new array where each item is derived from the items in the initial array?
Example: I have an array of INTERVAL values. I want a new array where each item is a NUMERIC(10, 1) that is the total number of seconds in the corresponding INTERVAL value.
I know how to convert one INTERVAL value:
foo=> SELECT '00:01:20.000'::INTERVAL AS duration_interval;
duration_interval
-------------------
00:01:20
(1 row)
foo=> SELECT extract(EPOCH FROM date_trunc('second', '00:01:20.000'::INTERVAL))
::NUMERIC(10, 1) AS duration_seconds;
duration_seconds
------------------
80.0
(1 row)
The array does not exist in a table – this is a value returned from another function call – so the conversion code needs to operate on it as an array.
How can I convert an array of INTERVAL values to an array of corresponding NUMERIC values?
You need to unnest() the array, do the conversion and then aggregate back into an array.
Assuming you want to do this on a real table with a primary key:
SELECT pk, array_agg(extract(epoch from dur_int)::numeric(10,1)
ORDER BY ordinality) AS duration_seconds
FROM my_table, unnest(duration_interval) WITH ORDINALITY d(dur_int)
GROUP BY pk;
If you have a single array, such as the result from a function call:
SELECT array_agg(extract(epoch from dur_int)::numeric(10,1)
ORDER BY ordinality) AS duration_seconds
FROM unnest(function(...)) WITH ORDINALITY d(dur_int);
Note that you need the WITH ORDINALITY clause when unnesting the array. This will add a column ordinality to the result such that every row has two columns: (dur_int interval, ordinality bigint). When putting the array back again with seconds instead of an interval, you order the rows by the ordinality column. That way you ensure that the order in the resulting array of seconds is the same as in the original array of intervals. (In general, SQL row sources have no specific ordering, the server may present rows in any order it prefers.)
If you have access to the function and you are not breaking other uses of it, you might be better off by changing the function such that you can use its result directly.
If there is a primary key then @Patrick's answer is enough. If not, then use row_number() to aggregate on:
with i(i) as (values
    (array['00:01:20.000','00:00:30.000']::interval[]),
    (array['00:02:10.000','00:01:30.000']::interval[])
)
select array_agg(extract(epoch from a)::numeric(10,1))
from (
    select i, row_number() over() as r
    from i
) s, unnest(i) a (a)
group by r;
array_agg
--------------
{80.0,30.0}
{130.0,90.0}
