Determine provided values vs. copied values in a SQL Server INSTEAD OF UPDATE trigger

Let's say I have a table like this:
CREATE TABLE [A]
(
[X] INT,
[Y] INT,
[Z] INT
)
...that has an INSTEAD OF UPDATE trigger. If I insert a row into this table:
INSERT INTO [A]
SELECT 1, 1, 1
Then I hit the update trigger with code like this:
UPDATE [A]
SET [X] = 2
WHERE [X] = 1
I'll end up with a [deleted] table like so:
X: 1,
Y: 1,
Z: 1
I'll end up with an [inserted] table like so:
X: 2,
Y: 1,
Z: 1
Is there any way to determine that only the X was actually set? In other words, how do I distinguish between:
UPDATE [A]
SET
[X] = 2,
[Y] = 1,
[Z] = 1
WHERE [X] = 1
...and the statement above?
Note:
SQL Server 2008

In the trigger, you can add a WHERE clause that compares old and new values and ignores non-changing rows.
There is also the UPDATE() function (and the COLUMNS_UPDATED() bitmask), which reports whether a column appeared in the SET clause of the triggering statement; it still returns true when the new value equals the old one, so it isn't reliable for detecting actual value changes.
Beyond that, the trigger never sees the text of the triggering UPDATE. Looking only at the row data in inserted and deleted, these two statements are indistinguishable:
UPDATE [A]
SET
[X] = 2,
[Y] = 1,
[Z] = 1
WHERE [X] = 1
UPDATE [A]
SET [X] = 2
WHERE [X] = 1
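Here is a minimal sketch of the compare-old-and-new approach. It assumes a hypothetical key column [Id] (the question's table has none), since inserted and deleted cannot be correlated row-by-row without a key:
CREATE TRIGGER [TR_A_Update] ON [A]
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- UPDATE(Y) is true whenever [Y] appears in the SET clause,
    -- even if the assigned value equals the old value.
    IF UPDATE([Y])
        PRINT '[Y] was listed in the SET clause';

    -- Apply the update, but skip rows where nothing actually changed.
    UPDATE a
    SET [X] = i.[X], [Y] = i.[Y], [Z] = i.[Z]
    FROM [A] AS a
    JOIN inserted AS i ON i.[Id] = a.[Id]
    JOIN deleted  AS d ON d.[Id] = i.[Id]
    WHERE i.[X] <> d.[X]
       OR i.[Y] <> d.[Y]
       OR i.[Z] <> d.[Z];   -- add IS NULL checks if the columns are nullable
END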

Related

modify only last value in jsonb field

I have table with jsonb field
Example:
id jsonb_t
1 [ {"x" : 1 , "y": 2} , {"x" : 2 , "y": 3} , {"x": 3, "y" : 4} ]
2 [ {"x" : 1 , "y": 3} , {"x" : 3 , "y": 3} , {"x": 8, "y" : 2} ]
3 [ {"x" : 1 , "y": 4} , {"x" : 4 , "y": 3} , {"x": 5, "y" : 9} ]
I want to modify the table where id = 3, but only the last element of the jsonb array of objects; that means replacing e.g. "y": 9 with "y": 8, and incrementing "x": 5 by 1 to "x": 6.
I can't figure out how to do it in one step (the replace and increment should be done "in place" due to the thousands of rows in the jsonb[] array field). Thanks in advance for any help.
You can use a combination of jsonb functions, like this:
SELECT jsonb_agg(jsonb_build_object('x', x, 'y', y))
FROM (SELECT CASE
                WHEN row_number() over() = jsonb_array_length(jsonb_t)
                THEN x + 1
                ELSE x
             END AS x,
             CASE
                WHEN row_number() over() = jsonb_array_length(jsonb_t)
                THEN y - 1
                ELSE y
             END AS y
      FROM t, jsonb_to_recordset(jsonb_t) AS (x INT, y INT)
      WHERE id = 3) AS j
Here jsonb_to_recordset expands the outermost array of objects into rows of individual integer columns; the last element is identified by comparing row_number() against jsonb_array_length(), its values are incremented/decremented accordingly, and jsonb_agg() in the outer query builds the jsonb array back up.
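To see the expansion step in isolation, with inline sample data matching the row where id = 3:
SELECT x, y
FROM jsonb_to_recordset('[{"x":1,"y":4},{"x":4,"y":3},{"x":5,"y":9}]'::jsonb) AS t(x int, y int);
-- returns three rows: (1,4), (4,3), (5,9)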
Your sample data suggests that the column is in fact defined as jsonb, not jsonb[], and that the array is a proper JSON array (not a PostgreSQL array of jsonb values).
If that is correct, then you can use jsonb_set() to extract and modify the value of the last array element:
update the_table
set jsonb_t = jsonb_set(jsonb_t,
                        array[jsonb_array_length(jsonb_t) - 1]::text[],
                        jsonb_t -> jsonb_array_length(jsonb_t) - 1
                          || '{"y":8}'
                          || jsonb_build_object('x', (jsonb_t -> jsonb_array_length(jsonb_t) - 1 ->> 'x')::int + 1))
where id = 3
As documented in the manual, jsonb_set() takes three parameters: the input value, the path to the value that should be changed, and the new value.
The second parameter, array[jsonb_array_length(jsonb_t) - 1]::text[], calculates the target position in the JSON array by taking its length and subtracting one to get the last element. This integer is then converted to a text array (the required type for the second parameter).
The expression jsonb_t -> jsonb_array_length(jsonb_t) - 1 then picks that array element, and appending '{"y":8}' replaces the existing key/value pair for y. The expression
jsonb_build_object('x', (jsonb_t -> jsonb_array_length(jsonb_t) - 1 ->> 'x')::int + 1)
extracts the current value of the x key, converts it to an integer, increments it by one, and builds a new JSON object with the key x that is also appended to the old value, thus replacing the existing key.
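As a sanity check, the same expression run against inline sample data for the id = 3 row (not the original table):
select jsonb_set(jsonb_t,
                 array[jsonb_array_length(jsonb_t) - 1]::text[],
                 jsonb_t -> jsonb_array_length(jsonb_t) - 1
                   || '{"y":8}'
                   || jsonb_build_object('x', (jsonb_t -> jsonb_array_length(jsonb_t) - 1 ->> 'x')::int + 1))
from (values ('[{"x":1,"y":4},{"x":4,"y":3},{"x":5,"y":9}]'::jsonb)) as v(jsonb_t);
-- result: [{"x": 1, "y": 4}, {"x": 4, "y": 3}, {"x": 6, "y": 8}]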

Postgresql array sum

Given an array column in a table of a PostgreSQL database containing the following:
{{765,4},{767,3},{569,5},{567,3},{725,5}}
How could I calculate the sum of all second elements of each subarray, i.e. 4+3+5+3+5?
You can try UNNEST, which expands an array to a set of rows; because a multidimensional array is fully flattened, the second element of each pair lands on an even position, so you can filter by the row number:
SELECT *, (
SELECT SUM(v)
FROM UNNEST(array_column) WITH ORDINALITY a(v, n)
WHERE n % 2 = 0
) FROM your_table;
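A self-contained way to try this with the sample array from the question (your_table and array_column above stand in for the real names):
SELECT (SELECT SUM(v)
        FROM UNNEST(c) WITH ORDINALITY a(v, n)
        WHERE n % 2 = 0) AS second_sum
FROM (VALUES ('{{765,4},{767,3},{569,5},{567,3},{725,5}}'::int[])) AS t(c);
-- second_sum = 20  (i.e. 4+3+5+3+5)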
I was able to resolve my objective here by using a jsonb array instead.
The jsonb array:
[{"an": 4, "qid": 765}, {"an": 3, "qid": 767}, {"an": 5, "qid": 569}, {"an": 3, "qid": 567}, {"an": 5, "qid": 725}]
The query that accomplishes the objective:
WITH answers AS (
    SELECT (jsonbArray -> 'an')::int  AS an,
           (jsonbArray -> 'qid')::int AS qid
    FROM (SELECT jsonb_array_elements(jsonbArray) AS jsonbArray
          FROM "user" WHERE id = 1  -- "user" must be quoted: USER is a reserved word
         ) AS s
    GROUP BY qid, an
)
SELECT sum(an) AS score FROM answers WHERE qid IN (765, 725)
Result:
score
9

Moving PostgreSQL bigint array unique value to another index

How can I move the array bigint value from one index to another? For example, I have an array ARRAY[1, 2, 3, 4] of bigint unique values and want to move value 1 to index 3 so the final result would be ARRAY[2, 3, 1, 4]
The assumptions:
Elements in the array are identified by their value.
The uniqueness of the elements is guaranteed.
Any element can be moved to any place.
Null values are not involved on any side.
The value is contained in the array; if not, we have two options: do nothing, or handle it via the exception mechanism. It's an extreme case that can only happen because of a bug.
Arrays are 1-dimensional.
General assumptions:
Array elements are UNIQUE NOT NULL.
Arrays are 1-dimensional with standard subscripts (1..N). See:
Normalize array subscripts for 1-dimensional array so they start with 1
Simple solution
CREATE FUNCTION f_array_move_element_simple(_arr bigint[], _elem bigint, _pos int)
  RETURNS bigint[] LANGUAGE sql IMMUTABLE AS
'SELECT a1[:_pos-1] || _elem || a1[_pos:] FROM array_remove(_arr, _elem) a1';
All fine & dandy, as long as:
The given element is actually contained in the array.
The given position is between 1 and the length of the array.
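For example (a quick check, not part of the original answer), moving 1 to position 3 as in the question:
SELECT f_array_move_element_simple(ARRAY[1,2,3,4]::bigint[], 1, 3);
-- {2,3,1,4}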
Proper solution
CREATE FUNCTION f_array_move_element(_arr ANYARRAY, _elem ANYELEMENT, _pos int)
  RETURNS ANYARRAY AS
$func$
BEGIN
   IF _pos IS NULL OR _pos < 1 THEN
      RAISE EXCEPTION 'Target position % not allowed. Must be a positive integer.', _pos;
   ELSIF _pos > array_length(_arr, 1) THEN
      RAISE EXCEPTION 'Target position % not allowed. Cannot be greater than length of array.', _pos;
   END IF;

   CASE array_position(_arr, _elem) = _pos
   WHEN true THEN            -- already in position, return original
      RETURN _arr;
   WHEN false THEN           -- remove element
      _arr := array_remove(_arr, _elem);
   ELSE                      -- element not found
      RAISE EXCEPTION 'Element % not contained in array %!', _elem, _arr;
   END CASE;

   RETURN _arr[:_pos-1] || _elem || _arr[_pos:];
END
$func$ LANGUAGE plpgsql IMMUTABLE;
Exceptions are raised if any of the additional assumptions for the simple func are violated.
The "proper" function uses polymorphic types and works for any data type, not just bigint - as long as array and element type match.
Postgresql supports slicing and appending so:
SELECT c, c[2:3] || c[1] || c[4:] AS result
FROM (SELECT ARRAY[1, 2, 3, 4] c) sub
Another variant, using WITH ... UPDATE, avoids searching for elements by value and works with array element numbers only, which is useful e.g. for jsonb[] columns holding big JSON values.
test_model.data is the field to update.
:idx_from, :idx_to are placeholders, 1-based.
WITH from_removed AS (
   SELECT test_model.id,
          ARRAY_CAT(
             data[: :idx_from - 1],
             data[:idx_from + 1 :]
          ) AS "d"
   FROM test_model
)
UPDATE test_model AS source
SET data = from_removed.d[: :idx_to - 1] ||
           data[:idx_from : :idx_from] ||
           from_removed.d[:idx_to :]
FROM from_removed
WHERE source.id = from_removed.id AND source.id = :id

Insert multiple columns and rows into SQL Server with node js

I'm trying to make a script that reads from a NoSQL database and inserts into a SQL Server database.
That said, I'm reading collections dynamically, so I need something to do things like:
var columns = [ 1, 2, 3, 4 ...]
var values = [a, b, c, 4 ...]
request.query("INSERT INTO TABLE (" + columns + ") VALUES (" + values + ");");
I have some collections with up to 27 columns, and I can't hog the database by inserting each value separately, as I have around 20,000,000 records to insert. I can't find anything that can do this inside a transaction, so I would appreciate any suggestions.
var columns = [ 1, 2, 3, 4 ...]
var values = [a, b, c, 4 ...]
request.query(`INSERT INTO TABLE (${columns.join(', ')}) VALUES (${values.join(', ')});`);
columns is an array, so it has to be turned into a plain comma-separated string; join(', ') does that (interpolating the array directly would also drop the '[' and ']' brackets, but join makes it explicit).
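On the SQL side, the generated statement can insert many rows per round trip with a table value constructor (a sketch with placeholder table and column names; SQL Server allows at most 1000 row value expressions per INSERT, so batch accordingly):
INSERT INTO [TargetTable] ([col1], [col2], [col3])
VALUES
    (1, 'a', 'x'),
    (2, 'b', 'y'),
    (3, 'c', 'z');  -- up to 1000 rows per statement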

How to select each value of array

Consider the following case:
Table : tab1
id serial primary key
arr int[]
Now I want to select each value of arr.
SELECT * FROM (SELECT arr FROM tab1) AS tab2
I need some kind of iteration over the array.
e.g.
id arr
-----------------------------
1 [1,2]
2 [5,6,8]
So I could get result as
arr val
-------------------------------
[1,2] 1
[1,2] 2
[5,6,8] 5
[5,6,8] 6
[5,6,8] 8
Use unnest() for that:
WITH array_data(id,arr) AS ( VALUES
(1,ARRAY[1,2]),
(2,ARRAY[5,6,8])
)
SELECT arr,unnest(arr) AS val
FROM array_data;
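Applied directly to tab1 from the question, a lateral join produces the arr/val pairs shown in the expected output:
SELECT t.arr, u.val
FROM tab1 t
CROSS JOIN LATERAL unnest(t.arr) AS u(val);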
I don't know if I've understood correctly, but here you have all you need:
select id,
unnest(arr),
array_to_string(arr,','),
array_length(arr, 1)
from array_data;
