data row 1 :
{
"30":{"status":0,"approval":"0","entrydate":"2023-01-30"},
"26":{"status":0,"approval":"0","entrydate":"2023-01-30"}
}
data row 2 :
{
"12":{"status":0,"approval":"0","entrydate":"2023-01-30"},
"13":{"status":1,"approval":"20022-xxxx","entrydate":"2023-01-30"}
}
data row 3 :
{
"20":{"status":1,"approval":"20022-xxxx","entrydate":"2023-01-30"},
"24":{"status":1,"approval":"20022-xxxx","entrydate":"2023-01-30"}
}
How can I select rows from a SQL Server table based on the status value inside the JSON, given that the JSON keys are dynamic? For status = 1 I should get rows 2 and 3, and for status = 0 I should get rows 1 and 2.
A possible approach is a combination of OPENJSON() and JSON_VALUE():
SELECT *
FROM JsonTable
WHERE EXISTS (
    SELECT 1
    FROM OPENJSON(JsonColumn)
    WHERE JSON_VALUE([value], '$.status') = '0'
)
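The same pattern covers the status = 1 case (rows 2 and 3); a parameterized sketch, still assuming the JsonTable / JsonColumn names used above:
DECLARE @status INT = 1;  -- 1 => rows 2 and 3; 0 => rows 1 and 2

SELECT *
FROM JsonTable
WHERE EXISTS (
    SELECT 1
    FROM OPENJSON(JsonColumn)          -- one row per dynamic key ("30", "26", ...)
    WHERE JSON_VALUE([value], '$.status') = CAST(@status AS NVARCHAR(10))
);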
I want to insert a row into a SQL Server table at a specific position. For example, my table has 100 rows and a field named LineNumber, and I want to insert a new row after line number 9. But the table already has a row with LineNumber 9, and the ID column is the primary key, so the new row needs line number 9 or 10 and the ID field has to be updated automatically. How can I insert a row at this position so that all the rows after it shift to the next position?
Don't modify the primary key; that is not a good way to control the order of your output now that you have a new record to insert.
Add a new column to the table to hold your order. You can then copy the primary key values into that column, if that is your current order, before making the required changes for the new row.
Here is a sample that you should be able to copy, paste, and run as is. I've added an orderid column, which you will need to add to your real table (nullable, defaulting to NULL).
DECLARE @OrderTable AS TABLE
    (
      id INT ,
      val VARCHAR(5) ,
      orderid INT
    )

INSERT INTO @OrderTable
        ( id, val, orderid )
VALUES  ( 1, 'aaa', NULL ),
        ( 2, 'bbb', NULL ),
        ( 3, 'ddd', NULL )

SELECT *
FROM @OrderTable
-- Produces:
/*
id  val  orderid
1   aaa  NULL
2   bbb  NULL
3   ddd  NULL
*/

-- Update the orderid column to your existing order:
UPDATE @OrderTable
SET orderid = id

SELECT *
FROM @OrderTable
-- Produces:
/*
id  val  orderid
1   aaa  1
2   bbb  2
3   ddd  3
*/

-- Then you want to add a new item and change the order:
DECLARE @newVal AS VARCHAR(5) = 'ccc'
DECLARE @newValOrder AS INT = 3

-- Shift the existing rows to make room for the new one:
UPDATE @OrderTable
SET orderid = orderid + 1
WHERE orderid >= @newValOrder

-- This inserts id = 4, which is what your primary key would do by default;
-- the id is hard-coded here just for the example.
INSERT INTO @OrderTable
        ( id, val, orderid )
VALUES  ( 4, @newVal, @newValOrder )

-- Select the data, using the new order column:
SELECT *
FROM @OrderTable
ORDER BY orderid
-- Produces:
/*
id  val  orderid
1   aaa  1
2   bbb  2
4   ccc  3
3   ddd  4
*/
What makes this difficult is that the column is a primary key. If you can interact with the database when no one else is, then you can do this:
1. Make the column no longer a primary key.
2. Run a command like this:
UPDATE MyTable
SET PrimaryColumnID = PrimaryColumnID + 1
WHERE PrimaryColumnID > 8
3. Insert the row with the appropriate PrimaryColumnID (9).
4. Restore the column to being the primary key.
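Spelled out as T-SQL, those four steps might look like the following; the table, column, and constraint names (MyTable, PrimaryColumnID, PK_MyTable) are placeholders for your own, and the non-key columns are left out:
-- 1. Make the column no longer a primary key
ALTER TABLE MyTable DROP CONSTRAINT PK_MyTable;

-- 2. Shift everything after position 8 up by one to free position 9
UPDATE MyTable
SET PrimaryColumnID = PrimaryColumnID + 1
WHERE PrimaryColumnID > 8;

-- 3. Insert the new row with the now-free ID
INSERT INTO MyTable (PrimaryColumnID /*, other columns */)
VALUES (9 /*, other values */);

-- 4. Restore the primary key
ALTER TABLE MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (PrimaryColumnID);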
Obviously, this probably wouldn't be good with a large table. You could create a new primary key column and switch it, then fix the values, then switch it back.
Two steps: first, update LineNumber
UPDATE table
SET LineNumber = LineNumber + 1
WHERE LineNumber > 9
Then do your insert:
INSERT INTO table
(LineNumber, ...) VALUES (10, .....)
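A failure between the two statements would leave the numbering half-shifted, so it is safest to run them as a single transaction. A minimal sketch, with MyLineTable and its columns standing in for your real names:
BEGIN TRANSACTION;

-- Shift every row after line 9 down by one to free LineNumber 10
UPDATE MyLineTable
SET LineNumber = LineNumber + 1
WHERE LineNumber > 9;

-- Insert the new row into the freed slot
INSERT INTO MyLineTable (LineNumber, Description)
VALUES (10, 'new row');

COMMIT TRANSACTION;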
How can I rewrite this query, which uses LIMIT, for SQL Server?
DELETE FROM student
WHERE id=2
AND list_column_name='lastName'
OR list_column_name='firstName'
LIMIT 3;
There is no ORDER BY in your original query.
If you just want to delete an arbitrary three records matching your WHERE, you can use:
DELETE TOP(3) FROM student
WHERE id = 2
AND list_column_name = 'lastName'
OR list_column_name = 'firstName';
For greater control of TOP (the top three as ordered by what?) you need to use a CTE or similar.
Using a CTE with TOP 3, which is equivalent to LIMIT 3:
;WITH CTE AS (
    SELECT TOP 3 Studentname
    FROM student
    WHERE id = 2
      AND list_column_name = 'lastName'
       OR list_column_name = 'firstName'
)
DELETE FROM CTE
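If the three rows to delete need to be a specific three rather than an arbitrary three, add an ORDER BY inside the CTE; a sketch using the same columns as above, where the ordering column (Studentname here) is whatever defines "first three" for you:
;WITH CTE AS (
    SELECT TOP 3 Studentname
    FROM student
    WHERE id = 2
      AND list_column_name = 'lastName'
       OR list_column_name = 'firstName'
    ORDER BY Studentname   -- defines which three rows are deleted
)
DELETE FROM CTE;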
I have a table like this:
CREATE TABLE test (
id BIGSERIAL PRIMARY KEY,
data JSONB
);
INSERT INTO test(data) VALUES('[1,2,"a",4,"8",6]'); -- id = 1
INSERT INTO test(data) VALUES('[1,2,"b",4,"7",6]'); -- id = 2
How can I update elements data->1 and data->3 to something else without using PL/*?
For Postgres 9.5 or later use jsonb_set(); see the later answer by adriaan.
You cannot manipulate selected elements of a json / jsonb type directly. Functionality for that is still missing in Postgres 9.4. You have to do 3 steps:
1. Unnest / decompose the JSON value.
2. Manipulate selected elements.
3. Aggregate / compose the value back again.
To replace the element data->3 (the 4th element, since JSON arrays are 0-based) in the row with id = 1 with a given (new) value ('<new_value>'):
UPDATE test t
SET    data = t2.data
FROM  (
   SELECT id
        , array_to_json(
             array_agg(CASE WHEN rn = 4 THEN '<new_value>' ELSE elem END)
          )::jsonb AS data
   FROM   test t2
        , jsonb_array_elements_text(t2.data) WITH ORDINALITY x(elem, rn)
        -- rn is 1-based while JSON array indexes are 0-based, so data->3 is rn = 4
   WHERE  id = 1
   GROUP  BY 1
   ) t2
WHERE  t.id = t2.id
AND    t.data <> t2.data;  -- avoid empty updates
About json_array_elements_text():
How to turn JSON array into Postgres array?
About WITH ORDINALITY:
PostgreSQL unnest() with element number
You can do this from PostgreSQL 9.5 with jsonb_set:
INSERT INTO test(data) VALUES('[1,2,"a",4,"8",6]');
UPDATE test SET data = jsonb_set(data, '{2}','"b"', false) WHERE id = 1
Try it out with a simple select:
SELECT jsonb_set('[1,2,"a",4,"8",6]', '{2}','"b"', false)
-- [1, 2, "b", 4, "8", 6]
And if you want to update two fields you can do:
SELECT jsonb_set(jsonb_set('[1,2,"a",4,"8",6]', '{0}','100', false), '{2}','"b"', false)
-- [100, 2, "b", 4, "8", 6]
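Applied to the original question (updating data->1 and data->3 in one go), the calls can be nested inside an UPDATE; the replacement values 20 and "x" are just placeholders:
UPDATE test
SET data = jsonb_set(jsonb_set(data, '{1}', '20'), '{3}', '"x"')
WHERE id = 1;

SELECT data FROM test WHERE id = 1;
-- starting from '[1,2,"a",4,"8",6]' this yields: [1, 20, "a", "x", "8", 6]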
I am trying to update each row in a table with data from a random row from another table. Here is the SQL I am currently using:
SELECT Data, RowNumber FROM SampleData
SELECT FLOOR(ABS(CHECKSUM(NEWID())) / 2147483647.0 * 3 + 1) FROM Name
UPDATE Name SET Surname = (SELECT Data FROM SampleData WHERE RowNumber = FLOOR(ABS(CHECKSUM(NEWID())) / 2147483647.0 * 3 + 1))
And here are the results I'm getting.
The first SELECT returns:
Smith 1
Hunt 2
Jones 3
The second SELECT returns:
2
2
3
2
1
3
2
.... continues with a random number between 1 and 3 for each row in the Name table
The UPDATE fails with:
Msg 512, Level 16, State 1, Line 9
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
So my question is: why does the SELECT statement produce a single random number per row, while the UPDATE subquery seems to return multiple rows? I'm using SQL Server 2012 SP1 in case that makes a difference.
You are trying to update a field with a set of data, and that is what produces the error: NEWID() is evaluated for every row of SampleData inside the subquery, so each row is compared against a different random number and the subquery can match (and return) more than one RowNumber.
Create a temporary mapping table and update Name from the join between the ID of Name and the "random" RowNumber of SampleData:
-- Build a temporary mapping table: one random SampleData row number per Name row
SELECT ID,
       FLOOR(ABS(CHECKSUM(NEWID())) / 2147483647.0 * 3 + 1) AS RN
INTO   #tmp
FROM   Name

-- Join through the mapping table so each Name row gets exactly one random value
UPDATE Name
SET    Surname = Data
FROM   #tmp
       JOIN SampleData sd ON sd.RowNumber = #tmp.RN
WHERE  #tmp.ID = Name.ID

SELECT * FROM Name
I have a query that returns a number of rows, some of which are repeated: all column values are the same except one column, let's call it ColumnX.
What I want to do is combine the ColumnX values of all the repeated rows into one value, separated by ',' characters.
The query I use:
SELECT App.ID, App.Name, Grp.ColumnX
FROM (
    SELECT * FROM CustomersGeneralGroups AS CG WHERE CG.GeneralGroups_ID IN (1,2,3,4)
) AS GroupsCustomers
LEFT JOIN Appointments AS App ON GroupsCustomers.Customers_ID = App.CustomerID
INNER JOIN Groups AS Grp ON Grp.ID = GroupsCustomers.GeneralGroups_ID
WHERE App.AppointmentDateTimeStart > @startDate AND App.AppointmentDateTimeEnd < @endDate
The column that differs is ColumnX; the ID and Name values are the same in each of those rows.
For example, if the query returns rows like these:
ID Name ColumnX
1 test1 1
1 test1 2
1 test1 3
The result I want is:
ID Name ColumnX
1 test1 1,2,3
I don't mind doing it with LINQ instead of SQL.
I tried GroupBy in LINQ, but I couldn't get it to combine the ColumnX values.
If you have this data loaded in objects, you can use LINQ methods to achieve this like so:
var groupedRecords = items
    .GroupBy(item => new { item.Id, item.Name })
    .Select(grouping => new
    {
        grouping.Key.Id,
        grouping.Key.Name,
        columnXValues = string.Join(",", grouping.Select(g => g.ColumnX))
    });
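If you'd rather do the combining in SQL Server itself and you're on SQL Server 2017 or later, STRING_AGG gives the same result. This is a sketch built over the query from the question, so the table and column names are the ones shown there:
SELECT App.ID,
       App.Name,
       -- cast to NVARCHAR(MAX) so long lists are not limited to 4000 characters
       STRING_AGG(CAST(Grp.ColumnX AS NVARCHAR(MAX)), ',') AS ColumnXValues
FROM (
    SELECT * FROM CustomersGeneralGroups AS CG WHERE CG.GeneralGroups_ID IN (1,2,3,4)
) AS GroupsCustomers
LEFT JOIN Appointments AS App ON GroupsCustomers.Customers_ID = App.CustomerID
INNER JOIN Groups AS Grp ON Grp.ID = GroupsCustomers.GeneralGroups_ID
WHERE App.AppointmentDateTimeStart > @startDate
  AND App.AppointmentDateTimeEnd < @endDate
GROUP BY App.ID, App.Name;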