SQL Server: Export Query Result as JSON Object

I am using Azure SQL Server and trying to export the results of a query in the following format.
Required Query Result:
{ "results": [{...},{...}], "response": 0 }
From this example : https://msdn.microsoft.com/en-us/library/dn921894.aspx
I am using this SQL, but I am not sure how to add another "response" property as a sibling of the root property "results".
Current Query:
SELECT name, surname
FROM emp
FOR JSON AUTO, ROOT('results')
Output of Query:
{ "results": [
{ "name": "John", "surname": "Doe" },
{ "name": "Jane", "surname": "Doe" } ] }

Use FOR JSON PATH instead of FOR JSON AUTO. See the Format Query Results as JSON with FOR JSON (SQL Server) page for several examples, including dot-separated column names and queries from SELECT statements.

There is no built-in option for this format, so perhaps the easiest way is to format the response manually, something like:
DECLARE @resp nvarchar(20) = '20';
SELECT '{"results": ' +
       (SELECT * FROM emp FOR JSON PATH) +
       ', "response": ' + @resp + ' }';
FOR JSON does the harder part (formatting the table); you just need to wrap it.
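For a quick way to sanity-check the target shape outside of T-SQL, here is a hedged sketch using SQLite's JSON1 functions instead of FOR JSON (the emp rows are made up; json() is needed because the JSON subtype does not survive the subquery boundary):

```python
import sqlite3
import json

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT, surname TEXT)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("John", "Doe"), ("Jane", "Doe")])

# json_object/json_group_array play the roles that FOR JSON PATH and the
# manual wrapping play in T-SQL: build the array, then add the sibling key.
row = con.execute("""
    SELECT json_object(
        'results',
        json((SELECT json_group_array(json_object('name', name, 'surname', surname))
              FROM emp)),
        'response', 0)
""").fetchone()
doc = json.loads(row[0])
```

The same idea carries over to the T-SQL answer above: build the array first, then wrap it with the sibling "response" key.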

Related

How to use JsonPath expression with wildcards in MS SQL 2019's Json_Value?

In my SQL Table, I have a column storing JSON with a structure similar to the following:
{
    "type": "Common",
    "items": [
        {
            "name": "landline",
            "number": "0123-4567-8888"
        },
        {
            "name": "home",
            "number": "0123-4567-8910"
        },
        {
            "name": "mobile",
            "number": "0123-4567-9910"
        }
    ]
}
This is the table structure I am using:
CREATE TABLE StoreDp (
    [JsonData] [nvarchar](max),
    [Type] AS (json_value([JsonData], 'lax $.type')) PERSISTED,
    [Items] AS (json_value([JsonData], N'lax $.items[*].name')) PERSISTED
)
Now, when I am trying to insert the sample JSON (serialized) in the table column [JsonData], I am getting an error
JSON path is not properly formatted. Unexpected character '*' is found at position 3.
I was expecting data to be inserted with value in [Items] as "[landline, home, mobile]"
I have validated the JSONPath expression, and it works fine everywhere except in SQL Server.
Update: Corrected the SQL server version.
SQL Server cannot shred and rebuild JSON using wildcard paths and JSON_VALUE.
You would have to use a combination of OPENJSON and STRING_AGG, and also STRING_ESCAPE if you want the result to be valid JSON.
SELECT
    (
        SELECT '[' + STRING_AGG('"' + STRING_ESCAPE(j.name, 'json') + '"', ',') + ']'
        FROM OPENJSON(sd.JsonData, '$.items')
        WITH (name varchar(20)) j
    ) AS Items
FROM StoreDp sd;
You could only do this in a computed column by using a scalar UDF. However, scalar UDFs have major performance implications and should generally be avoided; I suggest you just make a view instead.
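To see what the OPENJSON + STRING_AGG query is meant to produce, here is the same shred-and-rebuild done in plain Python (a sketch; the sample document is the one from the question):

```python
import json

json_data = """{
    "type": "Common",
    "items": [
        {"name": "landline", "number": "0123-4567-8888"},
        {"name": "home", "number": "0123-4567-8910"},
        {"name": "mobile", "number": "0123-4567-9910"}
    ]
}"""

# Shred the items array (OPENJSON's job), collect the names, and
# re-serialize them as a JSON array string (STRING_AGG's job).
names = [item["name"] for item in json.loads(json_data)["items"]]
items = json.dumps(names)
```

The resulting string is valid JSON, which is what STRING_ESCAPE guarantees on the SQL side.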

Is it possible to parse json by using select statement in Netezza?

I have JSON data in one of the columns of my table and I would like to parse it using a SELECT statement in Netezza, but I am not able to figure it out.
Can you all help me solve this problem?
Let's say I have TableA, which has a column Customer_detail. The data in the customer_detail field looks like this:
'{"Customer":[{"id":"1","name":"mike","address":"NYC"}]}'
Now I would like to query id from customer object of customer_detail column.
Thanks in advance.
From NPS 11.1.0.0 onwards, you can parse and use the json datatype directly in NPS.
Here's an example
SYSTEM.ADMIN(ADMIN)=> create table jtest(c1 jsonb);
CREATE TABLE
SYSTEM.ADMIN(ADMIN)=> insert into jtest values('{"name": "Joe Smith", "age": 28, "sports": ["football", "volleyball", "soccer"]}');
INSERT 0 1
SYSTEM.ADMIN(ADMIN)=> insert into jtest values('{"name": "Jane Smith", "age": 38, "sports": ["volleyball", "soccer"]}');
INSERT 0 1
SYSTEM.ADMIN(ADMIN)=> select * from jtest;
C1
----------------------------------------------------------------------------------
{"age": 28, "name": "Joe Smith", "sports": ["football", "volleyball", "soccer"]}
{"age": 38, "name": "Jane Smith", "sports": ["volleyball", "soccer"]}
(2 rows)
SYSTEM.ADMIN(ADMIN)=> select c1 -> 'name' from jtest where c1 -> 'age' > 20::jsonb ;
?COLUMN?
--------------
"Joe Smith"
"Jane Smith"
(2 rows)
You can refer to https://www.ibm.com/support/knowledgecenter/SSTNZ3/com.ibm.ips.doc/postgresql/dbuser/r_dbuser_functions_expressions.html for more details as well.
Looking at the comment you put above, something like
select customer_detail::json -> 'Customer' -> 0 -> 'id' as id,
customer_detail::json -> 'Customer' -> 0 -> 'name' as name
from ...
This parses the text to JSON on every execution. A more performant approach would be to convert customer_detail to the jsonb datatype.
If the NPS version is below 11.1.x, then JSON handling needs to be done either (a) externally, using SQL to fetch the JSON data and then processing it outside the database, or (b) with a UDF that supports JSON parsing.
For example, using the programming language of your choice, process the JSON outside of SQL:
import nzpy  # install using "python3 -m pip install nzpy"
import os
import json

# assume NZ_USER, NZ_PASSWORD, NZ_DATABASE and NZ_HOST are set
con = nzpy.connect(user=os.environ["NZ_USER"],
                   password=os.environ["NZ_PASSWORD"],
                   host=os.environ["NZ_HOST"],
                   database=os.environ["NZ_DATABASE"],
                   port=5480)

with con.cursor() as cur:
    cur.execute('select customer_detail from ...')
    for (customer_detail,) in cur.fetchall():
        c = json.loads(customer_detail)
        print((c['Customer'][0]['name'], c['Customer'][0]['id']))
Or create a UDF that parses json and use that in the SQL query
If none of those are options, and the JSON is always well formatted (i.e. no new lines, only one key called "id" and one key called "name", etc.), then a regex may be a workaround, though it's not recommended since it's not a real JSON parser:
select regexp_extract(customer_detail,
'"id"[[:space:]]*:[[:space:]]*"([^"]+)"', 1, 1) as id,
regexp_extract(customer_detail,
'"name"[[:space:]]*:[[:space:]]*"([^"]+)"', 1, 1) as name
....
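The same caveat applies outside the database. For comparison, here is the regex fallback as a Python sketch mirroring the regexp_extract patterns above (again, not a real JSON parser):

```python
import re

customer_detail = '{"Customer":[{"id":"1","name":"mike","address":"NYC"}]}'

# Capture the quoted value that follows each key, tolerating
# optional whitespace around the colon.
id_value = re.search(r'"id"\s*:\s*"([^"]+)"', customer_detail).group(1)
name_value = re.search(r'"name"\s*:\s*"([^"]+)"', customer_detail).group(1)
```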

How to escape characters in JSON response for SQL INSERT FROM OPENJSON via PYTHON

Getting an error when sending an API JSON response to a SQL Server database due to an apostrophe (') in the values. See "operator_country_lar" in the second JSON result.
Here is a sample API response:
json_response = '''{
    "num_results": 455161,
    "results": [
        {
            "activity_date": "1975-12-01",
            "activity_id": "50",
            "activity_name": "ORDERED",
            "operator_country_lar": "France",
            "registered_owner_name": "AIR FRANCE"
        },
        {
            "activity_date": "1974-10-01",
            "activity_id": "50",
            "activity_name": "ORDERED",
            "operator_country_lar": "Korea, Democratic People's Republic of",
            "registered_owner_name": "KOREAN AIR LINES"
        }
    ],
    "results_this_page": 2,
    "status": 200}'''
Serializing the response and executing:
query = ("DECLARE @json nvarchar(max) = '" + json_response + "' "
         "INSERT INTO " + self.database + " ([activity_date],[activity_name],[operator_name]) "
         "SELECT * FROM OPENJSON(@json, '$.results') "
         "WITH (activity_date date, activity_name nchar(25), operator_name nchar(256))")
cursor.execute(query)
cnxn.commit()
The first record works fine, but the second record in the response throws an error. How do I escape the apostrophe, or is there a more accurate way to write the SQL OPENJSON statement in Python?
This search-and-replace worked; it simply strips apostrophes, so it may not be perfect, but it's relatively fast at processing JSON. Hope it helps someone.
self.response = json.loads(json.dumps(self.response.json()).replace("'", ""))
Please share below if you find a better solution.
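Rather than deleting apostrophes from the data, a T-SQL string literal can carry them if each single quote is doubled; better still, most drivers let you bind the JSON as a query parameter so no escaping is needed at all. A hedged sketch of the doubling approach (to_tsql_literal is a made-up helper name, not a library function):

```python
import json

def to_tsql_literal(s):
    # In T-SQL, a single quote inside an N'...' literal is escaped by doubling it.
    return "N'" + s.replace("'", "''") + "'"

payload = {"operator_country_lar": "Korea, Democratic People's Republic of"}
literal = to_tsql_literal(json.dumps(payload))
```

With a driver such as pyodbc, passing the JSON as a bound parameter (e.g. `cursor.execute("SELECT * FROM OPENJSON(?, '$.results') WITH (...)", json_response)`) sidesteps literal escaping entirely and is the safer choice.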

How to import documents that have arrays with the Cosmos DB Data Migration Tool

I'm trying to import documents from a SQL Server database. Each document will have a list of products that a customer has bought, for example:
{
    "name": "John Smith",
    "products": [
        {
            "name": "Pencil Sharpener",
            "description": "Something, this and that."
        },
        {
            "name": "Pencil case",
            "description": "A case for pencils."
        }
    ]
}
In the SQL Server database, the customer and products are stored in separate tables with a one-to-many relationship between the customer and products:
Customer
Id INT
Name VARCHAR
Product
Id INT
CustomerId INT (FK)
Name VARCHAR
Description VARCHAR
I've checked through the documentation, but can't see any mention of how to write the SQL query to map the one-to-many relationship to a single document.
I think there may be a way to do it as on the Target Information step (and when selecting DocumentDB - Bulk import (single partition collections)) there's the option to provide a bulk insert stored procedure. Maybe the products can be assigned to the document's products array from within there. I'm just not sure how to go about doing it as I'm new to Cosmos DB.
I hope that's clear enough and thanks in advance for your help!
It seems that you'd like to return the products info formatted as JSON when you import data from SQL Server using the Azure Cosmos DB Data Migration Tool. Based on your Customer and Product table structure and your requirement, I ran the following test, which works fine on my side. You can refer to it.
Import data from SQL Server to JSON file
Query
SELECT DISTINCT c.Name,
       (SELECT p.Name AS [name], p.[Description] AS [description]
        FROM [dbo].[Product] p
        WHERE c.Id = p.CustomerId
        FOR JSON PATH) AS products
FROM [dbo].[Customer] c
JSON output
[
{
"Name": "Jack",
"products": null
},
{
"Name": "John Smith",
"products": "[{\"name\":\"Pencil Sharpener\",\"description\":\"Something, this and that\"},{\"name\":\"Pencil case\",\"description\":\"A case for pencils.\"}]"
}
]
Parsing the products
On the 'Target Information' step, you'll need to use your own version of BulkTransformationInsert.js. On line 32 there is a 'transformDocument' function where you can edit the document. The following will parse the products and assign them back to the document before returning:
function transformDocument(doc) {
    if (doc["products"]) {
        let products = doc["products"];
        let productsArr = JSON.parse(products);
        doc["products"] = productsArr;
    }
    return doc;
}

How can I remove JSON property stored in a SQL Server column?

I have a task to remove a JSON property saved in a SQL Server database table's column. Here is the structure of my table
OrderId Name JSON
In the JSON column I have this JSON data:
{
"Property1" :"",
"Property2" :"",
"metadata-department": "A",
"metadata-group": "B"
}
I have over 500 records that have this json value.
Can I update all the records to remove metadata-department?
This question is old but I thought I'd post another solution for those who may end up here via search engine.
In SQL Server 2016 or SQL Azure, the following works:
DECLARE @info NVARCHAR(100) = '{"name":"John","skills":["C#","SQL"]}'
PRINT JSON_MODIFY(@info, '$.name', NULL)
-- Result: {"skills":["C#","SQL"]}
Source: https://msdn.microsoft.com/en-us/library/dn921892.aspx
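To check what that removal does to the question's document before running an UPDATE over all 500 rows, here is a small Python sketch of the same operation:

```python
import json

row = '{"Property1": "", "Property2": "", "metadata-department": "A", "metadata-group": "B"}'

# Deleting the key client-side mirrors
# JSON_MODIFY(JSON, '$."metadata-department"', NULL) in SQL Server.
doc = json.loads(row)
doc.pop("metadata-department", None)
cleaned = json.dumps(doc)
```

Note that a hyphenated key needs a quoted path in T-SQL: '$."metadata-department"'.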
In my SQL Server environment, I have installed CLR RegEx Replace functions and using these, I've achieved what you wanted:
DECLARE @Text NVARCHAR(MAX) = '{
"Property1" :"",
"Property2" :"",
"metadata-department": "A",
"metadata-group": "B"
}';
PRINT dbo.RegExReplace(@Text, '"metadata-department": "[A-Za-z0-9]",\r\n\s+', '');
Result:
{
"Property1" :"",
"Property2" :"",
"metadata-group": "B"
}
CLR Source: https://www.simple-talk.com/sql/t-sql-programming/clr-assembly-regex-functions-for-sql-server-by-example/
In MySQL it's JSON_REMOVE():
UPDATE orders SET JSON = JSON_REMOVE(JSON, '$."metadata-department"');
https://dev.mysql.com/doc/refman/8.0/en/json-modification-functions.html#function_json-remove
