I am currently loading data from Snowflake using its REST API, the SQL API. The issue is that Snowflake uses an unusual format for fields of DATE type when building the response JSON.
I have this example field metadata:
{
"name" : "...",
"database" : "...",
"schema" : "...",
"table" : "...",
"type" : "date",
"scale" : null,
"precision" : null,
"length" : null,
"collation" : null,
"nullable" : true,
"byteLength" : null
}
In the result set its value is "9245", but running the same query in the browser shows that the actual value is 1995-04-25.
What magic function decodes this integer back into a date?
Based on the documentation, under Getting the Data From the Results:
DATE
Integer value (in a string) of the number of days since the epoch (e.g. 18262).
Related: Why is 1/1/1970 the “epoch time”?
Check:
SELECT DATEADD(day, 9245, '1970-01-01'::DATE)
--1995-04-25
SELECT '1970-01-01'::DATE + INTERVAL '9245 DAYS';
-- 1995-04-25
db<>fiddle demo
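The same decoding is easy to do client-side as well; here is a minimal Python sketch (the function name is my own):

```python
from datetime import date, timedelta

def decode_snowflake_date(raw: str) -> date:
    # Snowflake's SQL API returns DATE values as a string holding
    # the number of days since the Unix epoch (1970-01-01).
    return date(1970, 1, 1) + timedelta(days=int(raw))

print(decode_snowflake_date("9245"))  # 1995-04-25
```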
Related
In my SQL Table, I have a column storing JSON with a structure similar to the following:
{
"type": "Common",
"items": [
{
"name": "landline",
"number": "0123-4567-8888"
},
{
"name": "home",
"number": "0123-4567-8910"
},
{
"name": "mobile",
"number": "0123-4567-9910"
}
]
}
This is the table structure I am using:
CREATE TABLE StoreDp(
    [JsonData] [nvarchar](max),
    [Type] AS (json_value([JsonData], 'lax $.type')) PERSISTED,
    [Items] AS (json_value([JsonData], N'lax $.items[*].name')) PERSISTED
)
Now, when I try to insert the sample JSON (serialized) into the table column [JsonData], I get the following error:
JSON path is not properly formatted. Unexpected character '*' is found at position 3.
I was expecting the data to be inserted with [Items] having the value "[landline, home, mobile]".
I have validated the JSONPath expression, and it works fine everywhere except SQL Server.
Update: Corrected the SQL Server version.
SQL Server cannot shred and rebuild JSON using wildcard paths and JSON_VALUE.
You have to use a combination of OPENJSON and STRING_AGG, plus STRING_ESCAPE if you want the result to be valid JSON.
SELECT
  (
    SELECT '[' + STRING_AGG('"' + STRING_ESCAPE(j.name, 'json') + '"', ',') + ']'
    FROM OPENJSON(sd.JsonData, '$.items')
    WITH (
      name varchar(20)
    ) j
  )
FROM StoreDp sd;
db<>fiddle
You could only do this in a computed column by using a scalar UDF; however, those have major performance implications and should generally be avoided. I suggest you just use a view instead.
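For reference, here is the same shred-and-rebuild sketched in plain Python, to show the value the query above is meant to produce (sample data taken from the question):

```python
import json

json_data = """{
  "type": "Common",
  "items": [
    {"name": "landline", "number": "0123-4567-8888"},
    {"name": "home",     "number": "0123-4567-8910"},
    {"name": "mobile",   "number": "0123-4567-9910"}
  ]
}"""

# Shred: pull out every items[*].name; rebuild: serialize back to a JSON array.
names = [item["name"] for item in json.loads(json_data)["items"]]
items_json = json.dumps(names)
print(items_json)  # ["landline", "home", "mobile"]
```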
I am doing R&D on a duplicate-error scenario in the SF Bulk API, and I have found that I am not able to perform insert and update operations simultaneously for contacts with the same external ID within a single batch.
I received a duplicate error. Screen capture for reference:
https://www.screencast.com/t/ReE41vuzb
When the batch contains different external IDs I do not get any error, but when an external ID is repeated within a single batch I receive the error below.
{
"success" : false,
"created" : false,
"id" : null,
"errors" : [ {
"message" : "Duplicate external id specified: 7401",
"fields" : [ "Origami_ID__c" ],
"statusCode" : "DUPLICATE_VALUE",
"extendedErrorDetails" : null
  } ]
}
Although there is no duplicate on the target side.
You just need the last record among the multiple records that share the same external ID.
To do that, follow the logic below:
// Create a map and populate it with the last record for each unique id
Map<String, MyObject__c> mapWithUniqueIdMyObject = new Map<String, MyObject__c>();
MyObject__c currentObject;
// Iterate backwards so the first record seen for each id is the last one in the list
for (Integer currentPosition = myObjectListWith5000Records.size() - 1; currentPosition >= 0; currentPosition--) {
    currentObject = myObjectListWith5000Records.get(currentPosition);
    // Only put the object in the map if no object with this unique id is there yet
    if (!mapWithUniqueIdMyObject.containsKey(currentObject.Origami_ID__c)) {
        mapWithUniqueIdMyObject.put(currentObject.Origami_ID__c, currentObject);
    }
}
upsert mapWithUniqueIdMyObject.values();
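The same keep-the-last-record logic can be sketched in Python (field names mirror the question; the sample batch is hypothetical):

```python
def dedupe_keep_last(records):
    # Walk the list in reverse so the first record seen for each external id
    # is actually the last occurrence in the original batch.
    by_id = {}
    for record in reversed(records):
        by_id.setdefault(record["Origami_ID__c"], record)
    return list(by_id.values())

batch = [
    {"Origami_ID__c": "7401", "Name": "old value"},
    {"Origami_ID__c": "7402", "Name": "other"},
    {"Origami_ID__c": "7401", "Name": "new value"},  # same external id as the first
]
print(dedupe_keep_last(batch))
```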
I am following Link to integrate Cloudant NoSQL DB.
There are methods given for create DB, find all, count, search, and update. Now I want to update one key's value in one of my DB's documents. How can I achieve that? The documentation shows:
updateDoc (name, doc)
Arguments:
name - database name
docID - document to update
But when I pass my database name and doc ID, it throws an error saying the database is already created and cannot be created again. I wanted to update the doc, though. Can anyone help me out?
Below is one of the docs from my table 'employee_table' for reference:
{
"_id": "0b6459f8d368db408140ddc09bb30d19",
"_rev": "1-6fe6413eef59d0b9c5ab5344dc642bb1",
"Reporting_Manager": "sdasd",
"Designation": "asdasd",
"Access_Level": 2,
"Employee_ID": 123123,
"Employee_Name": "Suhas",
"Project_Name": "asdasd",
"Password": "asda",
"Location": "asdasd",
"Project_Manager": "asdas"
}
I want to update some values in the above doc of my table 'employee_table'. What parameters do I have to pass to perform the update?
First of all, there is no concept called "table" in the NoSQL world.
Second, to update a document you first need to fetch it based on some input field of the document; you can use Employee_ID or any other document field. Then use database.get_query_result:
db_name = 'Employee'
# throw_on_exists=False means: do not throw an error if the DB is already present
database = client.create_database(db_name, throw_on_exists=False)
employee_id_value = 123123  # Employee_ID is stored as a number in the document

def update_doc(database, employee_id_value):
    results = database.get_query_result(
        selector={'Employee_ID': {'$eq': employee_id_value}})
    for result in results:
        my_doc_id = result["_id"]
        my_doc = database[my_doc_id]  # this gives you your document
        # Now you can do the update
        my_doc['Employee_Name'] = 'XYZ'
        my_doc.save()  # this command updates the current document

update_doc(database, employee_id_value)
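The Mango selector `{'Employee_ID': {'$eq': value}}` is just an equality match on that field. As a plain-Python sketch of what the selector does (this is an illustration, not the Cloudant API):

```python
def matches_selector(doc, selector):
    # Minimal equality-only subset of Cloudant's Mango selector syntax.
    for field, condition in selector.items():
        if doc.get(field) != condition["$eq"]:
            return False
    return True

doc = {"_id": "0b6459f8", "Employee_ID": 123123, "Employee_Name": "Suhas"}
print(matches_selector(doc, {"Employee_ID": {"$eq": 123123}}))  # True
```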
I have a MongoDB collection as below.
{
"_id": {
"$oid": "57048f2f60b18f8e186e65b6"
},
"ID": 1,
"Room_ID": 303,
"StartTime": "2016-03-12T11:00:00Z",
"EndTime": "2016-03-12T12:00:00Z",
"Login_ID": "ABCDE"
}
I'm using a db.collection.find() query to fetch the results, and I am trying to display the date in two parts: 1. DD-MM-YY and 2. HH-MM-SS.
I have two ideas for this: 1. get the date in the desired format straight from the collection.find() command, or 2. after fetching the records into a cursor, parse them in the AngularJS code to separate the date and time values.
I've tried MongoDB's $dateToString but couldn't get the results; in fact, I am confused by it.
db.RoomReservation.aggregate({ $project: {"Login_ID": "ABCDE", dt: { $dateToString: { format: "%Y-%m-%d", date: "$StartTime"}}}});
assert: command failed: {
"errmsg" : "exception: can't convert from BSON type String to Date",
"code" : 16006,
"ok" : 0
} : aggregate failed
Can someone please help me in getting the output of collection.find() in the following format
Room_ID, Date, StartTime, EndTime, Login_ID
303, 2016-03-12, 11:00, 12:00, ABCDE
The aggregation fails because StartTime is stored as a string, not as a BSON Date, so $dateToString cannot convert it. Since the value is already an ISO 8601 string, you can simply parse your date in the Angular part using filters, if you want:
<span>{{ reservation.StartTime | date:'dd-MM-yyyy # HH:mm:ss Z' }}</span>
(here reservation stands for whatever scope variable holds the fetched document). You can find more date format options here.
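Because StartTime is an ISO 8601 string, splitting it into separate date and time parts needs only string parsing; a Python sketch of option 2, using the sample values from the question:

```python
from datetime import datetime

start = "2016-03-12T11:00:00Z"
end = "2016-03-12T12:00:00Z"

def split_parts(iso_string):
    # Parse the ISO 8601 timestamp, then format date and time separately.
    dt = datetime.strptime(iso_string, "%Y-%m-%dT%H:%M:%SZ")
    return dt.strftime("%Y-%m-%d"), dt.strftime("%H:%M")

print(split_parts(start))  # ('2016-03-12', '11:00')
```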
I am using Azure SQL Database and trying to export the results of a query in the following format.
Required Query Result:
{ "results": [{...},{...}], "response": 0 }
From this example : https://msdn.microsoft.com/en-us/library/dn921894.aspx
I am using the SQL below, but I am not sure how to add another "response" property as a sibling of the root "results" property.
Current Query:
SELECT name, surname
FROM emp
FOR JSON AUTO, ROOT('results')
Output of Query:
{ "results": [
{ "name": "John", "surname": "Doe" },
{ "name": "Jane", "surname": "Doe" } ] }
Use FOR JSON PATH instead of FOR JSON AUTO. See the Format Query Results as JSON with FOR JSON (SQL Server) page for several examples, including dot-separated column names and queries over nested SELECTs.
There is no built-in option for this exact format, so perhaps the easiest way is to format the response manually, something like:
declare @resp nvarchar(20) = '20'
SELECT '{"results":' +
       (SELECT * FROM emp FOR JSON PATH) +
       ', "response": ' + @resp + ' }'
FOR JSON does the harder part (formatting the table), and you just need to wrap it.
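A quick Python check that this kind of string concatenation yields valid JSON with the two sibling properties (the array literal below is the sample FOR JSON output from the question):

```python
import json

# What the FOR JSON PATH subquery returns for the sample rows
results_json = '[{"name":"John","surname":"Doe"},{"name":"Jane","surname":"Doe"}]'
resp = '0'

# Same concatenation the SQL performs: wrap the array and add a sibling property
wrapped = '{"results":' + results_json + ', "response": ' + resp + '}'
parsed = json.loads(wrapped)
print(parsed["response"])  # 0
```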