I have JSON in a field, but I need to check its schema before I process it. I need to know whether anything has been added to or removed from the schema.
Is there a way to extract the JSON schema from a JSON string so I can compare it to a known schema?
An online example is http://jsonschema.net/, but I want to do the same thing in T-SQL.
SQL Server doesn't support any JSON schema binding.
If your JSON is simple or flat, you can use
SELECT [key] FROM OPENJSON(@json)
to find all keys at the first level and compare them with some expected key set.
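As a sketch of that comparison (assuming SQL Server 2016+ and an expected key set you define yourself; the example keys are made up):

```sql
DECLARE @json NVARCHAR(MAX) = N'{"id": 1, "name": "x", "extra": true}';

-- Known schema: the first-level keys we expect to see (example names).
DECLARE @expected TABLE ([key] NVARCHAR(100));
INSERT INTO @expected VALUES ('id'), ('name');

-- Keys present in the JSON but not expected (added fields).
SELECT j.[key] AS AddedKey
FROM OPENJSON(@json) AS j
WHERE j.[key] NOT IN (SELECT [key] FROM @expected);

-- Expected keys missing from the JSON (removed fields).
SELECT e.[key] AS MissingKey
FROM @expected AS e
WHERE e.[key] NOT IN (SELECT [key] FROM OPENJSON(@json));
```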
I have looked and found some instances where something similar is being done for websites, etc.
I have a SQL table that I am accessing in FileMaker Pro (through ESS) via an ODBC connection to the SQL database, and I have everything I need except one field (LNL_BLOB) in one table (duo.MMOBJS), which is an image "(image, null)" and cannot be accessed via the ODBC connection.
What I am hoping to accomplish is to find a way so that when an image is placed in that field, it is ALSO converted to Base64 in another field in the same table. Also, the database creator has a "view" (a foreign concept to us FileMaker developers) with this same data, called "dbo.VW_BLOB_IMAGES", if that is helpful.
If there is a field with Base64 text, I can decode it within FileMaker to get the image.
What thoughts do you all have? Is there an even better way?
NOTE: I am using many tables and lots of the data in the app I have made; this image is not the only reason I created the ODBC connection.
Well, one way to get Base64 out of SQL Server is to trick the XML engine into converting your column to Base64, then strip out the XML:
SELECT SUBSTRING(Q.Base64Data, 7, LEN(Q.Base64Data) - 9)
FROM (
    SELECT (
        SELECT LNL_BLOB AS B
        FROM duo.MMOBJS
        FOR XML RAW('r'), BINARY BASE64
    ) AS [Base64Data]
) AS [Q]
You'd probably want to add that to your SELECT statement or to a view rather than to the table itself; alternatively, you could write a trigger that maintains the extra field using that definition.
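If you go the view route, a per-row version could look something like the sketch below. The SUBSTRING strips the `<r B="` prefix (6 characters) and the `"/>` suffix (3 characters) of the generated XML; the key column LNL_BLOBKEY and the view name are assumptions, not taken from your actual schema:

```sql
-- Sketch: a view exposing the image column as Base64 text.
-- LNL_BLOBKEY and the view name are assumed, not real names.
CREATE VIEW duo.VW_MMOBJS_BASE64 AS
SELECT m.LNL_BLOBKEY,
       SUBSTRING(x.Base64Data, 7, LEN(x.Base64Data) - 9) AS Base64Image
FROM duo.MMOBJS AS m
CROSS APPLY (
    SELECT (SELECT m.LNL_BLOB AS B FOR XML RAW('r'), BINARY BASE64) AS Base64Data
) AS x;
```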
I work with the MEAN stack: Node, Express, Angular, MongoDB.
The project has an orders module.
The customer selects a product with specifications, adds it to the cart, and places the order: chooses a payment method and finalizes.
During exports.createOrder = function (req, res), a record is created in the Mongo database with its unique _id.
I need to save this data to a SQL Server database as well.
That's why, at this point, I additionally create a connection to SQL Server using the mssql library, assign variables with sql.Request().input('Order_ID', sql.NVarChar(255), order._id) and others, and then call the stored procedure .execute('AddOrder'), located on the SQL Server, which adds the data to the orders table.
The problem is that the saved _id is different in the two databases; the other variables are the same.
When the order is updated in SQL Server, a new row is created whose _id now matches the one in Mongo, but my old row is not overwritten - which is logical, because the update matches on Order_ID.
All the record's data is saved except _id, which ends up different.
For example, in the Mongo database I have 586a8871a14d27e81a55533d, but in mssql it is 586a8871a14d27e81a55533f.
Mongo creates a new _id; specifically, the 3-byte counter changes.
I want accurate data in both databases; some of the data simply needs to be saved to SQL in addition to Mongo.
A secondary identifier in mssql is not needed / not used.
Is there a way to pass the same _id to the SQL Server database when the order is placed, so that Mongo does not create a new _id at that moment?
And why, for example, if I set _id to auto-increment, does the _id column in the SQL Server database end up NULL?
How do I pass an additional new_id, set to auto-increment, to the SQL Server database? It is written to Mongo, but throughout the project it is UNDEFINED.
Mongo ids and SQL ids are different by default; in SQL the default id is usually an integer type. You need to create an id column in SQL that is a string type (a primary key called "id" of string type) to be able to copy the id from Mongo to SQL.
It would be great if you could provide some code for a more detailed answer.
I think the issue is that the Mongo ObjectId is binary, so converting it to a string just shows a textual representation of the binary. Instead of sql.NVarChar(255), can you use sql.Binary(12)? See this post from much smarter people: Has anyone found an efficient way to store BSON ObjectId values in an SQL database?
I want to return all the documents in a Cloudant database but include only some of the fields. I know that you can make a GET request against https://$USERNAME.cloudant.com/$DATABASE/_all_docs, but there doesn't seem to be a way to select only certain fields.
Alternatively, you can POST to /db/_find and include selector and fields in the JSON body. However, is there a universal selector, similar to SELECT * in SQL databases?
You can use {"_id": {"$gt": 0}} as your selector to match all the documents, although you should note that this is not going to be performant on large data sets.
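For example, the request body for a POST to /db/_find could look like this (the fields list is just an illustration - substitute your own field names):

```json
{
  "selector": { "_id": { "$gt": 0 } },
  "fields": ["_id", "name"]
}
```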
EDIT: The XML value is saved in an XML column in SQL Server with the entire transaction.
I have a general question I suppose regarding the integrity of XML values stored in a SQL Server database.
We are working with very important data elements related to healthcare. We currently use a BizTalk server that parses very complex looped and segmented files for eligibility; BizTalk parses the file, pushes out an XML "value", does some validation, and then pushes it to the data tables.
I have a request from a Director of mine to create a report off of those XML values.
So I have trouble doing this for a couple of reasons:
1) I would like to understand what exactly the XML contains. Does this data retain its integrity regardless of whether we store the value in a table or in the XML?
2) Consistency - will this data be consistent? Or does repeatedly using XML values to join the existing table to the XML "table" make consistency an issue?
3) Accuracy - I would like this data to be accurate and consistent. I guess I'm having a hard time trusting that this data is available in the same form as the data in a table...
Am I being overcautious here? Or are there valid reasons why creating reports for external clients from this would not be a good idea?
Let me know if I can provide anything else. I'm looking for high-level comments; code should be somewhat irrelevant, other than that we have to use a value in the XML to render other values in the XML for linking purposes.
Off the bat, I can think that this may not be consistent in that it's not set up like a DB table: no primary key, no duplicate checks, no indexing, etc. Is this true as well?
Thanks in advance!
I think this article will answer your concerns: http://msdn.microsoft.com/en-us/library/hh403385.aspx
If you are treating a row with an XML column as your grain, the database will keep it transactionally consistent. With the XML type, you can use XML indexes to speed up your queries, which is an advantage over storing this as varchar(max). Does this answer your question?
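As a sketch of what that looks like (the table, column, and XPath names below are assumptions for illustration, not your actual schema):

```sql
-- Sketch: an xml column with a primary XML index (names assumed).
CREATE TABLE dbo.Eligibility (
    EligibilityID INT IDENTITY PRIMARY KEY,
    Payload       XML NOT NULL
);

-- A primary XML index speeds up value()/nodes() queries against Payload.
CREATE PRIMARY XML INDEX IX_Eligibility_Payload
    ON dbo.Eligibility (Payload);

-- Pulling a value out of the XML for reporting (path is an assumption).
SELECT EligibilityID,
       Payload.value('(/Transaction/MemberID)[1]', 'VARCHAR(20)') AS MemberID
FROM dbo.Eligibility;
```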
Is there a direct route that is pretty straightforward? (i.e., can SQL Server read XML?)
Or, is it best to parse the XML and just transfer it in the usual way via ADO.Net either as individual rows or perhaps a batch update?
I realize there may be solutions that involve large complex stored procs--while I'm not entirely opposed to this, I tend to prefer to have most of my business logic in the C# code. I have seen a solution using SQLXMLBulkLoad, but it seemed to require fairly complex SQL code.
For reference, I'll be working with about 100 rows at a time with about 50 small pieces of data for each (strings and ints). This will eventually become a daily batch job.
Any code snippets you can provide would be very much appreciated.
SQL Server 2005 and up have a datatype called XML, which can store XML - untyped, or typed with an XSD schema.
You can basically fill columns of type XML from an XML literal string, so you can easily use a normal INSERT statement to put the XML contents into that field.
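As a minimal sketch of that (table and column names assumed):

```sql
CREATE TABLE dbo.ImportedData (
    ID   INT IDENTITY PRIMARY KEY,
    Data XML NOT NULL
);

-- The string literal is implicitly converted to the xml type on insert.
INSERT INTO dbo.ImportedData (Data)
VALUES (N'<rows><row id="1" name="example"/></rows>');
```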
Marc
You can use the OPENXML function and the sp_xml_preparedocument stored procedure to easily convert your XML into rowsets.
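A minimal sketch of that pattern (element and attribute names assumed):

```sql
DECLARE @xml NVARCHAR(MAX) =
    N'<rows><row id="1" name="a"/><row id="2" name="b"/></rows>';
DECLARE @doc INT;

-- Parse the XML once into an internal representation.
EXEC sp_xml_preparedocument @doc OUTPUT, @xml;

SELECT id, name
FROM OPENXML(@doc, '/rows/row', 1)  -- 1 = attribute-centric mapping
WITH (id INT '@id', name NVARCHAR(50) '@name');

-- Free the parsed document when done.
EXEC sp_xml_removedocument @doc;
```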
If you are using SQL Server 2008 (or 2005), it has a native xml datatype. You can associate an XSD schema with xml variables and insert directly into columns of type xml.
Yes, SQL Server 2005 and above can parse XML out of the box.
You use the nodes, value and query methods to break it down how you want, whether into values or attributes.
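A quick sketch of the nodes()/value() approach (element names assumed):

```sql
DECLARE @x XML = N'<items><item id="1"><name>widget</name></item></items>';

-- nodes() shreds the XML into rows; value() extracts typed scalars.
SELECT i.n.value('@id', 'INT')                  AS id,
       i.n.value('(name)[1]', 'NVARCHAR(50)')   AS name
FROM @x.nodes('/items/item') AS i(n);
```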
Some shameless plugging:
Importing XML into SQL Server
Search XML Column in SQL
XML data and an XML document can have different meanings.
While the xml type is good for data, it does not preserve formatting (whitespace is removed), so in some cases (e.g. configuration files) the best option is nvarchar.