How to update a doc in Cloudant NoSQL DB - cloudant

I am following Link to integrate Cloudant NoSQL DB.
There are methods given for create DB, find all, count, search, and update. Now I want to update one key's value in one of my DB's docs. How can I achieve that? The documentation shows:
updateDoc (name, doc)
Arguments:
name - database name
docID - document to update
but when I pass my database name and doc ID, it throws a "database already created, can not create db" error. But I wanted to update a doc, not create a database. So can anyone help me out?
Below is one of the docs from my table 'employee_table' for reference -
{
"_id": "0b6459f8d368db408140ddc09bb30d19",
"_rev": "1-6fe6413eef59d0b9c5ab5344dc642bb1",
"Reporting_Manager": "sdasd",
"Designation": "asdasd",
"Access_Level": 2,
"Employee_ID": 123123,
"Employee_Name": "Suhas",
"Project_Name": "asdasd",
"Password": "asda",
"Location": "asdasd",
"Project_Manager": "asdas"
}
So I want to update some values in the above doc of my table 'employee_table'. What parameters do I have to pass to update it?

First of all, there is no concept called a table in the NoSQL world.
Second, to update a document you first need to fetch it based on some field of the document. You can use Employee_ID or any other document field, then use database.get_query_result:
db_name = 'Employee'
database = client.create_database(db_name, throw_on_exists=False)
# throw_on_exists=False means: don't throw an error if the DB is already present
EmployeeIDValue = 123123  # use an int so it matches the type stored in the document

def updateDoc(database, EmployeeIDValue):
    results = database.get_query_result(
        selector={'Employee_ID': {'$eq': EmployeeIDValue}},
    )
    for result in results:
        my_doc_id = result["_id"]
        my_doc = database[my_doc_id]  # ===> this gives you your document
        # Now you can do the update
        my_doc['Employee_Name'] = 'XYZ'
        my_doc.save()  # ===> this command updates the current document
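For completeness, a minimal sketch of how the function above might be wired up with the python-cloudant client (ACCOUNT_URL, USERNAME and PASSWORD are placeholders, not values from the question):

from cloudant.client import Cloudant

client = Cloudant(USERNAME, PASSWORD, url=ACCOUNT_URL, connect=True)
database = client.create_database('Employee', throw_on_exists=False)
updateDoc(database, 123123)  # updates Employee_Name on the matching document
client.disconnect()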

Related

Laravel update record control

I'm building a Laravel application where I have a Reservation model. After a new record is created, the user may want to update it.
I'm looking for a solution to check whether the UpdateRequest values are the same as the actual record's and, if there are changes, only update the modified values.
Example, the original record has:
{'id' => 1, 'name' => 'Jhon', 'surname' => 'Doe'}
While the UpdateRequest has:
{'id' => 1, 'name' => 'Jhon', 'surname' => 'Kirby'}
So, my questions are:
How do I check which values are different?
How do I update only the specified values?
I've tried looking at the Laravel documentation, but it doesn't seem to cover this situation. Is there any solution, or should I just update all the values?
Thank you
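For reference, a minimal sketch of one way Eloquent supports this (assuming a Reservation model and a validated UpdateRequest; the method and variable names are illustrative, not taken from your code). fill() applies the incoming values, getDirty() reports only the attributes that actually changed, and save() then writes just those columns:

public function update(UpdateRequest $request, Reservation $reservation)
{
    $reservation->fill($request->validated());

    $changed = $reservation->getDirty();   // e.g. ['surname' => 'Kirby']

    if (! empty($changed)) {
        $reservation->save();              // UPDATE is issued for the changed attributes only
    }

    return $reservation;
}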

Unable to assign self generated _id as reference in other collection in mongo shell

I am trying to create two collections in MongoDB, named roles and users. I read that _id is a self-generated id returned by MongoDB on each document insertion, but if we insert a document with an explicit _id key, no new ObjectId is generated.
So I created the roles collection like below:
db.roles.insertOne({"_id" : 101, "name": "admin", "type":1})
db.roles.insertOne({"_id" : 102, "name": "guest", "type":1})
Now, I tried to use these _ids from the roles collection in my users collection like below -
db.users.insertOne({"username": "test", "password": "test", role: ObjectId(101)})
But it throws me an error saying -
invalid object id: length :
However, if I try to insert a Mongo-generated ID, something like ObjectId("600045071dbbafd62cdd6045"), it is able to insert the document.
Can anyone please tell me what I might be doing wrong?
You need to reference the role the same way you inserted its _id, i.e. as a plain value: role: 101. ObjectId() only accepts a 12-byte id (a 24-character hex string), which is why ObjectId(101) fails with the length error.
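A minimal sketch in the shell, reusing the collections from the question:

db.users.insertOne({ "username": "test", "password": "test", "role": 101 })
// the reference is then resolved by matching the plain value
db.roles.findOne({ "_id": 101 })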

Snowflake dynamic masking masks underlying table: the derivative tables are not masked and views become empty?

I have a raw table with a variant column of JSON data.
There are some normal views (not materialized views) and tables created from the JSON events in the raw table.
After applying a masking policy (via a UDF) on the variant column of the raw table for the bi_analyst role, I found two issues:
The tables derived from the underlying table are not masked for the bi_analyst role;
The views derived from the underlying table become empty for the bi_analyst role;
Does anyone know why this happens? Does the dynamic masking feature not support views on the underlying table?
What I would like is to mask the underlying data so that all tables and views derived from it are also masked for the specified role.
The tables are easy to deal with, since I can just apply the masking policy to them as well.
However, I have no idea about the views. How can I still access the views with a role that should be able to see the data but not the sensitive columns?
The UDF is:
-- JavaScript UDF to mask pii data --
use role ACCOUNTADMIN;
CREATE OR REPLACE FUNCTION full_address_masking(V variant)
RETURNS variant
LANGUAGE JAVASCRIPT
AS
$$
if ("detail" in V) {
if ("latitude" in V.detail) {
V.detail.latitude = "******";
}
if ("longitude" in V.detail) {
V.detail.longitude = "******";
}
if ("customerAddress" in V.detail) {
V.detail.customerAddress = "******";
}
}
return V;
$$;
The Masking policy is:
-- Create a masking policy using JavaScript UDF --
create or replace masking policy json_address_mask as (val variant) returns variant ->
CASE
WHEN current_role() IN ('ACCOUNTADMIN') THEN val
WHEN current_role() IN ('BI_ANALYST') THEN full_address_masking(val)
ELSE full_address_masking(val)
END;
The sql command to set masking policy on raw data is:
-- Set masking policy --
use role ACCOUNTADMIN;
alter table DB.PUBLIC.RAW_DATA
modify column EVERYTHING
set masking policy json_address_mask;
The masking policy is applied on the variant column EVERYTHING, whose data structure looks like:
{
"detail": {
"customAddress": "******",
"id": 1,
"latitude": "******",
"longitude": "******"
},
"source": "AAA"
}
A derivative table is:
create or replace table DB.SCHEMA_A.TABLE_A
as
select * from DB.PUBLIC.RAW_DATA
where everything:source='AAA';
grant select on table DB.schema_A.table_A to role bi_analyst;
A view is:
create or replace view DB.SCHEMA_A.VIEW_A as (
    select
        everything:account::string as account,
        everything:detail:latitude::float as detail_latitude,
        everything:detail:longitude::float as detail_longitude
    from
        DB.PUBLIC.RAW_DATA
    where
        everything:source::string = 'AAA'
);
grant select on view DB.SCHEMA_A.VIEW_A to role bi_analyst;
The result is that RAW_DATA is masked, TABLE_A is not masked at all, and VIEW_A returns 0 rows when queried with the BI_ANALYST role.
#1 - When you create a table from a table that has masked data, the new table contains whatever data the creating role could see in the masked table. So, in your example, TABLE_A has unmasked data because it was created by a role that has access to the unmasked values. The masking policy does not automatically get applied to the new table.
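If the derived table should be masked as well, one option (a sketch that simply mirrors the ALTER statement from the question) is to attach the same policy to its column:

alter table DB.SCHEMA_A.TABLE_A
modify column EVERYTHING
set masking policy json_address_mask;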
#2 - I believe your only issue is that the JSON in your example isn't correctly formed, which is why you are getting NULL values. When I fixed the JSON to the following, it works fine using the same function and masking policy that you've posted:
{
"detail":{
"latitude": 132034034.00,
"longitude": 12393438583732,
"id": 1,
"customAddress" : "XXX Road, XXX city, UK"
},
"source": "AAA"
}
Masked Result:
{
"detail": {
"customAddress": "XXX Road, XXX city, UK",
"id": 1,
"latitude": "******",
"longitude": "******"
},
"source": "AAA"
}
The issue of the tables not being masked is explained well by @Mike in his answer. One solution is simply to create the derivative tables using a role that is restricted by the masking policy.
The issue with the views is the type of the masked value "******", which is a string, while the actual types of the latitude and longitude fields are float.
When creating the view, I still cast the latitude and longitude fields to float:
create or replace view DB.SCHEMA_A.VIEW_A as (
    select
        everything:account::string as account,
        everything:detail:latitude::float as detail_latitude,
        everything:detail:longitude::float as detail_longitude
    from
        DB.PUBLIC.RAW_DATA
    where
        everything:source::string = 'AAA'
);
There is a hidden error from casting "******" to float, but Snowflake still goes ahead and creates the view. When I query it with the BI_ANALYST role, it returns 0 rows.
So the workaround is casting those fields to variant:
create or replace view DB.SCHEMA_A.VIEW_A as (
    select
        everything:account::string as account,
        everything:detail:latitude::variant as detail_latitude,
        everything:detail:longitude::variant as detail_longitude
    from
        DB.PUBLIC.RAW_DATA
    where
        everything:source::string = 'AAA'
);
This is not perfect because it completely changes the definition of the view: no role can get the actual float/number type of the data, not even ACCOUNTADMIN.

How to import documents that have arrays with the Cosmos DB Data Migration Tool

I'm trying to import documents from a SQL Server database. Each document will have a list of products that a customer has bought, for example:
{
  "name": "John Smith",
  "products": [
    {
      "name": "Pencil Sharpener",
      "description": "Something, this and that."
    },
    {
      "name": "Pencil case",
      "description": "A case for pencils."
    }
  ]
}
In the SQL Server database, the customer and products are stored in separate tables with a one-to-many relationship between the customer and products:
Customer
  Id INT
  Name VARCHAR
Product
  Id INT
  CustomerId INT (FK)
  Name VARCHAR
  Description VARCHAR
I've checked through the documentation, but can't see any mention of how to write the SQL query to map the one-to-many relationship into a single document.
I think there may be a way to do it: on the Target Information step (when selecting DocumentDB - Bulk import (single partition collections)) there's the option to provide a bulk insert stored procedure. Maybe the products can be assigned to the document's products array from within there. I'm just not sure how to go about it, as I'm new to Cosmos DB.
I hope that's clear enough and thanks in advance for your help!
It seems that you'd like to return the products info formatted as JSON when you import data from SQL Server using the Azure Cosmos DB: DocumentDB API Data Migration tool. Based on your customer and products table structure and your requirement, I did the following test, which works fine on my side. You can refer to it.
Import data from SQL Server to JSON file
Query
select distinct c.Name, (SELECT p.Name as [name], p.[Description] as [description] from [dbo].[Product] p where c.Id = p.CustomerId for JSON path) as products
from [dbo].[Customer] c
JSON output
[
{
"Name": "Jack",
"products": null
},
{
"Name": "John Smith",
"products": "[{\"name\":\"Pencil Sharpener\",\"description\":\"Something, this and that\"},{\"name\":\"Pencil case\",\"description\":\"A case for pencils.\"}]"
}
]
Parsing the products
On the 'Target Information' step, you'll need to use your own version of BulkTransformationInsert.js. On line 32 there is a 'transformDocument' function where you can edit the document. The following will parse the products and assign them back to the document before returning:
function transformDocument(doc) {
    if (doc["products"]) {
        let products = doc["products"];
        let productsArr = JSON.parse(products);
        doc["products"] = productsArr;
    }
    return doc;
}

Convert column into nested field in destination table on load job in Big Query

I am currently running a job to transfer data from one table to another via a query, but I can't seem to find a way to convert a column into a nested field containing the column as a child field. For example, I have a column customer_id: 3 and I would like to convert it to {"customer": {"id": 3}}. Below is a snippet of my job data.
query = 'select * FROM [' + BQ_DATASET_ID + '.' + Table_name + '] WHERE user="' + user + '"'
job_data = {
    "projectId": PROJECT_ID,
    "jobReference": {
        "projectId": PROJECT_ID,
        "jobId": str(uuid.uuid4())  # the jobs API expects "jobId" here
    },
    "configuration": {
        "query": {
            "query": query,
            "priority": "INTERACTIVE",
            "allowLargeResults": True,
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": user,
                "tableId": destinationTable,
            },
            "writeDisposition": "WRITE_APPEND"
        },
    }
}
Unfortunately, if the "customer" RECORD does not exist in the input schema, it is not currently possible to generate that nested RECORD field with child fields through a query. We have features in the works that will allow schema manipulation like this via SQL, but I don't think it's possible to accomplish this today.
I think your best option today would be an export, transformation to the desired format, and re-import of the data into the desired destination table.
A simple solution is to run:
select customer_id as customer.id ....
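Whether the dotted alias above is accepted depends on the dialect. In BigQuery standard SQL (rather than the legacy SQL used in the job above), the usual way to build the nested field is with STRUCT; a hedged sketch, with the table reference as a placeholder:

select
  customer_id,
  struct(customer_id as id) as customer
from `project.dataset.source_table`

Written to a destination table, this produces a RECORD column customer with a child field id.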
