The problem is that I need to include a function in the WHERE clause of the SQL generated by Feathers.
If the where clause is assigned only the function, the SQL is generated correctly, but of course the status part is missing:
options.where=fn
SELECT
"id", "feature_name", "status", "priority", "label", "st_asgeojson" FROM "gis34_registration"."geojson_tasks" AS "geojson_tasks"
WHERE
ST_Intersects(geom, ST_transform(ST_MakeEnvelope(12.370044675467057, 55.73287419556607, 12.385791865781385, 55.7422305387, 4326), 25832))
If the where clause is assigned both the status and the function, the generated SQL is wrong:
options.where.status='Registreret'
options.where.fn=fn
SELECT
"id", "feature_name", "status", "priority", "label", "st_asgeojson" FROM "gis34_registration"."geojson_tasks" AS "geojson_tasks"
WHERE
status = 'Registreret' AND
fn = ST_Intersects(geom, ST_transform(ST_MakeEnvelope(12.370044675467057, 55.73287419556607, 12.385791865781385, 55.7422305387, 4326), 25832))
This is the SQL that I need Feathers to generate:
SELECT
"id", "feature_name", "status", "priority", "label", "st_asgeojson" FROM "gis34_registration"."geojson_tasks" AS "geojson_tasks"
WHERE
status = 'Registreret' AND
ST_Intersects(geom, ST_transform(ST_MakeEnvelope(12.370044675467057, 55.73287419556607, 12.385791865781385, 55.7422305387, 4326), 25832))
How do I get Feathers to generate SQL with a function and attributes together?
The answer is to use the predefined property $and. The code below generates the required SQL:
options.where.status='Registreret'
options.where.$and=fn
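For context, a minimal sketch of how fn and the combined where clause might be put together in a Feathers hook, assuming feathers-sequelize and the standard Sequelize.fn/Sequelize.col helpers (the hook shape and variable names here are illustrative, not taken from the original post):
const Sequelize = require('sequelize');

// Build the PostGIS condition as a Sequelize function object
const fn = Sequelize.fn(
  'ST_Intersects',
  Sequelize.col('geom'),
  Sequelize.fn(
    'ST_Transform',
    Sequelize.fn('ST_MakeEnvelope',
      12.370044675467057, 55.73287419556607,
      12.385791865781385, 55.7422305387, 4326),
    25832
  )
);

// Combine the plain attribute filter with the function via $and
options.where = {
  status: 'Registreret',
  $and: fn
};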
I have created a Blazor application with .Net6 framework. This application is generating multiple threads to transfer data from multiple PostgreSQL databases (tenant) to a single PostgreSQL database (group) using dblink.
DBLink group-tenant connections "reporting_cn_salesinvoicing{x}" were created at the moment of database creation.
For example, if the group has three tenants, it also has three dblink servers:
reporting_lk_salesinvoicing1
reporting_lk_salesinvoicing2
reporting_lk_salesinvoicing3
And also has three dblink connections:
reporting_cn_salesinvoicing1
reporting_cn_salesinvoicing2
reporting_cn_salesinvoicing3
DBLink connections are being checked every time before a command is executed using the following command:
SELECT dblink_connect('reporting_cn_salesinvoicing{0}', 'reporting_lk_salesinvoicing{0}')
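A hedged sketch of what such a guard could look like, using dblink_get_connections() to reconnect only when the named connection is not already open (this exact guard is an assumption for illustration, not the code the application actually runs):
-- Reconnect only if the named connection is not already in the list of open connections
SELECT dblink_connect('reporting_cn_salesinvoicing1', 'reporting_lk_salesinvoicing1')
WHERE NOT ('reporting_cn_salesinvoicing1' = ANY (COALESCE(dblink_get_connections(), ARRAY[]::text[])));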
Each thread is operating a foreach loop for every table transfer with the following command:
INSERT INTO public."sales_document_line" ("tenantx_id", "id", "sales_header_id", "description", "quantity", "amount")
SELECT {x}, "id", "sales_header_id", "description", "quantity", "amount"
FROM dblink('reporting_cn_salesinvoicing{x}',
            'SELECT "id", "sales_header_id", "description", "quantity", "amount" FROM public."sales_document_line"')
    AS rt(id uuid, sales_header_id uuid, description text, quantity numeric, amount numeric)
ON CONFLICT ("tenantx_id", "id") DO UPDATE SET
    "sales_header_id" = excluded."sales_header_id",
    "description" = excluded."description",
    "quantity" = excluded."quantity",
    "amount" = excluded."amount"
If the above query is executed in parallel for different tenant databases, it usually throws one of the following errors:
On Postgres11
2F003: password is required
DETAIL: Non-superusers must provide a password in the connection string.
On Postgres14
08001: could not establish connection
DETAIL: missing "=" after "reporting_cn_salesinvoicing{x}" in connection info string
While at other times it throws:
'An exception has been raised that is likely due to a transient failure.'
This error seems misleading to me because it happens even when only one thread is working.
I have tried setting lots of different connection parameters and changing the number of threads, without success.
UPDATE
After a lot of hours, it seems that even with parallel connections it works if the credentials are specified inline in the dblink call instead of using a named connection:
INSERT INTO public."sales_document_line" ("tenantx_id", "id", "sales_header_id", "description", "quantity", "amount")
SELECT {x}, "id", "sales_header_id", "description", "quantity", "amount"
FROM dblink('dbname=salesinvoicing-qa{x} port=5432 host=xxxxxxxxxxx user=postgres password=xxxxxxxxxxxx',
            'SELECT "id", "sales_header_id", "description", "quantity", "amount" FROM public."sales_document_line"')
    AS rt(id uuid, sales_header_id uuid, description text, quantity numeric, amount numeric)
ON CONFLICT ("tenantx_id", "id") DO UPDATE SET
    "sales_header_id" = excluded."sales_header_id",
    "description" = excluded."description",
    "quantity" = excluded."quantity",
    "amount" = excluded."amount";
Any ideas?
I'm trying to import documents into Cosmos DB from a SQL Server database. Each document will have a list of products that a customer has bought, for example:
{
  "name": "John Smith",
  "products": [
    {
      "name": "Pencil Sharpener",
      "description": "Something, this and that."
    },
    {
      "name": "Pencil case",
      "description": "A case for pencils."
    }
  ]
}
In the SQL Server database, the customer and products are stored in separate tables with a one-to-many relationship between the customer and products:
Customer
Id INT
Name VARCHAR
Product
Id INT
CustomerId INT (FK)
Name VARCHAR
Description VARCHAR
I've checked through the documentation, but can't see any mention of how to write the SQL query to map the one-to-many relationship to a single document.
I think there may be a way to do it on the Target Information step (when selecting DocumentDB - Bulk import (single partition collections)), since there's the option to provide a bulk insert stored procedure. Maybe the products can be assigned to the document's products array from within there. I'm just not sure how to go about doing it, as I'm new to Cosmos DB.
I hope that's clear enough and thanks in advance for your help!
It seems that you'd like to return the products info formatted as JSON when you import data from SQL Server using the Azure Cosmos DB: DocumentDB API Data Migration tool. Based on your customer and products table structure and your requirement, I did the following test, which works fine on my side. You can refer to it.
Import data from SQL Server to JSON file
Query
select distinct c.Name, (SELECT p.Name as [name], p.[Description] as [description] from [dbo].[Product] p where c.Id = p.CustomerId for JSON path) as products
from [dbo].[Customer] c
JSON output
[
  {
    "Name": "Jack",
    "products": null
  },
  {
    "Name": "John Smith",
    "products": "[{\"name\":\"Pencil Sharpener\",\"description\":\"Something, this and that\"},{\"name\":\"Pencil case\",\"description\":\"A case for pencils.\"}]"
  }
]
Parsing the products
On the 'Target Information' step, you'll need to use your own version of BulkTransformationInsert.js. On line 32 there is a 'transformDocument' function where you can edit the document. The following will parse the products and then assign them back to the document before returning:
function transformDocument(doc) {
    if (doc["products"]) {
        let products = doc["products"];
        let productsArr = JSON.parse(products);
        doc["products"] = productsArr;
    }
    return doc;
}
I am following Link to integrate the Cloudant NoSQL DB.
There are methods given for create DB, find all, count, search, and update. Now I want to update one key's value in one of my DB's doc files. How can I achieve that? The documentation shows:
updateDoc (name, doc)
Arguments:
name - database name
docID - document to update
But when I pass my database name and doc ID, it throws an error that the database has already been created and cannot be created again. But I wanted to update the doc, so can anyone help me out?
Below is one of the docs of my table 'employee_table' for reference:
{
  "_id": "0b6459f8d368db408140ddc09bb30d19",
  "_rev": "1-6fe6413eef59d0b9c5ab5344dc642bb1",
  "Reporting_Manager": "sdasd",
  "Designation": "asdasd",
  "Access_Level": 2,
  "Employee_ID": 123123,
  "Employee_Name": "Suhas",
  "Project_Name": "asdasd",
  "Password": "asda",
  "Location": "asdasd",
  "Project_Manager": "asdas"
}
So I want to update some values in the above doc of my table 'employee_table'. What parameters do I have to pass to update it?
First of all, there is no concept named 'table' in the NoSQL world.
Second, to update a document you first need to get the document based on some field of the document; you can use Employee_ID or another document field. Then use database.get_query_result:
from cloudant.client import Cloudant

# client is assumed to be set up as in the linked guide, e.g.:
# client = Cloudant(USERNAME, PASSWORD, url=ACCOUNT_URL, connect=True)

db_name = 'Employee'
# throw_on_exists=False means: don't throw an error if the DB is already present
database = client.create_database(db_name, throw_on_exists=False)
EmployeeIDValue = 123123  # Employee_ID is stored as a number in the document

def updateDoc(database, EmployeeIDValue):
    results = database.get_query_result(
        selector={'Employee_ID': {'$eq': EmployeeIDValue}},
    )
    for result in results:
        my_doc_id = result["_id"]
        my_doc = database[my_doc_id]  # this gives you your document
        # Now you can do the update
        my_doc['Employee_Name'] = 'XYZ'
        my_doc.save()  # this command updates the current document
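If you already know the document _id (as in the sample doc above), a hedged alternative sketch using the Document context manager from python-cloudant, which fetches the document on entry and saves it on exit:
from cloudant.document import Document

# Update a single field by _id; the document is saved automatically when the block exits
with Document(database, '0b6459f8d368db408140ddc09bb30d19') as my_doc:
    my_doc['Employee_Name'] = 'XYZ'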
I am currently running a job to transfer data from one table to another via a query, but I can't seem to find a way to convert a column into a nested field containing the column as a child field. For example, I have a column customer_id: 3 and I would like to convert it to {"customer": {"id": 3}}. Below is a snippet of my job data.
query = 'select * FROM [' + BQ_DATASET_ID + '.' + Table_name + '] WHERE user="' + user + '"'
job_data = {
    "projectId": PROJECT_ID,
    'jobReference': {
        'projectId': PROJECT_ID,
        'job_id': str(uuid.uuid4())
    },
    'configuration': {
        'query': {
            'query': query,
            'priority': 'INTERACTIVE',
            'allowLargeResults': True,
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": user,
                "tableId": destinationTable,
            },
            "writeDisposition": "WRITE_APPEND"
        },
    }
}
Unfortunately, if the "customer" RECORD does not exist in the input schema, it is not currently possible to generate that nested RECORD field with child fields through a query. We have features in the works that will allow schema manipulation like this via SQL, but I don't think it's possible to accomplish this today.
I think your best option today would be an export, transformation to desired format, and re-import of the data to the desired destination table.
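As a rough illustration of that export/transform/re-import route, here is a hedged Python sketch that rewrites newline-delimited JSON exported from the source table so that customer_id becomes a nested customer record before the file is loaded into the destination table (the file names and the single transformed field are assumptions):
import json

# Rewrite exported newline-delimited JSON: move customer_id under a nested customer record
with open('export.json') as src, open('transformed.json', 'w') as dst:
    for line in src:
        row = json.loads(line)
        customer_id = row.pop('customer_id', None)
        if customer_id is not None:
            row['customer'] = {'id': customer_id}
        dst.write(json.dumps(row) + '\n')

The transformed file can then be loaded into the destination table with a schema that declares customer as a RECORD containing an id field.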
A simple solution is to run:
select customer_id as customer.id ....
I am writing to a SQL Server table via the RODBC package, specifically the function sqlSave. It seems that the default var type is varchar(255) for this function. I tried to use the varTypes argument that is listed in the documentation, but it fails.
Here is the data frame called spikes20 with its class structure; this in turn is what I am trying to save via sqlSave:
sapply(spikes20, class)
Date Day EWW PBR BAC CHTP FB SPY
"Date" "factor" "numeric" "numeric" "numeric" "numeric" "numeric" "numeric"
Here is the code which attempts to write to SQL Server:
require(RODBC)
varTypes = c(as.Date="Date")
channel <-odbcConnect("OptionsAnalytics", uid="me", pwd="you")
sqlSave (channel, spikes20, tablename = NULL, append=TRUE, rownames = FALSE, colnames = TRUE, safer = FALSE, addPK = FALSE, varTypes=varTypes )
The error message that I get says:
Warning messages:
In sqlSave(channel, spikes20, tablename = NULL, append = TRUE, rownames = FALSE, :
column(s) as.Date 'dat' are not in the names of 'varTypes'
I tried to change varTypes to:
varTypes=c(Date="Date")
then the error message becomes:
Error in sqlSave(channel, spikes20, tablename = NULL, append = TRUE, rownames = FALSE, :
[RODBC] Failed exec in Update
22007 241 [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed when converting date and/or time from character string.
Any help will be appreciated. It seems I cannot decipher how to use varTypes correctly...
First, are you really trying to append to a table named NULL?
As far as issues with varTypes go, in my experience I have had to provide a mapping for all of the variables in the data frame, even though the documentation for the varTypes argument says:
"an optional named character vector giving the DBMSs datatypes to be used for
some (or all) of the columns if a table is to be created"
You need to make sure that the names of your varTypes vector are the column names and the values are the data types, as recommended here. So following their example you would have:
tmp <- sqlColumns(channel, correctTableName)
varTypes = as.character(tmp$TYPE_NAME)
names(varTypes) = as.character(tmp$COLUMN_NAME)
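A hedged sketch of how the constructed varTypes could then be passed to sqlSave (the table name "spikes20" is an assumption, and unlike the original call this passes an explicit tablename and colnames = FALSE so column names are not written as a data row):
sqlSave(channel, spikes20, tablename = "spikes20", append = TRUE,
        rownames = FALSE, colnames = FALSE, safer = FALSE, addPK = FALSE,
        varTypes = varTypes)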
varTypes = c(somecolumn="datetime") works for me.