How do you set object tags dynamically in Snowflake? I have written:
```
snowflake.execute({
    sqlText: `set tag identifier(?) = '$(object_tag_1)',
              identifier(?) = '$(object_tag_2)',
              identifier(?) = '$(object_tag_3)';`,
    binds: [tag_database_object_tag_1, tag_database_object_tag_2, tag_database_object_tag_3]
});
```
This is part of a JavaScript stored procedure, and I noticed that I am not getting the desired values.
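For comparison, here is a minimal pure-SQL sketch using a session variable instead of bind parameters. All object names are hypothetical, and it assumes IDENTIFIER() is accepted in the SET TAG position (as the snippet above implies):
```
-- Hypothetical names; IDENTIFIER($tag_name) resolves the variable to a tag name.
set tag_name = 'my_db.my_schema.object_tag_1';

alter table my_db.my_schema.my_table
  set tag identifier($tag_name) = 'finance';
```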
How can I unload Snowflake data to S3 without using any file format?
For unloading data with a specific extension, we use a file format in Snowflake. For example:
copy into 's3://mybucket/unload/'
from mytable
storage_integration = myint
file_format = (format_name = my_csv_format);
But what I want is to store data without any extension.
SINGLE is what I was looking for. It is one of the parameters we can use with the COPY command, and it creates the file without an extension.
Code:
copy into 's3://mybucket/unload/'
from mytable
storage_integration = myint
file_format = (format_name = my_csv_format)
SINGLE = TRUE;
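One caveat (an assumption based on the COPY INTO location options, not stated in the original answer): SINGLE = TRUE forces everything into one output file, whose size is capped by MAX_FILE_SIZE (16 MB by default), so for larger tables that option may also need to be raised:
```
copy into 's3://mybucket/unload/'
from mytable
storage_integration = myint
file_format = (format_name = my_csv_format)
SINGLE = TRUE
MAX_FILE_SIZE = 104857600;  -- ~100 MB; adjust to the expected unload size
```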
See the note at the link below for a better understanding:
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html#:~:text=comma%20(%2C)-,FILE_EXTENSION,-%3D%20%27string%27%20%7C%20NONE
You can add the parameter FILE_EXTENSION = NONE to your file format. With this parameter, Snowflake does not add a file extension based on your file format (in this case .csv), but uses the passed extension (NONE or any other value).
copy into 's3://mybucket/unload/'
from mytable
storage_integration = myint
file_format = (format_name = my_csv_format file_extension = NONE);
https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html
I am trying to unload Snowflake table data into an S3 bucket in Parquet format, but I am getting the error below.
`SQL compilation error: COPY statement only supports simple SELECT from stage statements for import.`
Below is the syntax of my copy statement:
create or replace stage STG_LOAD
url='s3://bucket/foler'
credentials=(aws_key_id='xxxx',aws_secret_key='xxxx')
file_format = (type = PARQUET);
copy into STG_LOAD from
(select OBJECT_CONSTRUCT(country_cd,source)
from table_1
file_format = (type='parquet')
header='true';
Please let me know if I am missing anything here.
You have to reference named stages using the @ symbol. Also, the header option takes true rather than 'true', and the select subquery needs a closing parenthesis:
copy into @STG_LOAD from
(select OBJECT_CONSTRUCT(country_cd,source)
from table_1 )
file_format = (type='parquet')
header=true;
(Submitted on behalf of a Snowflake User)
I have a test S3 folder called s3://bucket/path/test=integration_test_sanity/file.parquet
I want to load this into Snowflake using the COPY INTO command, but I want to be able to load all the test folders, which have a structure like test=*/file.parquet.
I've tried:
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='test=(.*)/.*'
FILE_FORMAT = (TYPE = parquet)
and also
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='test=.*/.*'
FILE_FORMAT = (TYPE = parquet)
Neither of these works. I was wondering what regex parser is used by Snowflake and which regex I should use to get this to work.
This works, but I can't filter on just the test folders, which can cause issues:
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='.*/.*'
FILE_FORMAT = (TYPE = parquet)
Any recommendations? Thanks!
Try this:
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='.*/test.*[.]parquet'
FILE_FORMAT = (TYPE = parquet)
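As a sketch of a stricter variant that keeps the partition-style filter (an assumption, not tested here): Snowflake applies PATTERN against the full file path, so anchoring on the test= prefix should restrict the match to those folders while still only picking up Parquet files:
```
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='.*test=[^/]+/[^/]+[.]parquet'
FILE_FORMAT = (TYPE = parquet)
```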
I'm creating a simple Master-Detail relationship with ClientDataSets on Delphi XE3 + SQL Server. I have configured Master-Detail through a DataSetField in the client application and with the MasterSource property of the detail TUniQuery in the server application. I'm using one DataSetProvider and one DataSource.
Server Application
**Master table**
object qyREMISION_COMPRA: TUniQuery
Connection = DMBase.BD
SQL.Strings = (SELECT R.ID_REMISION_COMPRA, R.FECHA, R.FACTURA
FROM REMISION_COMPRA R
WHERE R.ID_REMISION_COMPRA =:ID_REMISION_COMPRA)
object qyREMISION_COMPRAID_REMISION_COMPRA: TIntegerField
AutoGenerateValue = arAutoInc
FieldName = ID_REMISION_COMPRA
end
**Detail table**
object qyREMISION_COMPRA_PRODUCTO: TUniQuery
Connection = DMBase.BD
SQL.Strings = (SELECT RP.ID_REMISION_COMPRA_PRODUCTO, RP.ID_REMISION_COMPRA, RP.ID_PRODUCTO
FROM REMISION_COMPRA_PRODUCTO RP
WHERE RP.ID_REMISION_COMPRA=:ID_REMISION_COMPRA
ORDER BY RP.ID_REMISION_COMPRA_PRODUCTO)
SQLUpdate.Strings = (UPDATE REMISION_COMPRA_PRODUCTO
SET ID_REMISION_COMPRA = :ID_REMISION_COMPRA, ID_PRODUCTO = :ID_PRODUCTO
WHERE ID_REMISION_COMPRA_PRODUCTO = :Old_ID_REMISION_COMPRA_PRODUCTO)
MasterSource = datasetREMISION_COMPRA
MasterFields = ID_REMISION_COMPRA
DetailFields = ID_REMISION_COMPRA
**DataSetProvider**
object dspREMISION_COMPRA: TDataSetProvider
DataSet = qyREMISION_COMPRA
Options = [poCascadeDeletes, poCascadeUpdates, poPropogateChanges, poUseQuoteChar]
end
Client Application
**Master ClientDataSet**
object cdsREMISION_COMPRA: TClientDataSet
ProviderName = 'dspREMISION_COMPRA'
RemoteServer = dmProvs.dspCompra
object cdsREMISION_COMPRAqyREMISION_COMPRA_PRODUCTO: TDataSetField
FieldName = 'qyREMISION_COMPRA_PRODUCTO'
end
**Detail ClientDataSet**
object cdsREMISION_COMPRA_PRODUCTO: TClientDataSet
DataSetField = cdsREMISION_COMPRAqyREMISION_COMPRA_PRODUCTO
To save the changes to the database, I only call ApplyUpdates on the master ClientDataSet: cdsREMISION_COMPRA.ApplyUpdates(0).
When I do an insert, it works perfectly, but when I do an update I have problems with triggers in the database, because the application applies the detail changes first and then the update of the master table. Is this normal? Am I doing something wrong?
I am implementing full-text search on a Jackrabbit repository. After going through the examples given at http://jackrabbit.apache.org/ocm-search.html, I am able to perform full-text search on the repository when only 'and' is required in the predicate. For example:
select * from test where name like '%abc%' and type = 'mainPage' and language = 'english'
can be written as
Filter filter = queryManager.createFilter(Paragraph.class);
filter.addContains("name", "abc");
filter.addEqualTo("type", "mainPage");
filter.addEqualTo("language", "english");
But if I try to write the OCM implementation for the following query
select * from test where (name like '%abc%' or name like '%def%') and type = 'mainPage' and language = 'english'
as given below, I get an empty list:
Filter mainFilter = queryManager.createFilter(Paragraph.class);
Filter filter = queryManager.createFilter(Paragraph.class);
filter.addContains("name", "abc");
Filter filter1 = queryManager.createFilter(Paragraph.class);
filter1.addContains("name", "def");
mainFilter = filter.addOrFilter(filter1);
mainFilter.addEqualTo("type", "mainPage");
mainFilter.addEqualTo("language", "english");
I think I am not using OCM full-text search properly. Please suggest the right way to implement OCM full-text search when the predicate contains a large number of 'and' and 'or' conditions.
When I used
filter.addLike("name", "%def%");
it worked fine. I am still wondering why addContains() is not working.