DolphinDB error: SegmentedTable does not support direct access. Please use sql query to retrieve data - database

dbDir = '/tests/dolphindb/valueDB'
devDir = '/tests/dolphindb/dev.csv'
db = database(dbDir)
dev = db.loadTable(`dev)
saveText(dev, devDir)
I want to export the table "dev" as a CSV file, but I encountered this error message:
Execution was completed with exception
SegmentedTable does not support direct access. Please use sql query to retrieve data
I wonder if I have to load all the data into memory to export it as a CSV file.

Yes. The input table for saveText must be a non-partitioned table, so you first need to retrieve the data from the partitioned table with a SQL query, as the error message suggests.
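For example, a minimal sketch reusing the variables from the question: run a SQL query against the partitioned table to materialize the rows as an in-memory table, then pass that result to saveText. This does load the selected data into memory, so add a WHERE clause to the query if you only need part of the table.
// dbDir, devDir and dev as defined in the question
t = select * from dev    // the SQL query returns an ordinary in-memory table
saveText(t, devDir)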

Related

R: problem with the dplyr::tbl() function due to restricted permissions

I work with large databases that need to be stored on a server.
So, to work with them in RStudio, I have to open a connection to my Microsoft SQL Server with the dbConnect function:
conn <- dbConnect(odbc(),"myconnection",uid="***",pwd="***",schema="dbo",access="readonly")
and in order to use dplyr, I have to create data references with the tbl function:
data <- tbl(conn, "data")
But one of the online data frames contains a column that I can't read because I don't have access to it, although I can read everything else.
The SQL query behind the tbl() function is:
SELECT * FROM data
and this is my problem.
Even when I try to select a specific column it doesn't work (see below), so I can't create my references and I can't work.
select(tbl(conn, "data"), "columnX")
which translates to
SELECT columnX FROM data
I think it is the tbl() function and its "SELECT *" call that blocks me.
Do you know what I can do? Are there similar functions that could resolve my problem?
If you know the columns that you have access to, then one option is to bypass the default SELECT * FROM ... query with your own SQL query.
A remote table is defined by two components:
The database connection
The query to the database
When you connect with the default approach, tbl(conn, 'data'), it defaults to the query SELECT * FROM data.
But here is another approach:
custom_query = 'SELECT columnX FROM data'
remote_table = tbl(conn, dbplyr::sql(custom_query))
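From there, remote_table behaves like any other lazy dbplyr table; a small usage sketch, assuming the connection and custom query above:
remote_table %>% head(10)   # translated and run on the server, no SELECT * involved
collect(remote_table)       # pulls the full result of the custom query into R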

Unable to create external table in Snowflake

I am trying to create an external table in Snowflake and it fails with the error below.
SQL compilation error: invalid property 'auto_refresh' for 'different storage type from cloud provider'
Here are the queries I am trying.
CREATE OR REPLACE EXTERNAL TABLE DEV_EXT_TABLE WITH LOCATION =
@XXX/dev1/metadata/ FILE_FORMAT = (TYPE = PARQUET SKIP_HEADER = 3);
and
CREATE OR REPLACE EXTERNAL TABLE DEV_EXT_TABLE AUTO_REFRESH = TRUE
WITH LOCATION = @XXX/dev1/metadata/ FILE_FORMAT = (TYPE = PARQUET
SKIP_HEADER = 3);
My account is in AWS whereas the stage is in Google Cloud Platform, and this seems to be supported.
https://docs.snowflake.com/en/user-guide/tables-external-auto.html
Also, does Snowflake support auto refresh in cross-cloud deployments or not?
Regarding the following:
=> My account is in AWS whereas the stage is in Google Cloud Platform and this seems to be supported.
https://docs.snowflake.com/en/user-guide/tables-external-auto.html
The parameter controlling this feature has not been fully rolled out yet; however, the documentation implies that the feature is GA, which causes confusion.
Please open a case with Snowflake Support to have the parameter enabled to allow cross cloud auto refresh.
Additionally, SKIP_HEADER is an option for the CSV type only; it is not available for PARQUET.
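So, as a sketch of what the statement should look like once Support has enabled the parameter on your account (same stage path as in the question, with SKIP_HEADER removed):
CREATE OR REPLACE EXTERNAL TABLE DEV_EXT_TABLE
WITH LOCATION = @XXX/dev1/metadata/
AUTO_REFRESH = TRUE
FILE_FORMAT = (TYPE = PARQUET);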

Bulk Load in SQL Server failed to load float values

I have to transfer data from one database server to a SQL Server. I'm using SQLServerBulkCopy to do that:
// connection1 is with the source system and
// connection2 is with the destination SQL Server
Statement statement = connection1.createStatement();
ResultSet resultSet = statement.executeQuery("select * from db.table");
SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(connection2);
bulkCopy.setDestinationTableName("tableName");
bulkCopy.writeToServer(resultSet);
I'm getting the following error while doing that:
com.microsoft.sqlserver.jdbc.SQLServerException: Data type float is not supported in bulk copy.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:226)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.getDestTypeFromSrcType(SQLServerBulkCopy.java:1443)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.createInsertBulkCommand(SQLServerBulkCopy.java:1464)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkCopyCommand(SQLServerBulkCopy.java:1611)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.doInsertBulk(SQLServerBulkCopy.java:1553)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.access$200(SQLServerBulkCopy.java:63)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy$1InsertBulk.doExecute(SQLServerBulkCopy.java:705)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7240)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2869)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkLoadBCP(SQLServerBulkCopy.java:733)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1669)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeResultSet(SQLServerBulkCopy.java:641)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:579)
Is there any way to get around this issue?
Try creating an SSIS package in SSDT. Or, use the "Import and Export Data" tool if it's just a basic import.

Read a file from Azure Blob Storage into Azure SQL Database

I have already tested this design using a local SQL Server Express set-up.
I uploaded several .json files to Azure Storage
In SQL Database, I created an External Data source:
CREATE EXTERNAL DATA SOURCE MyAzureStorage
WITH
(TYPE = BLOB_STORAGE,
LOCATION = 'https://mydatafilestest.blob.core.windows.net/my_dir'
);
Then I tried to query the file using my External Data Source:
select *
from OPENROWSET
(BULK 'my_test_doc.json', DATA_SOURCE = 'MyAzureStorage', SINGLE_CLOB) as data
However, this failed with the error message "Cannot bulk load. The file "prod_EnvBlow.json" does not exist or you don't have file access rights."
Do I need to configure a DATABASE SCOPED CREDENTIAL to access the file storage, as described here?
https://learn.microsoft.com/en-us/sql/t-sql/statements/create-database-scoped-credential-transact-sql
What else can anyone see that has gone wrong and I need to correct?
OPENROWSET is currently not supported on Azure SQL Database as explained in this documentation page. You may use BULK INSERT to insert data into a temporary table and then query this table. See this page for documentation on BULK INSERT.
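A minimal sketch of that BULK INSERT approach, assuming the MyAzureStorage data source from the question; the staging table name and the terminator trick (an unlikely byte value so the whole document lands in a single row) are illustrative only:
CREATE TABLE #JsonStaging (doc nvarchar(max));
BULK INSERT #JsonStaging
FROM 'my_test_doc.json'
WITH (DATA_SOURCE = 'MyAzureStorage',
FIELDTERMINATOR = '0x0b', ROWTERMINATOR = '0x0b', CODEPAGE = '65001');
SELECT doc FROM #JsonStaging;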
Now that OPENROWSET is in public preview, the following works. NB: the credential is only needed if your blob is not public; I tried it on a private blob with the scoped credential option and it worked. Also note that if you are using a SAS key, make sure you delete the leading '?', so the string starts with 'sv' as shown below.
Make sure the blobcontainer/my_test_doc.json section specifies the correct path e.g. container/file.
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2017****************';
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://yourstorage.blob.core.windows.net',
CREDENTIAL= MyAzureBlobStorageCredential);
DECLARE @json varchar(max)
SELECT @json = BulkColumn FROM OPENROWSET(BULK 'blobcontainer/my_test_doc.json',
SINGLE_BLOB, DATA_SOURCE = 'MyAzureBlobStorage',
FORMATFILE_DATA_SOURCE = 'MyAzureBlobStorage') as j;
select @json;
More detail is provided in these docs.

PowerBuilder DSN Creation

I am new to PowerBuilder.
I want to retrieve the data from MS Access tables and update it into the corresponding SQL Server tables. I am not able to create a permanent DSN for MS Access because I have to select different MS Access files with the same table structure. I can create a permanent DSN for SQL Server.
Please help me create a DSN dynamically when selecting the MS Access file, and push all the tables' data to SQL Server using PowerBuilder.
Also, please give the full PowerBuilder code to solve the problem, if possible.
In Access we strongly suggest not using DSNs at all, as it is one less thing for someone to have to configure and one less thing for the users to screw up (see "Using DSN-Less Connections"). You should check whether PowerBuilder has a similar option.
Create the DSN manually in the ODBC administrator
Locate the entry in the registry
Export the registry syntax into a .reg file
Read and edit the .reg file dynamically in PB
Write it back to the registry using PB's RegistrySet ( key, valuename, valuetype, value )
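As a rough PowerScript sketch of steps 4 and 5, setting the values directly instead of parsing a .reg file; the DSN name, driver path and value names here are illustrative, so take the real ones from your exported .reg file:
string ls_key
ls_key = "HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\MyAccessDSN"
// register the DSN name, then its driver and the currently selected database file
RegistrySet ( "HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\ODBC Data Sources", &
	"MyAccessDSN", RegString!, "Microsoft Access Driver (*.mdb)" )
RegistrySet ( ls_key, "Driver", RegString!, "C:\Windows\System32\odbcjt32.dll" )
RegistrySet ( ls_key, "DBQ", RegString!, "C:\data\selected_file.mdb" )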
Once you've got your DSN set up, there are many options to push data from one database to the other.
You'll need two transaction objects in PB, each pointing to its own database. Then, you could use a Data Pipeline object to manage the actual data transfer.
You want to do the DSN-less connection referenced by Tony. I show an example of doing it at PBDJ and have a code sample over at Sybase's CodeXchange.
I am using this code, try it!
//// Profile access databases accdb format
SQLCA.DBMS = "OLE DB"
SQLCA.AutoCommit = False
SQLCA.DBParm = "PROVIDER='Microsoft.ACE.OLEDB.12.0',DATASOURCE='C:\databasename.accdb',DelimitIdentifier='No',CommitOnDisconnect='No'"
Connect using SQLCA;
If SQLCA.SQLCode = 0 Then
Open ( w_rsre_frame )
else
MessageBox ("Cannot Connect to Database", SQLCA.SQLErrText )
End If
or
//// Profile access databases mdb format
transaction aTrx
long resu
string database
database = "C:\databasename.mdb"
aTrx = create transaction
aTrx.DBMS = "OLE DB"
aTrx.AutoCommit = True
aTrx.DBParm = "PROVIDER='Microsoft.Jet.OLEDB.4.0',DATASOURCE='"+database+"',PBMaxBlobSize=100000,StaticBind='No',PBNoCatalog='YES'"
connect using aTrx ;
if atrx.sqldbcode = 0 then
messagebox("","Connection success to database")
else
messagebox("Error code: "+string(atrx.sqlcode),atrx.sqlerrtext+ " DB Code Error: "+string(atrx.sqldbcode))
end if
// do stuff...
destroy atrx
