Bulk Load in SQL Server failed to load float values - sql-server

I have to transfer data from one database server to a SQL Server. I'm using SQLServerBulkCopy to do that:
// connection1 is with the source system and
// connection2 is with the destination SQL Server
Statement statement = connection1.createStatement();
ResultSet resultSet = statement.executeQuery("select * from db.table");
SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(connection2);
bulkCopy.setDestinationTableName("tableName");
bulkCopy.writeToServer(resultSet);
I'm getting the following error while doing that:
com.microsoft.sqlserver.jdbc.SQLServerException: Data type float is not supported in bulk copy.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:226)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.getDestTypeFromSrcType(SQLServerBulkCopy.java:1443)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.createInsertBulkCommand(SQLServerBulkCopy.java:1464)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkCopyCommand(SQLServerBulkCopy.java:1611)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.doInsertBulk(SQLServerBulkCopy.java:1553)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.access$200(SQLServerBulkCopy.java:63)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy$1InsertBulk.doExecute(SQLServerBulkCopy.java:705)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7240)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2869)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkLoadBCP(SQLServerBulkCopy.java:733)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1669)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeResultSet(SQLServerBulkCopy.java:641)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:579)
Is there any way to get around this issue?

Try creating an SSIS package in SSDT. Or use the "Import and Export Data" tool if it's just a basic import.
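If you'd rather stay in JDBC, a commonly suggested workaround (a sketch only — "id" and "reading" are placeholder column names, and it assumes the source database can cast the offending column to a type the driver's bulk copy does support, such as decimal) is to cast the float column in the source query so the ResultSet metadata reports a supported type:
// Sketch: cast the unsupported float column at the source
Statement statement = connection1.createStatement();
ResultSet resultSet = statement.executeQuery(
        "select id, cast(reading as decimal(18, 6)) as reading from db.table");
SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(connection2);
bulkCopy.setDestinationTableName("tableName");
bulkCopy.writeToServer(resultSet);
bulkCopy.close();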

Related

Perl DBI / MS ODBC Driver (Linux: RHEL) / SQL-Server: How to insert/update BLOB varbinary(max) data?

New to SQL-Server. I'm attempting to load a PDF to a SQL-Server table (data type varbinary(max)) via Perl/MS ODBC driver/DBD::ODBC using the following (simplified) code:
use DBI qw(:sql_types);
open my $pdfFH, '<:raw', 'test.pdf' or die "open: $!";
my @pdf = <$pdfFH>; close $pdfFH;
my $pdfStr = join('', @pdf);
my $dbh = <...valid db-handle ...>;
my $sth = $dbh->prepare(qq(
insert into
TestTable(Report)
values
(?)));
$sth->bind_param(1,$pdfStr,DBI::SQL_VARBINARY);
$sth->execute;
Error:
DBD::ODBC::st bind_param failed: [Microsoft][ODBC Driver 17 for SQL Server]Invalid precision value (SQL-HY104) at ./t_sqlserver.pl line 37.
DBD::ODBC::st execute failed: [Microsoft][ODBC Driver 17 for SQL Server]COUNT field incorrect or syntax error (SQL-07002) at ./t_sqlserver.pl line 38.
I am able to successfully load other data types. An alternative is to load the pdf locally from the file system using OPENROWSET(BULK...) but I would prefer to load directly to avoid moving the file from Linux to Windows.
The driver should be clever enough to guess the correct type most of the time. Try binding the parameter without specifying the type at all.
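For example (a sketch against the code in the question; if inference still fails, note that SQL_LONGVARBINARY, not the fixed-precision SQL_VARBINARY, is the DBI type that usually maps to varbinary(max)):
# Let DBD::ODBC infer the parameter type:
$sth->bind_param(1, $pdfStr);
# Or, if inference fails, bind it explicitly as a long binary:
$sth->bind_param(1, $pdfStr, { TYPE => DBI::SQL_LONGVARBINARY });
$sth->execute;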

DolphinDB error: SegmentedTable does not support direct access. Please use sql query to retrieve data

dbDir = '/tests/dolphindb/valueDB'
devDir = '/tests/dolphindb/dev.csv'
db = database(dbDir)
dev = db.loadTable(`dev)
saveText(dev, devDir)
I want to export the table "dev" as a CSV file, but I encountered this error message:
Execution was completed with exception
SegmentedTable does not support direct access. Please use sql query to retrieve data
I wonder if I have to load all the data into memory to export it as a CSV file.
Yes, the input table for saveText must be a non-partitioned, in-memory table, so run a SQL query to pull the data into memory first, as shown below.
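A sketch, assuming the table fits in memory (paths as in the question):
db = database(dbDir)
dev = db.loadTable(`dev)
t = select * from dev    // the SQL query materializes the data in memory
saveText(t, devDir)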

How do I specify a specific database in a SQL server when creating an ODBC connection on Windows?

I am working off of a server housing various SQL databases (accessed via Microsoft SQL Server Management Studio) and am going to use R to perform analyses and explore a specific database within the server. I have network security that permits communication between machines, drivers installed on the R server, and RODBC installed.
When I attempt to establish a Windows ODBC connection in Control Panel > Administrative Tools > Data Sources, I can only add a data source for the entire SQL Server, not just the specific database I want to look at. I have pasted the code I have been experimenting with below.
library(RODBC)
channel <- odbcConnect("Example", uid="xxx", pwd="****")
sqlTables(channel)
sqlTables(ch, tableType = "TABLE")
res <- sqlFetch(ch, "samp.le", max = 15) #not recognizing as a table
library(RODBC)
ch <- odbcDriverConnect('driver={"SQL Server"}; server=Example; database=dbasesample; uid="xxxx", pwd = "****"')
Response: Warning messages:
1: In odbcDriverConnect("driver={\"SQL Server\"}; server=sample; database=dbasesample; uid=\"xxxx", pwd = \"xxxx\"") :
[RODBC] ERROR: state IM002, code 0, message [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
2: In odbcDriverConnect("driver={\"SQL Server\"}; server=sample; database=dbasesample; uid=\"xxxx\", pwd = \"xxxx!\"") :
ODBC connection failed
Any insight into this issue would be much appreciated.
Although you can specify the database, schema and table while querying with the sqlQuery() function, e.g.
library(RODBC)
con = odbcConnect(dsn = 'local')
sample_query = sqlQuery(con,'select * from db.dbo.table')
I have not found a way to define the database from within the function parameters when using sqlFetch() or sqlSave(). An indirect way would be to define the default database in the DSN (as written in the comments), but then you would need a different DSN for each database you would like to use.
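Note that odbcDriverConnect() does accept the database in the connection string, provided the string is a single semicolon-separated list of key=value pairs with no stray quotes. A sketch (server name and credentials are placeholders):
library(RODBC)
ch <- odbcDriverConnect('driver={SQL Server};server=Example;database=dbasesample;uid=xxxx;pwd=****')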
A better solution would be to use the odbc and DBI packages instead of RODBC, and define the database in the connection statement e.g.
library(dplyr)
library(DBI)
library(odbc)
con <- dbConnect(odbc::odbc(), dsn = 'local', database = 'db')
copy_to(con, rr2, temporary = F)  # rr2: the data frame to upload
By the way, I found copy_to to be much faster than the equivalent sqlSave of RODBC.

Waiting for DB restore to finish using sqlalchemy on SQL Server 2008

I'm trying to automate my DB restores during development, using T-SQL on SQL Server 2008, with sqlalchemy and pyodbc as the transport.
The command I'm executing is:
"""CREATE DATABASE dbname
restore database dbname FROM DISK='C:\Backups\dbname.bak' WITH REPLACE,MOVE 'dbname_data' TO 'C:\Databases\dbname_data.mdf',MOVE 'dbname_log' TO 'C:\Databases\dbname_log.ldf'"""
Unfortunately, in SQL Server Management Studio I see that, after the code has run, the DB remains in the state "Restoring...".
If I restore through Management Studio, it works. If I use subprocess to call "sqlcmd", it works. pymssql has problems with authentication and doesn't even get that far.
What might be going wrong?
The BACKUP and RESTORE statements run asynchronously so they don't terminate before moving on to the rest of the code.
Using a while statement as described at http://ryepup.unwashedmeme.com/blog/2010/08/26/making-sql-server-backups-using-python-and-pyodbc/ solved this for me:
# set up your DB connection, cursor, etc.
cur.execute('BACKUP DATABASE ? TO DISK=?',
            ['test', r'd:\temp\test.bak'])
while cur.nextset():
    pass
I was unable to reproduce the problem when restoring directly from pyodbc (without sqlalchemy) by doing the following:
connection = pyodbc.connect(connection_string)  # ensure autocommit is set to `True` in the connection string
cursor = connection.cursor()
affected = cursor.execute("""CREATE DATABASE test
RESTORE DATABASE test FROM DISK = 'D:\\test.bak' WITH REPLACE, MOVE 'test_data' TO 'D:\\test_data.mdf', MOVE 'test_log' to 'D:\\test_log.ldf' """)
while cursor.nextset():
    pass
Some questions that need clarification:
What is the code in use to do the restore using sqlalchemy?
What version of the SQL Server ODBC driver is in use?
Are there any messages in the SQL Server log related to the restore?
Thanks to geographika for the Cursor.nextset() example!
For SQL Alchemy users, and thanks to geographika for the answer: I ended up using the “raw” DBAPI connection from the connection pool.
It is exactly as geographika's solution but with a few additional pieces:
import logging
import sqlalchemy as sa

logger = logging.getLogger(__name__)

driver = 'SQL+Server'
name = 'servername'
sql_engine_str = 'mssql+pyodbc://' + name + '/master?driver=' + driver
engine = sa.create_engine(sql_engine_str, connect_args={'autocommit': True})
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    sql_cmd = """
        RESTORE DATABASE [test]
        FROM DISK = N'...\\test.bak'
        WITH FILE = 1,
        MOVE N'test'
        TO N'...\\test_Primary.mdf',
        MOVE N'test_log'
        TO N'...\\test_log.ldf',
        RECOVERY,
        NOUNLOAD,
        STATS = 5,
        REPLACE
    """
    cursor.execute(sql_cmd)
    while cursor.nextset():
        pass
except Exception as e:
    logger.error(str(e), exc_info=True)
Five things fixed my problem with identical symptoms.
Found that my test.bak file contained the wrong mdf and ldf files:
>>> cursor.execute(r"RESTORE FILELISTONLY FROM DISK = 'test.bak'").fetchall()
[(u'WRONGNAME', u'C:\\Program Files\\Microsoft SQL ...),
(u'WRONGNAME_log', u'C:\\Program Files\\Microsoft SQL ...)]
Created a new bak file and made sure to set the copy-only backup option
Set the autocommit option for my connection.
connection = pyodbc.connect(connection_string, autocommit=True)
Used the connection.cursor only for a single RESTORE command and nothing else
Corrected the test_data MOVE to test in my RESTORE command (courtesy of @beargle).
affected = cursor.execute("""RESTORE DATABASE test FROM DISK = 'test.bak' WITH REPLACE, MOVE 'test' TO 'C:\\test.mdf', MOVE 'test_log' to 'C:\\test_log.ldf' """)

PowerBuilder DSN Creation

I am new to PowerBuilder.
I want to retrieve data from MS Access tables and update the corresponding SQL Server tables. I am not able to create a permanent DSN for MS Access because I have to select different MS Access files with the same table structure. I can create a permanent DSN for SQL Server.
Please help me create a DSN dynamically when selecting the MS Access file and push all the table data to SQL Server using PowerBuilder.
Also, please provide the full PowerBuilder code to accomplish this if possible.
For Access we strongly suggest not using DSNs at all, as it is one less thing for someone to have to configure and one less thing for the users to screw up (see Using DSN-Less Connections). You should see if PowerBuilder has a similar option.
Create the DSN manually in the ODBC administrator
Locate the entry in the registry
Export the registry syntax into a .reg file
Read and edit the .reg file dynamically in PB
Write it back to the registry using PB's RegistrySet ( key, valuename, valuetype, value )
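A sketch of that last step (hypothetical DSN name and registry values; the exact entries come from the .reg file you exported, and the Access driver name varies by machine):
string ls_mdb
ls_mdb = "C:\data\selected.mdb"
// Point the DSN at the file the user picked
RegistrySet("HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\MyAccessDSN", &
	"DBQ", RegString!, ls_mdb)
// Register the DSN name so the ODBC driver manager can find it
RegistrySet("HKEY_CURRENT_USER\Software\ODBC\ODBC.INI\ODBC Data Sources", &
	"MyAccessDSN", RegString!, "Microsoft Access Driver (*.mdb)")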
Once you've got your DSN set up, there are many options to push data from one database to the other.
You'll need two transaction objects in PB, each pointing to its own database. Then, you could use a Data Pipeline object to manage the actual data transfer.
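A sketch of that setup (DSN names and credentials are placeholders; p_my_pipeline and dw_errors stand in for a Pipeline object and an error DataWindow you have already defined):
transaction t_access, t_sql
pipeline l_pipe
integer li_rc

t_access = create transaction
t_access.DBMS = "ODBC"
t_access.DBParm = "ConnectString='DSN=AccessDSN'"
connect using t_access;

t_sql = create transaction
t_sql.DBMS = "ODBC"
t_sql.DBParm = "ConnectString='DSN=SqlDSN;UID=xxx;PWD=****'"
connect using t_sql;

// Run a painted Pipeline object from Access (source) to SQL Server (destination)
l_pipe = create pipeline
l_pipe.DataObject = "p_my_pipeline"
li_rc = l_pipe.Start(t_access, t_sql, dw_errors)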
You want to do the DSN-less connection referenced by Tony. I show an example of doing it at PBDJ and have a code sample over at Sybase's CodeXchange.
I am using this code, try it!
//// Profile access databases accdb format
SQLCA.DBMS = "OLE DB"
SQLCA.AutoCommit = False
SQLCA.DBParm = "PROVIDER='Microsoft.ACE.OLEDB.12.0',DATASOURCE='C:\databasename.accdb',DelimitIdentifier='No',CommitOnDisconnect='No'"
Connect using SQLCA;
If SQLCA.SQLCode = 0 Then
Open ( w_rsre_frame )
else
MessageBox ("Cannot Connect to Database", SQLCA.SQLErrText )
End If
or
//// Profile access databases mdb format
transaction aTrx
long resu
string database
database = "C:\databasename.mdb"
aTrx = create transaction
aTrx.DBMS = "OLE DB"
aTrx.AutoCommit = True
aTrx.DBParm = "PROVIDER='Microsoft.Jet.OLEDB.4.0',DATASOURCE='"+database+"',PBMaxBlobSize=100000,StaticBind='No',PBNoCatalog='YES'"
connect using aTrx ;
if atrx.sqldbcode = 0 then
messagebox("","Connection success to database")
else
messagebox("Error code: "+string(atrx.sqlcode),atrx.sqlerrtext+ " DB Code Error: "+string(atrx.sqldbcode))
end if
// do stuff...
destroy atrx
