Error when copying CSV files from Windows directory into SQL Server DB by using Apache NiFi

I am trying to copy CSV files from a local directory into a SQL Server database running on my local machine by using Apache NiFi.
I am new to the tool and have spent a few days googling and building my flow. I managed to connect to the source and the destination, but I am still not able to populate the database because I get the following error: "None of the fields in the record map to the columns defined by the tablename table."
I have been struggling with this for a while and have not been able to find a solution on the web. Any hint would be highly appreciated.
Here are further details.
I have built a simple flow using the GetFile and PutDatabaseRecord processors.
My input is a simple table with 8 columns.
For the GetFile processor (GetCSV) I have added the input directory and left the rest at the defaults.
For the PutDatabaseRecord processor I have referenced the CSVReader and DBCPConnectionPool controller services, set the database type to MS SQL 2012+ (I have the 2019 version), configured the INSERT statement type, entered the schema and the correct table name, and left everything else at the defaults.
The CSVReader is configured with Schema Access Strategy = Use String Fields From Header and CSV Format = Microsoft Excel.
The DBCPConnectionPool has the correct URL, DB driver class name, driver location, DB user and password.
Finally, I have created a table in the database to host the content.
Many thanks in advance!

The warning "None of the fields in the record map to the columns defined by the tablename table." is also obtained when the processor is not able to find the table and this can happen also when the table name is correctly configured in PutDatabaseRecord but there is some issue with user access rights (which ended up to be the actual cause of my error ...).

Related

Schema error when running update-database command to create identity tables for npgsql data provider

I am manually creating the identity tables for a new ASP.NET 6 project following this tutorial. The tutorial is for MS SQL Server and I am using PostgreSQL, so I made the appropriate modifications. Although the process is straightforward, I have a problem when reaching the update-database step, where I get the following error: "Couldn't set schema (Parameter 'schema')". My connection string is as follows: "host=localhost; database=testdb001; schema=testdb001; port=5433; user id=some-user; password=some-password;". I found that the error disappears and the identity tables are successfully created if I remove the schema parameter from the connection string, but then the tables are created in the public schema that PostgreSQL automatically includes when a new DB is created. I do not want this to happen because I want to use another schema name. I visited the connection strings website for PostgreSQL (https://www.connectionstrings.com/postgresql/), where there is a link for Npgsql, and none of the examples there contain the schema parameter. This is the first time I have used Npgsql. Is there a way to create the identity tables in a specific schema?
Maybe a workaround is to specify a schema name in the search_path parameter in the postgresql.conf file, but that would mean adding a name every time I define a new schema. I think a schema name in the connection string is a great choice, and I wonder why it is not accepted by Npgsql.
Respectfully,
Jorge Maldonado
I found the solution.
For Npgsql you must use "search path" as the connection string parameter instead of "schema". So the connection string is as follows
"host=localhost; database=testdb001; search path=testdb001; port=5433; user id=some-user; password=some-password;"
and not
"host=localhost; database=testdb001; schema=testdb001; port=5433; user id=some-user; password=some-password;"
Regards,
Jorge Maldonado
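An alternative, if you would rather keep the connection string minimal, is to set the default search path on the database itself. A minimal sketch, assuming the target schema is testdb001 as above:

-- Create the schema if it does not exist, then make it the default
-- search path for all new connections to this database
CREATE SCHEMA IF NOT EXISTS testdb001;
ALTER DATABASE testdb001 SET search_path TO testdb001;

New sessions then resolve unqualified table names against testdb001 first, so the identity tables land there without any connection string parameter.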

Importing table from Postgres to MS Access and losing records

I have a Postgres table containing nearly 700,000 records. When I import that table into MS Access (via an ODBC data source) I end up with only 250,000 records.
I start with an empty MS Access database (520 KB), select (External Data) / (New Data Source) / (From Other Sources) / (ODBC Database) / (Import the source data) / (Machine Data Source), pick my ODBC Postgres database, and select the table I want. After waiting about 30 seconds I get a message box saying all objects have been successfully imported, followed by a prompt asking if I want to save the import steps.
There are no error messages, but the number of rows in my MS Access version of the table is around 250,000.
Other info...
I'm using MS Office 365 version 1710
I'm using Postgres 9.5.7
I'm using the PostgreSQL ANSI ODBC driver (not sure which version)
There are no signs of any error messages (or warnings).
After the import the Access database is still only 375 MB, well short of the 2 GB limit.
I've checked the 'ODBC Data Sources' app to see how the Postgres ODBC link is configured; there's no obvious problem with it.
The final message that MS Access gives me after the import includes 'all objects imported without errors'.
There is no obvious difference between the records that are getting through and those that aren't.
Why am I losing records, and what can I do to cure it?
Thanks
If you attempt to "slurp" all records from the database at one time, the ODBC driver will stop fetching at some point and just return what it has without warning. It's annoying. As far as I know this has nothing to do with the 32-bit limit.
The way to solve this is to not fetch all records at once but to use the declare/fetch option on the driver. You can set this permanently in the ODBC settings by going to your ODBC properties, selecting "Datasource", and then on "Page 2" checking "Use Declare/Fetch" and setting your cache size (number of rows). I recommend a number somewhere between 5,000 and 50,000. Each batch represents a hit to the database, so you want it to be reasonably large to begin with.
For all practical purposes, the use of declare/fetch will be totally transparent to your application. You won't even notice it. You will on the database admin side, but if your fetch size is sufficiently large, it won't be an issue.
You can also make a one-time edit to the connection string for your particular query. You would add the following to make this work:
UseDeclareFetch=1;Fetch=25000;
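For a DSN-less, one-off import the same options can go into a full connection string. A sketch, with the driver name, host, database, and credentials as placeholders for your own values:

Driver={PostgreSQL ANSI};Server=localhost;Port=5432;Database=mydb;Uid=some-user;Pwd=some-password;UseDeclareFetch=1;Fetch=25000;

UseDeclareFetch and Fetch are the psqlODBC option names; the remaining keywords are the usual ODBC ones and may need adjusting to your setup.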

SELECT * INTO another database on a different SQL instance running older SQL Server

I need to copy some tables from a SQL Server 2016 instance to a SQL Server 2008 instance, like
select *
into [sql8].[DatabaseA].[dbo].[Customers]
from [DatabaseA].[dbo].[Customers]
but I get an error:
Msg 117, Level 15, State 1, Line 9
The object name 'sql8.DatabaseA.dbo.Customers' contains more than the maximum number of prefixes. The maximum is 2.
I have tried generating a script of the data but my machine runs out of memory during SQLCMD execution from the command line.
Looking for recommendations / pointers.
Thanks
I'm guessing you may need to set up the sql8 server as a linked server on the server holding the DB you're trying to get the data into; [sql8] would be the server you want to create the link to, as in [sql8].[DatabaseA].[dbo].[Customers].
To do this from the SSMS GUI, go to Object Explorer > Server Objects > Linked Servers, right-click and select New Linked Server.
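The same setup can also be scripted. A minimal T-SQL sketch, assuming the SQL Server Native Client provider; note that SELECT ... INTO cannot create a table through a four-part name (hence the Msg 117 error), so the remote table must be created first and the copy becomes an INSERT ... SELECT:

-- Register sql8 as a linked server (the provider name is an assumption)
EXEC sp_addlinkedserver
    @server = N'sql8',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'sql8';

-- With the link in place and the table pre-created on sql8:
INSERT INTO [sql8].[DatabaseA].[dbo].[Customers]
SELECT * FROM [DatabaseA].[dbo].[Customers];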
Another possibility: have you tried to import it directly into the new DB? Assuming you can connect to the old database from the new database with credentials...
If this doesn't work, you can use Export Data on the "old DB" (the one you are trying to get the data from) to create an XML or CSV file, or whatever format is applicable, and then use the Import Wizard on the "new DB".
Please forgive me if I misunderstood the question as English is my first language and I went to government schools.

SQLE_NOT_PUBLIC_ID Sybase MobiLink error

I am working on an iOS project that has a Sybase UltraLite database that is synchronized with a Sybase SQL Anywhere 12 database using MobiLink.
Everything was working properly until today, when I decided to add some fields to the main database so that they synchronize to the remote database.
I updated the schema of the consolidated database from the main engine, then updated the schema of the remote database from the consolidated engine, mapped the added fields together, and deployed a new UltraLite database.
Please note that this is not the first time I have done a similar task; I always add fields and sync databases.
After the update, when I synchronize using the blank UltraLite database, MobiLink fails, giving only this error: Synchronization Failed: -1305 (MOBILINK_COMMUNICATIONS_ERROR) %1:201 %2: %3:0
I researched error number 201 in Sybase and it points to SQLE_NOT_PUBLIC_ID,
and in the Sybase documentation the error's probable cause is:
"The option specified in the SET OPTION statement is PUBLIC only. You cannot define this option for any other user."
I have tried to redeploy, and I have tried to move the engine to a Windows PC; everything gives the same error. I have no clue where this SET OPTION statement came from or how to solve it.
Any hints are appreciated!
The problem was just caused by a small network timeout value when setting up the MobiLink stream parameters:
info.stream_parms = (char *)"host=192.168.0.100;port=3309;timeout=1";
I just changed the value from timeout=1 to timeout=300 and it worked!

Advice on transferring data from one DB to another, syntax

I'm fairly new to SQL Server. I have done basic admin, backups, etc. I also spent 2 years doing MySQL for a software company, offering support for their bespoke MySQL program. I'm mainly a tech guy (desktop, networking) but am getting my head around this DB stuff!
I have started with a company that runs SQL Server 2005 and needs some work done, and I am struggling with the syntax more than anything. The company has 4 SQL Servers running the same DBs (program-wise) for 4 different locations.
What I am trying to do is copy the updated cost price list from one server's table to the others, with some arithmetic applied: basically copy server1.parts to server2.parts, multiplied by a currency-conversion field and a markup (%).
That bit seems quite easy, except I cannot get the DBs to link. When I enter the server name, which contains a hyphen (e.g. uk-server1), the syntax is rejected with an error like "can't find 'uk'". Also, am I right that the 4-part address is servername.dbname.schema.table?
Right, OK: previously I was unable to link the two servers. I have now resolved this and the server is linked. I was told the server name might need to be quoted with [], and at first I had no success with this. The problem is the hyphen in the server name (uk-efacs): as soon as I type it, even though the server is linked, the error says it can't find server 'efacs' and that 'uk' is wrong. It is not reading the full server name. Why?
Figured this out by trial and error: the server name just needs brackets, i.e. [uk-efacs].db.schema.table. That now works; I just need to sort out my syntax, as the query still shows errors.
Try creating a Linked Server record on the server you're running this from. In Object Explorer (in SSMS), expand Server Objects, right-click Linked Servers and select New. Select SQL Server, type the name of your remote server, and then try your query again. Bit puzzled, as the snippet you provided
update partmaster
set partmaster.fsunit = [uk-efacs].efacsdb.partmaster.fsunit * [uk-efacs].efacsdb.currency.currate * 1.32
Seems to parse just fine.
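Even when it parses, though, a remote column cannot simply be read in the SET clause; the linked table has to be joined in. A hedged sketch of a joined version, where the join key partno and the single-row currency table are guesses, since the snippet does not show how the rows relate:

UPDATE pm
SET pm.fsunit = src.fsunit * cur.currate * 1.32
FROM partmaster AS pm
JOIN [uk-efacs].efacsdb.dbo.partmaster AS src
    ON src.partno = pm.partno                       -- join key is a guess
CROSS JOIN [uk-efacs].efacsdb.dbo.currency AS cur;  -- assumes one relevant row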
