I have a problem I am trying to resolve. We have SQL Server 2005 running a commercial ERP system. The implication is that we cannot change the database structure, and all of the character fields are CHAR or VARCHAR rather than Unicode types (NCHAR, NVARCHAR).
We also have multiple instances of the ERP software, based on country. Each country has its own database on the same database server, which results in variations in the table names based on the instance of the ERP software that is running. For example, the US customer table is called US_CUSTOMER and the UK one is GB_CUSTOMER. We have created a separate database that essentially mirrors the ERP system tables with synonyms, and then views that handle all of our SQL transactions against these synonyms. This was done so we could use LINQ to SQL. Thanks for reading this far :)
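For context, the pattern looks roughly like this (a sketch only; the database, synonym, view, and column names here are hypothetical):

    -- In the shared database: one synonym per country-specific ERP table...
    CREATE SYNONYM dbo.US_CUSTOMER_SYN FOR ERP_US.dbo.US_CUSTOMER;
    GO
    -- ...and a view over the synonym for LINQ to SQL to map to.
    CREATE VIEW dbo.Customer
    AS
    SELECT CustomerId, CustomerName
    FROM dbo.US_CUSTOMER_SYN;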
The issue we have is that we are now implementing Simplified Chinese for the application. In the customer's ERP system, they set the code page so that when the ERP system writes to the base tables, the data is written as multi-byte. My question is: how can I get this multi-byte information translated back to Simplified Chinese? I would like to be able to do this at the database level, since I have both a web application and SSRS reports that need to take advantage of it.
Any ideas or directions? I don't think I can change the codepage, since multiple countries are using the same database server (though different databases).
Thanks ahead of time
Are we saying that 2 varchar characters are being used to store 1 Unicode character?
If so, try CAST to binary and then to nvarchar, etc. (or something similar)
Otherwise, look at COLLATE clauses to coerce data?
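For instance, a sketch combining both suggestions (the table, column, and lengths are assumptions; Chinese_PRC_CI_AS is a collation for code page 936, i.e. Simplified Chinese):

    -- Reinterpret the raw VARCHAR bytes under a Simplified Chinese collation,
    -- then decode them to NVARCHAR. The COLLATE clause relabels the collation
    -- of the expression without converting the underlying bytes.
    SELECT CAST(
             CAST(CAST(CustomerName AS VARBINARY(200)) AS VARCHAR(200))
               COLLATE Chinese_PRC_CI_AS
           AS NVARCHAR(200)) AS CustomerNameUnicode
    FROM dbo.CN_CUSTOMER;

Whether this round-trips cleanly depends on the collations involved, so test it against known-good data first.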
Edit:
A CLR function might be your only bet for using Remus' suggestion of MultiByteToWideChar.
What we ended up doing for this is writing a CLR function that can be called from our SQL statement. We pass in the string and the desired code page and get a converted string returned. The performance is not what we hoped for, but it seemed to be the only path we could find.
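For anyone going down the same path, the T-SQL side might look like the sketch below. The assembly, class, and function names are all hypothetical, and CLR integration must be enabled first:

    -- One-time setup: enable CLR and register the assembly.
    EXEC sp_configure 'clr enabled', 1;
    RECONFIGURE;
    GO
    CREATE ASSEMBLY CodePageUtils
    FROM 'C:\clr\CodePageUtils.dll'
    WITH PERMISSION_SET = SAFE;
    GO
    CREATE FUNCTION dbo.ConvertFromCodePage
    (
        @value    VARBINARY(8000),
        @codePage INT
    )
    RETURNS NVARCHAR(4000)
    AS EXTERNAL NAME CodePageUtils.[UserDefinedFunctions].ConvertFromCodePage;
    GO
    -- 936 is the Windows code page for Simplified Chinese (GBK).
    SELECT dbo.ConvertFromCodePage(CAST(CustomerName AS VARBINARY(8000)), 936)
    FROM dbo.CN_CUSTOMER;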
I have an SQLite3 database. I also have an SQL Server database with the same structure. I need to export the data from SQLite and insert it into the SQL Server database.
The export from SQLite and the modification of the generated export need to be 100% scripted. Inserting into the SQL Server database will be done manually through SQL Server Management Studio.
I have a mostly good dump of the database through this answer here. I can modify most of the script as needed with sed.
The one thing I'm stuck on right now is that the SQLite database stores timestamps as the number of seconds since the UNIX epoch. The equivalent column in SQL Server is DATETIME. As far as I know, inserting an integer into a DATETIME column won't work.
Is there a way to specify that certain fields be converted a certain way upon dumping from SQLite? Meaning, specify that the integer fields be dumped as proper DateTime strings that SQL Server will understand?
Or, is there something I can run on the Linux command line that will somehow find these Integer timestamps and convert them?
EDIT: Anything that runs in a Bash script on Ubuntu is acceptable.
Three basic solutions: (1) modify the data before the dump; (2) manipulate the file after the dump; or (3) modify the data on import. Which you choose will depend on how much freedom you have to modify the schemas.
If you wish to do it in SQLite, I'd suggest adding text columns with the dates stored as needed for import to SQL Server, then ignore or remove the original columns on dump. The SQLite doc page for datetime() may help, as might answers to this question.
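A minimal sketch of that approach (the table and column names are assumptions):

    -- SQLite: add a text column and fill it with a 'YYYY-MM-DD HH:MM:SS'
    -- timestamp, which SQL Server's DATETIME will accept on insert.
    ALTER TABLE events ADD COLUMN created_at_text TEXT;
    UPDATE events SET created_at_text = datetime(created_at, 'unixepoch');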
Or, you can write a function in SQL Server that handles the import. Perhaps set it on an insert trigger.
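On the SQL Server side the conversion is just date arithmetic from the epoch. A sketch, assuming a staging table holding the raw integers (the same expression could live inside an insert trigger):

    -- Convert Unix epoch seconds to DATETIME during import.
    SELECT DATEADD(SECOND, created_at, '19700101') AS created_at
    FROM dbo.staging_events;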
Otherwise, a script that manipulates your dump file would work too. It sounds like you have a good handle on how to do this.
Looking at various installations of SQL Server, the 4th column returned by xp_msver is sometimes nvarchar and sometimes varchar. This appears to have no bearing on the version of SQL Server, since I see some copies from SQL Server 2000 up to 2012 return varchar, while others return nvarchar. It also does not seem to depend on Windows version or bitness.
Why does this happen? And is there a way to either configure the output or know beforehand what data type will be used?
Edit: I am using Visual FoxPro to query this information, and it has a number of issues dealing with Unicode. So, I need to know how to handle the data and convert it to ANSI/single-byte encoding, if it isn't already. I understand the limitations of ANSI/single-byte; the loss of data is considered acceptable here.
sqlexec(connhandle, "exec xp_msver")
If ADO were in the picture, I would just use the data type properties inherent in RecordSets, but I am limited to FoxPro and its own cursor functionality. When pulled into FoxPro, the Character_Value column (the 4th column in question here) is considered a MEMO data type, which is a fancy way of saying a string (of some kind, or even binary data) possibly longer than 255 characters. It is really a catch-all for long strings and any data types that FoxPro cannot handle, which is extremely unhelpful in this case.
There is a Microsoft KB article that explicitly uses xp_msver from FoxPro and states that SQL Server 7.0 and greater always returns Unicode for the stored procedure, but this is not always the case. Also, since xp_msver is a stored procedure, sp_help and sp_columns aren't of any use here.
In all honesty, I would prefer using SERVERPROPERTY(), but it is not supported in SQL Server 7.0, which is a requirement. I would prefer not to overcomplicate the code by having different queries for different versions of SQL Server. Also, using @@VERSION is not a good option, since it would require parsing the text, would be prone to bugs, and doesn't provide all the information I need.
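One workaround I am considering is to capture the output into a temp table and force the type myself, so the caller always sees a single-byte string. This is a sketch based on xp_msver's documented four-column shape:

    -- Normalize xp_msver so Character_Value is always VARCHAR.
    CREATE TABLE #msver
    (
        [Index]         INT,
        [Name]          VARCHAR(64),
        Internal_Value  INT,
        Character_Value NVARCHAR(512)
    );
    INSERT INTO #msver EXEC master.dbo.xp_msver;
    SELECT [Index], [Name], Internal_Value,
           CAST(Character_Value AS VARCHAR(512)) AS Character_Value
    FROM #msver;
    DROP TABLE #msver;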
I inherited a SQL Server database that is not well designed. (A consulting company came in to do the project and left without completing it.)
The main issues I have with this database are:
Data types: a lot of tinyint and text types.
Tables are not normalized: some of the keys are names instead of sequential IDs.
A lot of tables that I am not sure are being used.
A lot of stored procedures that I am not sure are being used.
Badly named tables and stored procs.
I also inherited the asp.net application that runs against this database.
I would like to clean this database up. I understand that changing the data types will have to happen table by table. For getting rid of all the extra tables and stored procs, what is the easiest way to do so?
Any other tips to make it cleaner and smaller are appreciated.
I also want to mention that I have Red Gate tools installed (if that helps).
Thank you
Check out SQL Server Data Tools; it allows you to create a project from a live database. One of the things you can do there is right-click 'Find Usages' on tables, views, and functions.
So long as the previous developer used stored procedures and views rather than querying the tables directly, it should find the references that way, without you having to tear through the project.
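A complementary check, using the catalog views instead of SSDT (a sketch; sys.sql_expression_dependencies requires SQL Server 2008 or later, and it cannot see dynamic SQL or references made only from the application):

    -- List procedures, tables, and views that nothing else in the
    -- database references.
    SELECT o.name, o.type_desc
    FROM sys.objects AS o
    WHERE o.type IN ('P', 'U', 'V')   -- procs, user tables, views
      AND NOT EXISTS
          (
              SELECT 1
              FROM sys.sql_expression_dependencies AS d
              WHERE d.referenced_id = o.object_id
          );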
Also, to find stored procedures that are not used, put some basic logging at the top of each stored procedure (sketched below). After X days, those that haven't shown up in your log table are likely safe to remove; otherwise, a tedious search through your .NET code will find them.
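A minimal sketch of the logging idea (the log table name and shape are assumptions):

    -- One-time setup: a table to record each call.
    CREATE TABLE dbo.ProcUsageLog
    (
        ProcName sysname  NOT NULL,
        CalledAt DATETIME NOT NULL DEFAULT GETDATE()
    );
    GO
    -- Paste at the top of every stored procedure:
    INSERT INTO dbo.ProcUsageLog (ProcName)
    VALUES (OBJECT_NAME(@@PROCID));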
I'm not a good SQL programmer; I've got only the basics. But I've heard of this BCP thing for fast data loading. I've searched the internet, and it seems to be a command-line-only utility, not something you can use in code.
The thing is, I want to be able to make very fast inserts and updates in a SQL Server 2008 database. I would like to have a function in the database that would accept:
The name of the table I want to execute an insert/update operation against
The names of the columns I'll be feeding data to
The data in a CSV format or something that SQL can read stupid-fast
A flag indicating whether the function should perform an insert or update operation
This function would then read the CSV string and generate the necessary code for inserting into or updating the table.
I would then write code in C# to call that function passing it the table name, column names, a list of objects serialized as a CSV string and the insert/update flag.
As you can see, this is intended to be both fast and generic, suitable for any project dealing with large amounts of data, and thus a candidate for my company's framework.
Am I thinking right? Is this a good idea? Can I use that BCP thing, and is it suitable for every case?
As you can see, I need some directions on this... thanks in advance for any help!
In C#, look at SqlBulkCopy. It's what SSIS uses in the background.
For true bcp/BULK INSERT, you'd need bulkadmin rights, which may not be allowed.
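If bulkadmin is available, a minimal sketch of the staging-then-merge pattern (every table, column, and file name here is an assumption):

    -- Bulk-load the CSV into a staging table first...
    BULK INSERT dbo.Customers_Staging
    FROM 'C:\data\customers.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

    -- ...then MERGE (SQL Server 2008+) handles the insert-or-update
    -- flag from the question in a single set-based pass.
    MERGE dbo.Customers AS tgt
    USING dbo.Customers_Staging AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED THEN
        UPDATE SET tgt.CustomerName = src.CustomerName
    WHEN NOT MATCHED THEN
        INSERT (CustomerID, CustomerName)
        VALUES (src.CustomerID, src.CustomerName);

The staging table keeps the load itself fast, while the MERGE gives you set-based upsert semantics instead of generating per-row statements.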
Have you considered using SQL Server Integration Services (SSIS)? It's designed to do exactly what you describe. It is very fast, you can insert data on a transactional basis, you can set it up to run on a schedule, and much more.
I am considering migrating to Firebird. To get a "quick start", I downloaded the trial of a conversion tool (DBConvert) and tried it.
I just picked a tool at random; this one doesn't convert procedures, functions, and triggers (I don't think this is a limitation of the trial, since there is no explicit reference to stored procedures, functions, or triggers in the link above).
Anyway, when trying that tool I got the message:
The DB cannot be converted successfully because some FK names are too long.
This is because some tables have FKs whose names are longer than 32 characters.
Is this a real Firebird limit, or is it possible to overcome it somehow? (Of course, renaming the FKs is an extreme option, because it is extra work.)
Anyway, how do I convert a SQL Server database fully to Firebird? Is there a valid tool? Has anyone succeeded in converting a non-trivial database?
You can use tools like InterBase DataPump, and you can also check this.
For the FK name length: you will have to rename them :(
You can also try doing this with Database Workbench.
I doubt you'll be able to just "convert" it all. Firebird/InterBase and Microsoft SQL Server use quite different data types, their SQL dialects are somewhat different, and so forth.
You could probably get a 60-80% conversion, but the rest will always require manual effort.
If your conversion fails just because of those FK constraints: drop those in SQL Server before the conversion, and re-create them on the Firebird side after conversion.
Or: drop them in SQL Server and re-create them with shorter names, and then do the conversion.
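If you take the renaming route, here is a sketch that generates the rename statements on the SQL Server side (the FK_<object_id> scheme is only an assumption to guarantee short, unique names; classic Firebird versions limit identifiers to 31 characters):

    -- Generate one sp_rename statement per over-long FK name.
    SELECT 'EXEC sp_rename N''' + SCHEMA_NAME(fk.schema_id) + '.' + fk.name
           + ''', N''FK_' + CAST(fk.object_id AS VARCHAR(10)) + ''', ''OBJECT'';'
    FROM sys.foreign_keys AS fk
    WHERE LEN(fk.name) > 31;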
I know two more tools that might help you with the conversion: the ESF Database Migration Toolkit, and DeZign for Databases.