I have observed that the Burmese language is shown as "boxes" at record level in SQL Server 2012. Both fields shown in the screenshot are of nvarchar type with more than the required length. Is this expected? If so, why?
If you are storing it in nvarchar, then it is OK.
You can test it by copying and pasting one of the row values into Google Translate with Burmese selected as the source language; if you see the text rendered in Burmese characters, then it is OK.
It is related to the editor.
You have to install a correct Burmese font, such as Zawgyi or Myanmar-1.
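If you want to confirm the stored data itself is intact rather than relying on a font, one quick check (a Python sketch; the sample string is an assumption standing in for a value fetched from the nvarchar column) is to verify the code points fall in the Myanmar Unicode block (U+1000 to U+109F):

```python
# Sketch: confirm retrieved text really is Burmese script, not mojibake.
def is_myanmar_block(text):
    """True if every non-space character is in the Myanmar block U+1000-U+109F."""
    return all(0x1000 <= ord(ch) <= 0x109F for ch in text if not ch.isspace())

sample = "\u1019\u103c\u1014\u103a\u1019\u102c"  # "မြန်မာ" ("Myanmar")
print(is_myanmar_block(sample))  # True
```

If this holds for your data, the nvarchar storage is fine and only the display font in the editor needs fixing.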
I have a table with an nvarchar(MAX) column containing text formatted as JSON of varying length, around 15,000 characters; 2 records have a length of 53,000 in that column, and these are not shown when using the "Edit Top 200" command, but are shown by a SELECT query in a standard query window.
Is there a limit on the column length that can be read and edited using the "Edit Top 200" command in SQL Server Management Studio? I'm pretty sure there's a limit, but I can't find references to it in the documentation, and I'd also expect some message in the field, a placeholder like "your data is too long to be shown here". Having the field just empty can be a little confusing.
SQL Server 14.0.2037; Management Studio 15.0.18390.0
Thanks
Using Visual Studio 2015 Enterprise
I'm trying to change a few values inside the Script Transformation Editor, but they are grayed out and I can't modify them.
Here I'm trying to change ScriptLanguage to Microsoft Visual Basic:
Here I would like to change the length of this HashValue column
I've tried restarting Visual Studio as well as removing the script and adding it back, to no avail.
EDIT: I figured out the second one by changing the data type to DT_WSTR.
First Issue
Note: once you have accessed the script editor window, you cannot change its language.
But you can change your scripts' default language from the Visual Studio options. All you have to do is go to Tools and select Options.... Under the Business Intelligence Designers option, select Integration Services Designer and change the script language to whichever you prefer as your default.
Second Issue
You cannot change the Length property of a column of an integer type:
DT_I1 is a signed single-byte integer (-128 to 127); the SQL tinyint data type (0 to 255) actually maps to DT_UI1
DT_I2 corresponds to the SQL smallint data type (-2^15 (-32,768) to 2^15-1 (32,767))
DT_I4 corresponds to the SQL int data type (-2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647))
DT_I8 corresponds to the SQL bigint data type (-2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807))
Only the Length of DT_STR and DT_WSTR columns can be changed.
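To make those ranges concrete, here is a small sketch (Python; type names per Microsoft's SSIS data-type documentation) encoding them as a lookup table:

```python
# SSIS fixed-width integer types and their value ranges.
# DT_UI1 (unsigned, 0-255) is the type that maps to SQL Server tinyint.
SSIS_INT_RANGES = {
    "DT_UI1": (0, 255),            # SQL Server tinyint
    "DT_I2": (-2**15, 2**15 - 1),  # smallint
    "DT_I4": (-2**31, 2**31 - 1),  # int
    "DT_I8": (-2**63, 2**63 - 1),  # bigint
}

def fits(ssis_type, value):
    """Return True if value fits in the given SSIS integer type."""
    lo, hi = SSIS_INT_RANGES[ssis_type]
    return lo <= value <= hi

print(fits("DT_I4", 2_147_483_647))  # True
print(fits("DT_I2", 40_000))         # False
```

Because the width is fixed by the type itself, there is nothing for a Length property to control, which is why the field is grayed out.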
MSDN articles about SSIS and SQL data types:
https://msdn.microsoft.com/en-us/library/ms141036(v=sql.120).aspx
https://msdn.microsoft.com/en-us/library/ms187752.aspx
https://msdn.microsoft.com/en-us/library/ms187745.aspx
Script Component Language
The Script Component's ScriptLanguage property should generally be editable, UNTIL you have used the 'Edit Script...' dialog (since this builds up the backing project, which can't be converted automatically). Try creating a new Script Component and editing this value first; I was not able to replicate it being disabled from the start with my copy of VS 2015.
Data Type Properties
Data type properties are controlled mainly by the selected DataType. In this case, you have a four-byte signed integer (DT_I4), which doesn't have any other settings. Other data types have different properties, for example:
DT_STR (string) can set Length and CodePage (character set),
DT_WSTR (Unicode string) can only set Length,
and DT_NUMERIC can set Scale and Precision.
Here is an example CSV file:
category,fruits,cost
'Fruits','Apple,banana,lemon','10.58'
When I import this CSV into SQL Server 2014
by clicking the database in "Object Explorer" => Tasks => Import Data,
no matter how I play around with the column delimiter options, row 2 always becomes
5 columns (Fruits, Apple, banana, lemon, 10.58) instead of the desired 3 columns
('Fruits', 'Apple,banana,lemon', '10.58'). (So I want 'Apple,banana,lemon' to be in one column.)
The solution in How do I escape a single quote in SQL Server? doesn't work. Could any guru enlighten me? Python, Linux bash, SQL, or simple editor tricks are welcome! Thank you!
No matter how I play around with column delimiter options
That's not the option you need to play with - it's the Text Qualifier:
And it now imports easily.
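The same fix can be reproduced outside the wizard; here is a Python sketch using the standard csv module, with the single quote set as the text qualifier (quotechar):

```python
import csv
import io

# Sample data from the question; the single quote is the text qualifier,
# so the commas inside 'Apple,banana,lemon' stay in one field.
data = "category,fruits,cost\n'Fruits','Apple,banana,lemon','10.58'\n"

rows = list(csv.reader(io.StringIO(data), quotechar="'"))
print(rows[1])  # ['Fruits', 'Apple,banana,lemon', '10.58']
```

The Text Qualifier setting in the Import Data wizard plays exactly the same role as quotechar here.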
I have searched for this specific solution, and while I have found similar queries, I have not found one that solves my issue. I am manually importing a tab-delimited text file of data that contains international characters in some fields.
This is one such character: Exhibit Hall C–D
It's either an em dash or an en dash between the C and D. It copies and pastes fine, but when the data is taken into SQL Server 2000, it ends up looking like this:
Exhibit Hall C–D
The field is nvarchar and, like I said, I am doing the import manually through Enterprise Manager. Any ideas on how to solve this?
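The garbled text above is a classic UTF-8/Windows-1252 mismatch; a quick Python sketch reproduces it exactly:

```python
# An en dash (U+2013) encoded as UTF-8 but decoded as Windows-1252
# produces the three-character sequence seen in the question.
garbled = "\u2013".encode("utf-8").decode("cp1252")
print(garbled)  # –
```

In other words, the import read the file's UTF-8 bytes as if they were single-byte Windows-1252 characters.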
The problem is that the encoding of the import file and what SQL Server expects are mismatched. The following approach worked for me in SQL Server 2000, importing into a database with the default collation (SQL_Latin1_General_CP1_CI_AS):
Open the .csv/.tsv file with the free text editor Notepad++, and ensure that special characters appear normal to start with (if not, try Encoding|Encode in...)
Select Encoding|Convert to UCS-2 Little Endian
Save as a new .csv/.tsv file
In SQL Server Enterprise Manager, in the DTS Import/Export Wizard, choose the new file as the data source (source type: Text File)
If not automatically detected, choose File type: Unicode (in preview on this page, the unicode characters will still look like black blocks)
On the next page, Specify Column Delimiter, choose the correct delimiter. Once chosen, Unicode characters should appear correctly in the Preview pane
Complete import wizard
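The Notepad++ conversion above can also be scripted; here is a Python sketch (file names are assumptions) that re-encodes a UTF-8 file as UTF-16 little-endian with a byte-order mark, which the DTS wizard detects as a Unicode text file:

```python
# Create a small sample tab-delimited file (assumed UTF-8 input).
with open("input.tsv", "w", encoding="utf-8") as f:
    f.write("Exhibit Hall C\u2013D\t1\n")

# Re-encode as UTF-16 LE ("UCS-2 Little Endian" in Notepad++ terms).
with open("input.tsv", "r", encoding="utf-8") as src, \
        open("input_utf16.tsv", "w", encoding="utf-16-le") as dst:
    dst.write("\ufeff")  # byte-order mark so readers detect the encoding
    dst.write(src.read())
```

Point the wizard at the re-encoded file and the special characters should survive the import.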
I would try using the bcp utility ( http://technet.microsoft.com/en-us/library/ms162802(v=sql.90).aspx ) with the -w parameter.
You may also want to check the text encoding of the input file.
I am working with SQL Server 2008. My task is to investigate an issue where FTS cannot find the right results for Thai.
First, I have a table with FTS enabled on the column 'ItemName', which is nvarchar. The catalog is created with the Thai language. Note that Thai is one of the languages that doesn't separate words with spaces, so 'หลวง' 'พ่อ' 'โสธร' are written in a sentence as 'หลวงพ่อโสธร'.
In the table, there are many rows that include the word (โสธร); for example row#1 (ItemName: 'หลวงพ่อโสธร')
On the webpage, I try to search for 'โสธร' but SQL Server cannot find it.
So I try to investigate it by trying the following query in SQL Server:
select * from sys.dm_fts_parser(N'"หลวงพ่อโสธร"', 1054, 0, 0)
...to see how the words are broken. The first parameter is the text to be broken. The second parameter (1054) is the LCID for Thai, which selects the Thai word breaker. Here is the result:
row#1 (display_item: 'ງลวง', source_item: 'หลวงพ่อโสธร')
row#2 (display_item: 'พຝโส', source_item: 'หลวงพ่อโสธร')
row#3 (display_item: 'ธร', source_item: 'หลวงพ่อโสธร')
Notice that the first and second rows contain wrong display_items: 'ງ' in 'ງลวง' isn't even a Thai character, and 'ຝ' in 'พຝโส' is not a Thai character either.
So the question is: where did those alien characters come from? I guess this is why I cannot search for 'โสธร': the word breaker is broken and is keeping the wrong characters in the index.
Please help!
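One way to confirm that those characters really are alien (a Python sketch using the standard unicodedata module): the Thai block is U+0E00 to U+0E7F, and the two suspicious code points fall in the Lao block (U+0E80 to U+0EFF):

```python
import unicodedata

# The two suspicious characters from the word-breaker output.
suspects = "\u0e87\u0e9d"  # 'ງ' and 'ຝ'
for ch in suspects:
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))

# Both are outside the Thai block (U+0E00-U+0E7F).
print(all(ord(ch) > 0x0E7F for ch in suspects))  # True
```

So the word breaker is emitting Lao letters for Thai input, which is consistent with the indexing language being misconfigured.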
This may be due to a different dialect of Thai being selected when the index was built.
From the FTS properties, check which language/culture is selected.