A multi-user program, written in VB6, saves scanned pictures and displays them in an Image control. Users can scan pictures, and the same or other users can view them. Pictures are saved in a Text column (datatype = Text) of SQL Server. Saving and showing pictures works fine as long as Control Panel -> Region -> Administrative -> Language for non-Unicode programs (call it non-uni-lang) is English (United States), or probably whenever it is the same on both the scanners' and the viewers' PCs.
When the viewers' non-uni-lang is different, they can't view the pictures; the pictures come through corrupted.
I have noticed that the Recordset receives different data on machines with different non-uni-lang settings. The Base64 equivalent of the saved Text is the same on PCs with different non-uni-lang settings, but decoding it back to binary yields different results.
The GetACP API can be used to get the current value, but I couldn't find any API to change it. There is a registry key, but it requires a reboot to take effect.
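To illustrate the observation above, here is a minimal C# sketch (the program in question is VB6, but the mechanism is the same; the byte values are hypothetical) of how a round trip through two different ANSI code pages corrupts binary data:

using System;
using System.Text;

class CodePageDemo
{
    static void Main()
    {
        // On .NET Core/5+, code-page encodings first need:
        // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
        byte[] original = { 0xFF, 0xD8, 0xFF, 0xE0 }; // start of a JPEG header

        Encoding enc1252 = Encoding.GetEncoding(1252); // Western (English US)
        Encoding enc1256 = Encoding.GetEncoding(1256); // Arabic

        // What effectively happens: the bytes are interpreted as text under
        // one code page and converted back to bytes under another.
        string asText = enc1252.GetString(original);
        byte[] corrupted = enc1256.GetBytes(asText);

        Console.WriteLine(BitConverter.ToString(original));  // FF-D8-FF-E0
        Console.WriteLine(BitConverter.ToString(corrupted)); // differs: e.g. 0xFF (ÿ)
        // has no equivalent in code page 1256, so that byte is replaced and lost.
    }
}

Any byte whose cp1252 character has no counterpart in the viewer's code page is silently replaced, which matches the corruption described above.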
Questions:
Is there any way to change non-uni-lang programmatically, without a reboot?
Is there any way, independent of non-uni-lang, to decode Base64 back to binary?
Edit 1
Base64 Encoding
select cast('' as xml).value('xs:base64Binary(sql:column("mytext"))', 'nvarchar(max)')
from (
    select CAST(mytext AS varbinary(MAX)) mytext from (
        select CAST(ImageData AS nvarchar(MAX)) mytext from ImageTable where ImageID = 123
    ) t2
) t1
Base64 Decoding (Code by CVMichael)
https://www.vbforums.com/showthread.php?197891-CryptBinaryToString-(new-BASE-64-function-so-RESOLVED)&p=1166849#post1166849
Edit 2
To eliminate SQL Server as a factor, I copied the Base64 string into a text file, read it as a byte array in VB6, decoded it and performed several conversions; it displayed the image on an English PC but failed on an Arabic PC. Please download the working code from https://drive.google.com/file/d/1Jv0ELXvxhd9E0V6RhiqrexVyiunG9UE1/view?usp=sharing
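For reference, Base64 text is pure ASCII, so a decode path that never consults the ANSI code page is possible. Below is a minimal sketch of the idea in C# (file names are hypothetical); the same approach should carry over to VB6 by reading the file in binary mode and decoding with an API such as CryptStringToBinary:

using System;
using System.IO;
using System.Text;

class Base64Decode
{
    static void Main()
    {
        // Every Base64 character is below 0x80, so reading the file as ASCII
        // never consults the machine's ANSI code page.
        string base64 = File.ReadAllText("image.b64", Encoding.ASCII);

        // Convert.FromBase64String is a pure character-to-bits mapping,
        // independent of any locale or code-page setting.
        byte[] imageBytes = Convert.FromBase64String(base64.Trim());

        File.WriteAllBytes("image.jpg", imageBytes);
    }
}

A classic VB6 pitfall in this area is StrConv(s, vbFromUnicode), which converts through the ANSI code page and therefore produces different bytes under different non-uni-lang settings.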
Related
I have a CSV in Polish that I want to get into SQL Server with SSIS.
I open it in Notepad++ and it says UTF-8.
If it doesn't actually say UTF-8-BOM in the status bar, then Notepad++ is only guessing at the encoding. Try selecting Encoding > Encode in UTF-8-BOM, save the file, then close and reopen it to confirm the change. After saving it with a BOM (Byte Order Mark), try importing it via SSIS again with the 65001 (UTF-8) code page setting and see if it works.
@AlwaysLearning
I converted the file as suggested above and it now shows UTF-8-BOM in the status bar. I saved it.
So here's the vicious circle:
a) When I choose UTF-8 for the CSV in SSIS, I can see my Polish letters properly in the preview, until I hit Run. Then I get this error:
Error at Data Flow Task [SQL Server Destination [9]]: The column "ColumnName" cannot be processed because more than one code page (65001 and 1252) are specified for it.
I get it for each column.
b) When I change the code page in the connection manager to 1252, I can immediately see in the preview that my Polish letters are lost, but running it works like a charm and I get no errors.
Here's what I've tried:
Changing the code page to 1250, 65001, etc.
Ticking Unicode
Changing the Locale to Polish, Polish (Poland), English
Googling
Searching Stack Overflow
Posting this question to Stack Overflow
I have a large string in my PostgreSQL database. It's a Base64-encoded MP3, and I have to select the column containing that large string and get all the data with one select. If I write a normal select
SELECT * FROM public.song_data WHERE id=1;
it returns just 204 kB of that string, although the string is 2.2 MB.
DataGrip likewise shows me just 204 kB of the string. Is there a way to get all the data with just one select?
That's strange. Are you sure your data was not trimmed somewhere? You can use the length function to check the actual size:
postgres=# select length('aaa');
┌────────┐
│ length │
╞════════╡
│ 3 │
└────────┘
(1 row)
Two MB is nothing for Postgres, but some clients (or protocols) can have problems with it. Sometimes it is necessary to use the lo_import and lo_export functions as a workaround for client/protocol limits. For selecting data from a table you have to use a SELECT statement; there is no other way. Theoretically, you can transform any string into a large object and then download that large object from the database with the lo_export function, which uses the special large-object protocol. For 2 MB that should not be necessary, I think.
Please check whether your data was stored in Postgres correctly. The theoretical limit for text and varchar is 1 GB; the practical limit is lower, around 100 MB. Either way, that is significantly more than 2 MB.
Postgres has a special data type for binary data: bytea. It is output as hex by default, and Base64 encoding is supported too.
You can control the number of characters shown by using left with a negative value (which removes characters from the end) together with concat. Here data stands for the Base64 column:
SELECT left(concat(data, ' '), -1) FROM public.song_data WHERE id = 1;
The client was actually the problem: DataGrip can't print 2 MB of data. I tried other clients (DBeaver and HeidiSQL) and it was fine. Then I selected the 2 MB row with a PHP select and got all the data.
I have an annoying issue working with SQL Server DATETIME values in Excel 2013. The problem has been stated several times here on SO, and I know that the workaround is to reformat the DATETIME values in Excel by doing this:
Right click the cell
Choose Format Cells
Choose Custom
In the Type: input field enter yyyy-mm-dd hh:mm:ss.000
This works fine, BUT I loathe having to do this every time. Is there a permanent workaround, aside from creating macros? I need to maintain the granularity of the DATETIME values, so I cannot use SMALLDATETIME. I am currently using Microsoft SQL Server Management Studio 2008 R2 on a Win7 machine.
Thanks in advance.
-Stelio K.
Without any code it's hard to guess how the data gets from SQL Server to Excel. I assume it's not through a data connection, because Excel wouldn't have any issues displaying the data as dates directly.
What about data connections?
Excel doesn't support any kind of formatting, or any useful designer for that matter, when working with data connections alone. That functionality is provided by Power Query or the PivotTable designer. Power Query is integrated into Excel 2016 and available as a download for Excel 2010+.
Why you need to format dates
Excel doesn't preserve type information. Everything is a string or number and its display is governed by the cell's format.
Dates are stored as decimals using the OLE Automation format: the integer part is the number of days since the epoch (1899-12-30), and the fractional part is the time of day. This is why System.DateTime has the FromOADate and ToOADate methods.
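For instance, a quick round trip through those methods (values chosen for illustration):

using System;

class OADateDemo
{
    static void Main()
    {
        DateTime dt = new DateTime(2016, 5, 1, 13, 30, 0);

        double oa = dt.ToOADate();                  // 42491.5625: 42491 days after
        Console.WriteLine(oa);                      // the epoch, plus 13.5/24 of a day

        Console.WriteLine(DateTime.FromOADate(oa)); // 2016-05-01 13:30:00
        Console.WriteLine(DateTime.FromOADate(0));  // 1899-12-30, the OA epoch
    }
}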
To create an Excel sheet with dates, you should set the cell format at the same time you generate the cell.
How to format cells
Doing this is relatively easy if you use the Open XML SDK or a library like EPPlus. The following example creates an Excel sheet from a list of customers:
static void Main(string[] args)
{
    var customers = new[]
    {
        new Customer("A", DateTime.Now),
        new Customer("B", DateTime.Today.AddDays(-1))
    };

    File.Delete("customers.xlsx");
    var newFile = new FileInfo(@"customers.xlsx");
    using (ExcelPackage pck = new ExcelPackage(newFile))
    {
        var ws = pck.Workbook.Worksheets.Add("Content");
        // This format string *is* affected by the user locale,
        // and so is "mm-dd-yy"!
        ws.Column(2).Style.Numberformat.Format = "m/d/yy h:mm";
        // That's all it takes to load the data
        ws.Cells.LoadFromCollection(customers, true);
        pck.Save();
    }
}
The code uses the LoadFromCollection method to load the list of customers directly, without dealing with individual cells; passing true generates a header row.
There are equivalent methods to load data from other sources: LoadFromDataTable, LoadFromDataReader, LoadFromText for CSV data, and even LoadFromArrays for jagged object arrays.
The weird thing is that specifying the m/d/yy h:mm or mm-dd-yy format uses the user's locale for formatting, not the US format! That's because these formats are built into Excel and treated as locale-dependent. In the list of date formats they are shown with an asterisk, meaning they are affected by the user's locale.
The reason for this weirdness is that when Excel moved to the XML-based XLSX format 10 years ago, it preserved the quirks of the older XLS format for backward-compatibility reasons.
When EPPlus saves the xlsx file it detects them and stores a reference to the built-in format ID (22 and 14 respectively) instead of storing the entire format string.
Finding Format IDs
The list of standard format IDs is shown in the NumberingFormat element documentation page of the Open XML standard. Excel originally defined IDs 0 (General) through 49.
EPPlus doesn't allow setting the ID directly. It checks the format string and maps only the formats 0-49, as shown in the GetBfromBuildIdFromFormat method of ExcelNumberFormat. In order to get ID 22, we need to set the Format property to "m/d/yy h:mm".
Another trick is to check the stylesheet of an existing workbook. An xlsx file is a zipped package of XML files that can be opened with any decompression utility; the styles are stored in the xl\styles.xml file.
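For instance, a few lines of C# will dump that stylesheet without opening Excel (the file name is hypothetical):

using System;
using System.IO;
using System.IO.Compression;

class StylesPeek
{
    static void Main()
    {
        // An .xlsx file is an ordinary zip package; the cell formats live
        // in the xl/styles.xml entry.
        using (ZipArchive zip = ZipFile.OpenRead("customers.xlsx"))
        using (var reader = new StreamReader(zip.GetEntry("xl/styles.xml").Open()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}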
I am updating a CGI application that accesses an MSSQL 2008 database containing customer data. The database is managed by a third-party application, so I cannot change the data structure.
One of the tables ('guests') contains a column 'mug_shot' of type 'Image'. The column contains a JPEG image of each guest. When I retrieve data from this column, it always appears to be in text format. For example, when I perform the following query:
my $mugshotQuery = "SELECT TOP 1 mug_shot FROM guests where guest_no = ?";
my $mugshotStatementHandle = $dbh->prepare($mugshotQuery);
$mugshotStatementHandle->execute($guest_number);
and fetch the data:
my $mugshotHash = $mugshotStatementHandle->fetchrow_hashref();
$mugshotHash->{mug_shot} contains a hexadecimal representation of the JPEG binary data. Here is a shortened example:
ffd8ffe000104a46494600010101004c004c0000ffe1004245786966000049492a000800000002001a01050001000000260000001b0105000100000030000000000000007d3b8c0440420f000000ed168e0440420f000000ffdb0043000302020302020303030304030304050805050404050a070706080c0a0c0c0b0a0b0b0d0e12100d0e110e0b0b1016101113141515150c0f171816141812141514ffdb00430103040405040509050509140d0b0d1414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414141414ffc00011080156010003012200021101031101ffc4001f0000010501010101010100000000000000000102030405060708090a0bffc400b5100002010303020403050504040000017d01020300041105122131410613516107227114328191a1082342b1c11552d1f02433627282090a161718191a25262728292a3435363738393a434445464748494a535455565758595a636465666768696a737475767778797a838485868788898a92939495969798999aa2a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9bac2c3c4c5c6c7c8c9cad2d3d4d5d6d7d8d9dae1e2e3e4e5e6e7e8e9eaf1f2f3f4f5f6f7f8f9faffc4001f0100030101010101010101010000000000000102030405060708090a0bffc400b51100020102040403040705040400010277000102031104052131061241510761711322328108144291a1b1c109233352f0156272d10a162434e125f11718191a262728292a35363738393a434445464748494a535455565758595a636465666768696a737475767778797a82838485868788898a92939495969798999aa2a3a4a5a6a7a8a9aab2b3b4b5b6b7b8b9bac2c3c4c5c6c7c8c9cad2d3d4d5d6d7d8d9dae2e3e4e5e6e7e8e9eaf2f3f4f5f6f7f8f9faffda000c03010002110311003f00bd3dc482790798ff0078ff0011f5a68b9931feb1ff0033493f133f7f98ff003a6923d39aed48f997e63fed1267fd63
Therefore, my attempt to display the image fails:
print STDOUT "Content-type: image/jpeg\n";
print STDOUT "Content-length: \n\n";
binmode STDOUT;
print STDOUT $mugshotHash->{mug_shot};
The browser reports that the image is invalid. Why is the data returned as text/hexadecimal instead of binary data, and what can I do to fetch the binary data?
There is a flag to return image data in binary format:
$dbh->{syb_binary_images} = 1;
After I set this flag, the images are returned in the correct format. For good measure, I also used the following to make sure that the images are not truncated:
$dbh->{LongTruncOK} = 0;
$dbh->do("set textsize 1000000");
I am using an Access database for one system and SQL Server for another. The data gets synced between these two systems.
The problem is that one of the fields in a table in the Access database is a Memo field containing double-byte text. When I read this data using a DataGridView in a Windows form, the text is displayed as ???.
Also, when data from this field is inserted into a SQL Server nvarchar(max) field, the non-English characters are inserted as ???.
How can I fetch data from memo field, convert its encoding to Unicode, so that it appears correctly in SQL server database as well?
Please help!!!
I have no direct experience with datagrid controls, but I have noticed that some database values are not displayed correctly through MS-Access controls. Uniqueidentifiers, for example, show up as '?????' when displayed on a form. You can try this in the debug window, where the "myIdField" control is bound to the "myIdField" field (a uniqueidentifier) of the underlying recordset:
? screen.activeForm.recordset.fields("myIdField")
{F0E3C822-BEE9-474F-8A4D-445A33F363EE}
? screen.activeForm.controls("myIdField")
????
Here is what the Access Help says on this issue:
The Microsoft Jet database engine stores GUIDs as arrays of type Byte. However, Microsoft Access can't return Byte data from a control on a form or report. In order to return the value of a GUID from a control, you must convert it to a string. To convert a GUID to a string, use the StringFromGUID function. To convert a string back to a GUID, use the GUIDFromString function.
So if you are extracting values from controls to update a table (either directly or through a recordset), you might face similar issues ...
One solution would be to update the data directly from the recordset's original value. Another option would be to open the original recordset with a query containing the necessary conversion instructions, so that the field is displayed correctly through the control.
What I usually do in a similar situation, where I have to manipulate uniqueidentifier fields from multiple data sources (MS-Access and SQL Server, for example), is to 'standardize' these fields as text in the recordsets. The recordsets are then built with queries such as:
SQL Server
"SELECT convert(nvarchar(36),myIdField) as myIdField, .... FROM .... "
MS-Access
"SELECT stringFromGUID(myIdField) as myIdField, .... FROM .... "
I solved this issue by converting the encoding as follows:
// Define the Windows-1252, Big5 and UTF-16 encodings
System.Text.Encoding enc1252 = System.Text.Encoding.GetEncoding(1252);
System.Text.Encoding encBig5 = System.Text.Encoding.GetEncoding(950);
System.Text.Encoding encUTF16 = System.Text.Encoding.Unicode;

// 'note' holds Big5 bytes that were mis-decoded as Windows-1252,
// so encoding it back with 1252 recovers the original Big5 bytes.
byte[] arrByte1 = enc1252.GetBytes(note);

// Reinterpret those bytes as Big5 and convert them to UTF-16.
byte[] arrByte2 = System.Text.Encoding.Convert(encBig5, encUTF16, arrByte1);
string convertedText = encUTF16.GetString(arrByte2);
return convertedText;
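For context, here is a sketch of how that conversion might be wired up when reading the Memo field through OleDb. The connection string, table and column names are my assumptions, and the snippet above is assumed to be wrapped in a ConvertMemoText method:

using System;
using System.Data.OleDb;
using System.Text;

class MemoReader
{
    static void Main()
    {
        // Hypothetical connection string and schema.
        string connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\legacy.accdb";

        using (var conn = new OleDbConnection(connStr))
        using (var cmd = new OleDbCommand("SELECT MemoField FROM MyTable", conn))
        {
            conn.Open();
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    string raw = reader.GetString(0);   // arrives mis-decoded
                    Console.WriteLine(ConvertMemoText(raw));
                }
            }
        }
    }

    // The conversion from the answer above, wrapped as a method.
    static string ConvertMemoText(string note)
    {
        Encoding enc1252 = Encoding.GetEncoding(1252);
        Encoding encBig5 = Encoding.GetEncoding(950);
        Encoding encUTF16 = Encoding.Unicode;

        byte[] arrByte1 = enc1252.GetBytes(note);
        byte[] arrByte2 = Encoding.Convert(encBig5, encUTF16, arrByte1);
        return encUTF16.GetString(arrByte2);
    }
}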
Thank you all for pitching in!