Delphi stream to/from database with FireDAC - sql-server

Originally I want to save/retrieve a report from FastReport, which uses SaveToStream/LoadToStream for this purpose. I use RAD Studio XE6 (Update 1).
In the database I have a table Reports with an index field 'StVn' of type int and a field 'Definition' of type ntext. The database is MSSQL, and for saving the report I use:
FDCommand.CommandText.Text := 'UPDATE Reports SET Definition = :pDefinition WHERE StVn=1';
FDCommand.Params.ParamByName('pDefinition').LoadFromStream(MyStream, ftWideMemo);
FDCommand.Execute;
and for retrieving:
FDQuery.SQL.Text := 'SELECT * FROM Reports WHERE StVn=1';
FDQuery.Open();
MyStream := FDQuery.CreateBlobStream(FDQuery.FieldByName('Definition'), bmRead);
This worked for some short reports, but for any real one saving/restoring corrupts the report definition.
So I made a test case on a new form with just a Memo and tried to save/restore it with the same data access setup (FDConnection, FDCommand, FDQuery) and the following code:
procedure TForm1.BMemoSaveClick(Sender: TObject);
var
  TmpStream: TStream;
begin
  TmpStream := TMemoryStream.Create;
  Memo1.Lines.SaveToStream(TmpStream);
  ShowMessage(IntToStr(TmpStream.Size));
  FDCommand1.Params.Clear;
  FDCommand1.CommandText.Text := 'UPDATE Reports SET Definition = :pDefinition WHERE StVn=1';
  FDCommand1.Params.ParamByName('pDefinition').LoadFromStream(TmpStream, ftWideMemo);
  FDCommand1.Execute();
  TmpStream.Free;
end;
procedure TForm1.BMemoLoadClick(Sender: TObject);
var
  TmpStream: TStream;
begin
  FDQuery.SQL.Text := 'SELECT * FROM Reports WHERE StVn=1';
  FDQuery.Open();
  TmpStream := FDQuery.CreateBlobStream(FDQuery.FieldByName('Definition'), bmRead);
  ShowMessage(IntToStr(TmpStream.Size));
  Memo1.Lines.LoadFromStream(TmpStream);
  TmpStream.Free;
end;
As you can see, I have inserted ShowMessage calls to see the stream size at saving and at retrieving. If I save just the default text 'Memo1', I get a length of 7 at saving and a length of 14 at loading the memo (it is always doubled).
Any ideas what I am doing wrong?

Note, I have not verified the database saving/loading as I don't have MSSQL, but I'm pretty sure this is the cause:
By default, TStrings uses the default encoding (TEncoding.Default), which is most likely ANSI (in my case Windows-1252), hence the length for the memo text showing as 7 bytes: 5 for "Memo1" and two for the CRLF.
However, your column is of type NTEXT, which stores text as UTF-16. When you read it back you do so as a blob, and FireDAC does not perform any character conversion then, hence the doubling in size.
I would suggest you treat the report as binary data and store it as such, using an image type column (or varbinary(max) on newer SQL Server versions, where image is deprecated) and ftBlob instead of ftWideMemo.
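A minimal sketch of that approach, assuming the Definition column has been changed to varbinary(max) and keeping the component names from the question (SaveReport/LoadReport are illustrative helpers):

procedure SaveReport(AStream: TStream);
begin
  // Rewind defensively; after SaveToStream the position is at the end
  AStream.Position := 0;
  FDCommand.CommandText.Text := 'UPDATE Reports SET Definition = :pDefinition WHERE StVn=1';
  // ftBlob stores the bytes verbatim - no character-set conversion anywhere
  FDCommand.Params.ParamByName('pDefinition').LoadFromStream(AStream, ftBlob);
  FDCommand.Execute;
end;

function LoadReport: TStream;
begin
  FDQuery.SQL.Text := 'SELECT Definition FROM Reports WHERE StVn=1';
  FDQuery.Open;
  // Returns the stored bytes unchanged; hand this stream to FastReport's LoadFromStream
  Result := FDQuery.CreateBlobStream(FDQuery.FieldByName('Definition'), bmRead);
end;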

Related

IBM DB2 values displayed as utf-8 text

When I connect to the database (DB2) to check the values in the tables, values containing special chars are displayed as their raw UTF-8 text.
I expected instead to see the correct: Tükörfúrógép.
I am still able to handle the value properly, but is there any configuration in the DB that I am missing to display the value properly when checking the table?
More Info:
Connected to the DB with IntelliJ and also tried with DbVisualizer.
The following JDBC connection string was used in IntelliJ:
jdbc:db2://(...)?characterEncoding=UTF-8;
Tried both with the characterEncoding and without, getting the same results.
DB Version: v11 LUW
JDBC: com.ibm.db2.jcc -- db2jcc4 -- Version 10.5
Encoding being used: UTF-8
db2 "select char(value,10), char(name,10) from sysibmadm.dbcfg where
name like 'code%'"
1 2
---------- ---------- 1208 codepage UTF-8 codeset
2 record(s) selected.
UPDATE 1:
I was able to directly insert values with special chars into the database, so I am starting to think this is not a missing DB2 configuration but maybe a JDBC or other related issue.
For the given string Tükörfúrógép you should have the following hex representation in a UTF-8 database:
54C3BC6BC3B67266C3BA72C3B367C3A970
But you have the following instead, with repeating garbage bytes:
54C383C2BC6BC383C2B67266C383C2BA72C383C2B367C383C2A970
You may try to manually remove such byte sequences with the following statements, but it's better to understand the root cause of how this garbage got into the column.
VALUES REPLACE (x'54C383C2BC6BC383C2B67266C383C2BA72C383C2B367C383C2A970', x'83C2', '');
SELECT REPLACE (TOWN, x'83C2', '') FROM ...;
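In other words, the correct UTF-8 byte sequence was decoded with a single-byte code page somewhere along the way and then encoded to UTF-8 a second time (classic double encoding). A small sketch reproducing the pattern, written in Delphi like the main question on this page; the choice of ISO-8859-1 as the intermediate code page is an assumption (any Latin-1-like page yields the same bytes for these characters):

uses
  System.SysUtils;

procedure ShowDoubleEncoding;
var
  FirstPass, SecondPass: TBytes;
  Mojibake: string;
  Latin1: TEncoding;
begin
  // Correct encoding: 54 C3 BC 6B C3 B6 ... (the expected hex above)
  FirstPass := TEncoding.UTF8.GetBytes('Tükörfúrógép');
  Latin1 := TEncoding.GetEncoding(28591); // ISO-8859-1 (assumed)
  try
    // Misread the UTF-8 bytes as single-byte characters: 'TÃ¼kÃ¶rfÃºrÃ³gÃ©p'
    Mojibake := Latin1.GetString(FirstPass);
  finally
    Latin1.Free;
  end;
  // Encode the mojibake again: 54 C3 83 C2 BC 6B ... (the garbage hex above)
  SecondPass := TEncoding.UTF8.GetBytes(Mojibake);
end;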

Large Number Entry in Oracle APEX

I have a Number field in the DB and in Oracle APEX.
My issue is:
If users enter number data in the format "1.000.000,01", they get a character error saying the entry must be a number.
How can I solve this problem in the application layer? In the database layer there are some solutions, but in the application layer I have not found any solution so far.
In summary: I want to enter a number as 1.000.000,12 in the application and I want to see it in the same format.
Note: a procedure runs in the application to insert the data into the DB.
You can/should set an appropriate format mask, e.g. 999G999G990D00, where
G represents the thousands separator (a dot in your case)
D represents the decimal separator (a comma in your case)
But where do you set the NLS numeric characters (represented by G and D)? In APEX 20.2, they are set in:
Application Builder > Shared Components > Globalization Attributes > Security > Initialization PL/SQL code
In there you'll probably see what they are currently set to. Change those values, if necessary. For example:
begin
  execute immediate q'[alter session set nls_numeric_characters = ',.']';
  execute immediate q'[alter session set nls_date_format = 'dd.mm.yyyy hh24:mi:ss']';
end;

RODBC ERROR: 'Calloc' could not allocate memory

I am setting up a SQL Azure database. I need to write data into the database on a daily basis. I am using 64-bit R version 3.3.3 on Windows 10. Some of the columns contain text (more than 4000 characters). Initially, I imported some data from a csv into the SQL Azure database using Microsoft SQL Server Management Studio. I set up the text columns as ntext, because when I tried nvarchar the max was 4000 and some of the values got truncated even though they were only about 1100 characters long.
In order to append to the database, I first save the records in a temp table where I have predefined the varTypes:
varTypesNewFile <- c("Numeric", rep("NTEXT", ncol(newFileToAppend) - 1))
names(varTypesNewFile) <- names(newFileToAppend)
sqlSave(dbhandle, newFileToAppend, "newFileToAppendTmp", rownames = F, varTypes = varTypesNewFile, safer = F)
and then append them by using:
insert into mainTable select * from newFileToAppendTmp
If the text is not too long, the above does work. However, sometimes I get the following error during the sqlSave command:
Error in odbcUpdate(channel, query, mydata, coldata[m, ], test = test, :
'Calloc' could not allocate memory (1073741824 of 1 bytes)
My questions are:
How can I counter this issue?
Is this the format I should be using?
Additionally, even when the above works, it takes about an hour to upload about 5k records. Isn't that too long? Is this the normal amount of time it should take? If not, what could I do better?
RODBC is very old, and can be a bit flaky with NVARCHAR columns. Try using the RSQLServer package instead, which offers an alternative means to connect to SQL Server (and also provides a dplyr backend).

TClientDataset Widestring field doubles in size after reading NVARCHAR from database

I'm converting one of our Delphi 7 projects to Delphi XE3 because we want to support Unicode. We're using MS SQL Server 2008/R2 as our database server. After changing some database fields from VARCHAR to NVARCHAR (and the fields in the accompanying ClientDataSets to ftWideString), random crashes started to occur. While debugging I noticed some unexpected behaviour from TClientDataSet/DBExpress:
For an NVARCHAR(10) database column I manually create a TWideStringField in a ClientDataSet and set the 'Size' property to 10. The 'DataSize' property of the field tells me 22 bytes are needed, which is expected since TWideStringField's encoding is UTF-16, so it needs two bytes per character plus some space for storing the length. Now when I call 'CreateDataset' on the ClientDataSet and write the dataset to XML (using .SaveToFile), in the XML file the field is defined as
<FIELD WIDTH="20" fieldtype="string.uni" attrname="TEST"/>
which looks ok to me.
Now, instead of calling .CreateDataset I call .Open on the TClientDataSet so that it gets its data through the linked components -> TDataSetProvider -> TSQLDataSet (.CommandText = a simple select * from table) -> TSQLConnection. When I inspect the properties of the field in my watch list, Size is still 10 and DataSize is still 22. After saving to an XML file, however, the field is defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
...the width has doubled?
Finally, if I call .Open on the TClientDataSet without creating any field definitions in advance at all, the Size of the field will afterwards be 20 (incorrect!) and DataSize 42. After saving to XML, the field is still defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
Does anyone have any idea what is going wrong here?
Check the field type and its size at the TSQLDataSet component (the one before the DataSetProvider).
Size doubling may be the result of two implicit "conversions": first, the server provides NVARCHAR data which is stored into an ANSI string field (so every byte becomes a separate character); second, that value is stored into the ClientDataSet's field of type WideString and each character becomes 2 bytes (the size doubles).
Note that in prior versions of Delphi a string field size mismatch between the ClientDataSet's field and the corresponding Query/Command field did not result in an exception, but starting from one of the XE versions it often results in an AV. So you have to check string field sizes carefully during migration.
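A rough sketch of such a size check, assuming persistent fields on the ClientDataSet (component and routine names are illustrative):

uses
  Data.DB, Data.SqlExpr, Datasnap.DBClient, Vcl.Dialogs;

procedure CheckStringFieldSizes(ASource: TSQLDataSet; AClient: TClientDataSet);
var
  i: Integer;
  SrcDef: TFieldDef;
begin
  // Fetch the provider-side field definitions without opening the dataset
  ASource.FieldDefs.Update;
  for i := 0 to AClient.FieldCount - 1 do
    if AClient.Fields[i] is TWideStringField then
    begin
      SrcDef := ASource.FieldDefs.Find(AClient.Fields[i].FieldName);
      if SrcDef.Size <> AClient.Fields[i].Size then
        ShowMessageFmt('Size mismatch on %s: source=%d, client=%d',
          [AClient.Fields[i].FieldName, SrcDef.Size, AClient.Fields[i].Size]);
    end;
end;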
Sounds like the column datatype change has created unexpected issues for you. My suggestion is to:
1. back up the table (there are multiple ways of doing this; pick your poison, figuratively speaking),
2. delete the table,
3. recreate the table,
4. import the data from the old table into the newly created table.
See if that helps. SQL tables DO NOT like it when column datatypes get changed, and unexpected issues may arise from doing just that. So try that; worst case scenario, you have wasted maybe ten minutes of your time trying a possible solution.

convert memo field in Access database from double byte to Unicode

I am using an Access database for one system and SQL Server for another system. The data gets synced between these two systems.
The problem is that one of the fields in a table in the Access database is a Memo field in double-byte format. When I read this data using a DataGridView in a Windows Forms app, the text is displayed as ???.
Also, when data from this field is inserted into the SQL Server database's nvarchar(max) field, non-English characters are inserted as ???.
How can I fetch data from the memo field and convert its encoding to Unicode so that it appears correctly in the SQL Server database as well?
Please help!!!
I have no direct experience with data grid controls, but I have already noticed that some database values are not correctly displayed through MS Access controls. Uniqueidentifiers, for example, show up as '?????' when displayed on a form. You can try this in the debug window, where the "myIdField" control is bound to the "myIdField" field of the underlying recordset (a uniqueidentifier field):
? screen.activeForm.recordset.fields("myIdField")
{F0E3C822-BEE9-474F-8A4D-445A33F363EE}
? screen.activeForm.controls("myIdField")
????
Here is what the Access Help says on this issue:
The Microsoft Jet database engine stores GUIDs as arrays of type Byte. However, Microsoft Access can't return Byte data from a control on a form or report. In order to return the value of a GUID from a control, you must convert it to a string. To convert a GUID to a string, use the StringFromGUID function. To convert a string back to a GUID, use the GUIDFromString function.
So if you are extracting values from controls to update a table (either directly or through a recordset), you might face similar issues.
One solution would be to update the data directly from the recordset's original value. Another option would be to open the original recordset with a query containing the necessary conversion instructions, so that the field is correctly displayed through the control.
What I usually do in similar situations, where I have to manipulate uniqueidentifier fields from multiple data sources (MS Access and SQL Server, for example), is to 'standardize' these fields as text in the recordsets. Recordsets are then built with queries such as:
SQL Server
"SELECT convert(nvarchar(36),myIdField) as myIdField, .... FROM .... "
MS-Access
"SELECT stringFromGUID(myIdField) as myIdField, .... FROM .... "
I solved this issue by converting the encoding as follows:
//Define Windows-1252, Big5 and Unicode (UTF-16) encodings
System.Text.Encoding enc1252 = System.Text.Encoding.GetEncoding(1252);
System.Text.Encoding encBig5 = System.Text.Encoding.GetEncoding(950);
System.Text.Encoding encUTF16 = System.Text.Encoding.Unicode;
//Recover the raw bytes of the string that was misread as Windows-1252
byte[] arrByte1 = enc1252.GetBytes(note); //note = the string to be converted
//Reinterpret those bytes as Big5 and convert them to UTF-16
byte[] arrByte2 = System.Text.Encoding.Convert(encBig5, encUTF16, arrByte1);
string convertedText = encUTF16.GetString(arrByte2);
return convertedText;
Thank you all for pitching in!
