I'm saving some data to my database but there is one field which becomes negative when I save the data.
["invoice_id"]=> int(20210126075173)
This is taken from a var_dump() and is the number I'm trying to save. It looks completely normal, but when I look into the database I see the number
-1990019803
The thing is, when I run the code on my machine it works as intended and saves the number correctly, but when I run it on the actual server it stores that negative number. My local database is also a clone of the live database.
"invoice_id" is a varchar field; I tried changing it to INT or BIGINT, but that makes no difference.
Why did you change invoice_id from a varchar field to an int field? If it is a varchar, you can wrap the value in quotes ("") to save it into the database as a string.
You can do several things to prevent this from happening:
Make the database column an unsigned big integer with $table->unsignedBigInteger('column_name'); (a plain unsignedInteger is still 32-bit and too small for a value like 20210126075173).
Check the value before inserting it into the db, e.g. $id = (int) $data['invoice_id']; if ($id < 0) { $id *= -1; }
The difference between your local and remote server might be related to the database server (MySQL, PostgreSQL, ...) version or the system locale; I have not seen this before and am not sure what is causing it.
Make sure to validate the data before inserting it into the database to prevent unwanted values (which brings you back to the second option).
Why are you looking to cast ["invoice_id"]=> int(20210126075173) to an int when it seems like you're already giving it an integer?
You could try:
["invoice_id"] => 20210126075173
or, if you're casting a string to an int:
["invoice_id"] => (int) '20210126075173'
I have some VB code which loads data from a SQL Server into a local table (both databases are connected via ADODB) using this snippet:
adorec_local.Fields(str_array_fields(int_i, 1)) = adorec_server.Fields(str_array_fields(int_i, 2))
For example, I have a decimal value on the server like "100.50". When this value is transferred to the local table, the value in the table is shown as "10050", without the separator.
When I look at the value in the debug window, it is converted from "100.50" to "100,50", which seems correct to me (my locale uses a comma as the decimal separator). If I put this value directly into the local table, it works without any issue.
Any ideas what's the problem here?
Both fields(local and server) are defined as decimal(8,2).
Thank you in advance!
Edit: I tried the "Double" datatype in Access and it works with the correct values, but I want to keep the decimal datatype for consistency.
I'm trying to run an INSERT query, but it fails with an error about converting nvarchar to (null). Here's the code:
INSERT Runtime.dbo.History (DateTime, TagName, vValue)
VALUES ('2015-09-10 09:00:00', 'ErrorComment', 'Error1')
Error message:
Error converting data type nvarchar to (null).
The problem is at the vValue column.
column vValue(nvarchar, null)
The values inside vValue are placed by the program I'm using. I'm just trying to manually insert into the database.
My last post referenced the wrong column; I apologize.
After contacting Wonderware support I found out that INSERT is not supported on the vValue column by design. It's a string value, and updates are supposed to be carried out via the StringHistory table.
What is the type of the vValue column in the database?
If it's float, you should insert a number, not a string.
Casting "Error1" to FLOAT makes no sense: a float is a number, for example 1.15, 12.00, or 150.15. When the server tries to CAST "Error1" to float, it tries to turn the text "Error1" into a number and it can't, which is logical.
You should insert a number into that column.
I think I can help you with your problem since I've got a decent test environment to experiment with.
Runtime.dbo.History is not a table you can interact directly with, it is a View. In our case here the view is defined as:
select * from [INSQL].[Runtime].dbo.History
...Which I believe implies the History data you are viewing is from the Historian flat file storage itself, a Wonderware Proprietary system. You might see some success if you expand the SQL Server Management Studio's
Server Objects -> Linked Servers -> INSQL
...and play with the data there but I really wouldn't recommend it.
With that said, for what reason do you need to insert tag history? There might be other workarounds for the purpose you need.
I'm converting one of our Delphi 7 projects to Delphi X3 because we want to support Unicode. We're using MS SQL Server 2008/R2 as our database server. After changing some database fields from VARCHAR to NVARCHAR (and the fields in the accompanying ClientDatasets to ftWideString), random crashes started to occur. While debugging I noticed some unexpected behaviour by the TClientDataset/DbExpress:
For an NVARCHAR(10) database column I manually create a TWideStringField in a ClientDataset and set the 'Size' property to 10. The 'DataSize' property of the field tells me 22 bytes are needed, which is expected since TWideStringField's encoding is UTF-16, so it needs two bytes per character plus some space for storing the length. Now when I call 'CreateDataset' on the ClientDataset and write the dataset to XML (using .SaveToFile), the field is defined in the XML file as
<FIELD WIDTH="20" fieldtype="string.uni" attrname="TEST"/>
which looks ok to me.
Now, instead of calling .CreateDataset I call .Open on the TClientDataset so that it gets its data through the linked components ->TDatasetProvider->TSQLDataset (.CommandText = a simple select * from table)->TSQLConnection. When I inspect the properties of the field in my watch list, Size is still 10, Datasize is still 22. After saving to XML file however, the field is defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
..the width has doubled?
Finally, if I call .Open on the TClientDataset without creating any field definitions in advance at all, the Size of the field will afterwards be 20 (incorrect!) and DataSize 42. After saving to XML, the field is still defined as
<FIELD WIDTH="40" fieldtype="string.uni" attrname="TEST"/>
Does anyone have any idea what is going wrong here?
Check the field type and its size at the SQLCommand component (which comes before the DatasetProvider).
Size doubling may be the result of two implicit "conversions": first, the server provides NVARCHAR data which is stored into an ANSI string field (and every byte becomes a separate character); second, it is stored into the ClientDataset's field of type WideString and each character becomes 2 bytes (the size doubles).
Note that in prior versions of Delphi a string field size mismatch between the ClientDataset's field and the corresponding Query/Command field did not result in an exception, but starting from one of the XE releases it often results in an access violation (AV). So you have to check string field sizes carefully during migration.
Sounds like the column datatype change has created unexpected issues for you. My suggestion is to:
1. Back up the table (there are multiple ways of doing this; pick your poison, figuratively speaking).
2. Delete the table.
3. Recreate the table.
4. Import the data from the old table into the newly created table.
See if that helps. SQL tables do not like it when column datatypes get changed, and unexpected issues may arise from doing just that. So try it; worst case scenario, you have wasted maybe ten minutes of your time on a possible solution.
I try to read numeric/decimal/money columns from SQL Server via ODBC in the following manner:
SQL_NUMERIC_STRUCT decimal;
SQLGetData(hSqlStmt, iCol, SQL_C_NUMERIC, &decimal, sizeof(decimal), &indicator);
All these types are returned as SQL_NUMERIC_STRUCT structure, and I specify the SQL_C_NUMERIC type to SQLGetData() API.
In the database, column is defined as decimal(18, 4) for example, or money. But the problem is that the returned data has decimal.precision always set to 38 (max possible value) and decimal.scale always set to zero. So if the actual stored number is 12.65, the returned value in SQL_NUMERIC_STRUCT structure is equal to 12. So the fractional part is simply discarded, for a very strange reason.
What can I be doing wrong?
OK, this article explains the problem. The solution is so cumbersome that I decided to avoid using SQL_NUMERIC_STRUCT altogether, influenced by this post. Now I specify SQL_C_WCHAR (SQL_C_CHAR would do as well) and read the numeric/decimal/money columns directly as text strings. It looks like the driver does the conversion.
I have a column with bigint datatype in SQL Server 2005.
I want to store 0347 in it (the leading 0 should not be removed), i.e. the values must be at least four digits long, like 0034, 0007, 0423, 4445.
SQL Server will not store the leading 0 if you use a bigint.
You could use
select right('00000000'+ltrim(Str(<bigIntField>)),4) as DisplayVal
Change the '4' to the width you want to zero-fill the field to.
You can't store a formatted value like that in an integer field. You'd need to store as a VARCHAR.
Unless you have a very good reason, I'd keep it as you have it in the DB, but just format the number for display in the UI.
As far as I know, you can't store formatted data in an integer-type field.
Run sprintf or a similar formatting function over the data when you get it out of the database instead.