Inconsistent results with the same query in PostgreSQL?

I have a stored procedure that only sums and retrieves data with SELECT statements (by calling other stored procedures). There is no UPDATE or INSERT at all in these stored procedures, and all of the involved data is static. The main stored procedure returns a double precision value. The weird thing is that if I execute it a few times, usually after 3-4 runs, the results start to differ from the first 3-4 results, and then they stay that way no matter how many more times I execute it.
I thought it was a floating-point precision problem, but the results are all under 1 million and the difference can be as large as 4 to 8.

The problem is most likely here:
IF (FOUND) THEN
    vSaldo := vRecSearchSaldoAwal.balance;
    RAISE NOTICE 'Balance Adjustment: %', vRecSearchSaldoAwal.balance;
    ...
    SELECT (_dateStart - INTERVAL '1 day'), NULL, NULL, vSaldo, TX_STOCKADJ, 'Saldo Sebelumnya' INTO vRetval;
    RETURN NEXT vRetval;
ELSEIF (NOT FOUND) THEN
    vSaldo := SumQtyBeginningBal(_idproduct, _idGudang);
    ...
The expression ELSEIF (NOT FOUND) THEN looks at the FOUND value, which by then has been set by the last SELECT statement executed inside the IF/ELSEIF block, not by the original search. You probably want a plain ELSE instead.
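A minimal sketch of that change, keeping the names from the snippet above (the elided statements are reduced to comments; this illustrates the suggested fix, not the poster's full function):

IF FOUND THEN
    vSaldo := vRecSearchSaldoAwal.balance;
    RAISE NOTICE 'Balance Adjustment: %', vRecSearchSaldoAwal.balance;
    -- ...
    SELECT (_dateStart - INTERVAL '1 day'), NULL, NULL, vSaldo, TX_STOCKADJ, 'Saldo Sebelumnya' INTO vRetval;
    RETURN NEXT vRetval;
ELSE
    -- plain ELSE instead of ELSEIF (NOT FOUND), so the branch no longer
    -- depends on how FOUND may have been changed by later SELECTs
    vSaldo := SumQtyBeginningBal(_idproduct, _idGudang);
    -- ...
END IF;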

Related

Unable to pass empty string into non-null database field

I'm stumped on something which should be very straightforward. I have a SQL Server database, and I'm trying to update a non-nullable varchar or nvarchar field with an empty string. I know it should be possible, because an empty string '' is not the same thing as NULL. However, using a TADOQuery, it is not letting me do this.
I'm trying to update an existing record like so:
ADOQuery1.Edit;
ADOQuery1['NonNullFieldName']:= '';
//or
ADOQuery1.FieldByName('NonNullFieldName').AsString:= '';
ADOQuery1.Post; //<-- Exception raised while posting
If there is anything in the string, even just a single space, it saves just fine, as expected. But, if it is an empty string, it fails:
Non-nullable column cannot be updated to Null.
But it's not null. It's an empty string, which should work just fine. I swear I've passed empty strings many, many times in the past.
Why am I getting this error, and what should I do to resolve it?
Additional details:
Database: Microsoft SQL Server 2014 Express
Language: Delphi 10 Seattle Update 1
Database drivers: SQLOLEDB.1
Field being updated: nvarchar(MAX) NOT NULL
I can reproduce the problem you report with SS2014, the OLEDB driver and Seattle, using the code below, and also the difference in behaviour between a table whose column is created with MAX as the size and one created with a specific number (4096 in my case). I thought I would post this as an alternative answer because it not only shows how to investigate this difference systematically but also identifies why it arises (and hence how to avoid it in future). Please execute the code below as written, i.e. with the UseMAX define active.
Turning on "Use Debug DCUs" in the the project options before executing the code, immediately
reveals that the described exception occurs in Data.Win.ADODB at line 4920
Recordset.Fields[TField(FModifiedFields[I]).FieldNo-1].Value := Data
of TCustomADODataSet.InternalPost and the Debug evaluation window reveals that
Data at this point is Null.
Next, notice that
update jdtest set NonNullFieldName = ''
executes in an SSMS2014 query window without complaint (Command(s) completed successfully.), so it seems that the fact that Data is Null at line 4920 is what is causing the problem, and the next question is "Why?"
Well, the first thing to notice is that the form's caption is displaying ftMemo.
Next, comment out the UseMAX define, recompile and execute. Result: no exception, and notice that the form's caption is now displaying ftString.
And that's the reason: using a specific number for the column size means that the table metadata retrieved by the RTL causes the client-side Field to be created as a TStringField, whose value you can set with a string assignment statement. OTOH, when you specify MAX, the resulting client-side Field is of type ftMemo, which is one of Delphi's BLOB types, and when you assign string values to an ftMemo field you are at the mercy of code in Data.DB.pas, which does all the reading (and writing) of the record buffer through a TBlobStream. The problem is that, as far as I can see after a lot of experiments and tracing through the code, the way a TMemoField uses a BlobStream fails to properly distinguish between updating the field contents to '' and setting the field's value to Null (as in System.Variants).
In short, whenever you try to set a TMemoField's value to an empty string, what actually happens is that the field's state is set to Null, and this is what causes the exception in the question. AFAICS this is unavoidable, so no work-around is obvious, to me at any rate.
I have not investigated whether the choice between ftMemo and ftString is made by the Delphi RTL code or the MDAC (ADO) layer it sits upon; I would expect it is actually determined by the RecordSet that TAdoQuery uses.
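For what it's worth, one way to see the metadata difference on the server side is to query the catalog views; this is my own illustrative check against the JDTest table created by the code below, not part of the original answer:

-- max_length is reported as -1 for varchar(MAX)/nvarchar(MAX) columns,
-- which is what leads the provider to describe the column as a BLOB/memo
-- type, whereas varchar(4096) reports its actual byte length.
SELECT c.name, t.name AS type_name, c.max_length
FROM sys.columns c
JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.JDTest')
  AND c.name = N'NonNullFieldName';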
QED. Notice that this systematic approach to debugging revealed the problem and its cause with very little effort and zero trial and error, which is what I was trying to suggest in my comments on the question.
Another point is that this problem could be tracked down entirely without resorting to server-side tools, including the SSMS profiler. There was no need to use the profiler to inspect what the client was sending to the server, because there was no reason to suppose that the error returned by the server was incorrect. That confirms what I said about starting the investigation at the client side.
Also, using a table created on the fly with IfDefed SQL meant the problem could be isolated in a single step, simply by observing two runs of the app.
Code
uses [...] TypInfo;
[...]
implementation[...]
const
// The following consts are to create the table and insert a single row
//
// The difference between them is that scSqlSetUp1 specifies
// the size of NonNullFieldName as 'MAX', whereas scSqlSetUp2 specifies a size of 4096
scSqlSetUp1 =
'CREATE TABLE [dbo].[JDTest]('#13#10
+ ' [ID] [int] NOT NULL primary key,'#13#10
+ ' [NonNullFieldName] VarChar(MAX) NOT NULL'#13#10
+ ') ON [PRIMARY]'#13#10
+ ';'#13#10
+ 'Insert JDTest (ID, [NonNullFieldName]) values (1, ''a'')'#13#10
+ ';'#13#10
+ 'SET ANSI_PADDING OFF'#13#10
+ ';';
scSqlSetUp2 =
'CREATE TABLE [dbo].[JDTest]('#13#10
+ ' [ID] [int] NOT NULL primary key,'#13#10
+ ' [NonNullFieldName] VarChar(4096) NOT NULL'#13#10
+ ') ON [PRIMARY]'#13#10
+ ';'#13#10
+ 'Insert JDTest (ID, [NonNullFieldName]) values (1, ''a'')'#13#10
+ ';'#13#10
+ 'SET ANSI_PADDING OFF'#13#10
+ ';';
scSqlDropTable = 'drop table [dbo].[jdtest]';
procedure TForm1.Test1;
var
AField : TField;
S : String;
begin
// Following creates the table. The define determines the size of the NonNullFieldName
{$define UseMAX}
{$ifdef UseMAX}
S := scSqlSetUp1;
{$else}
S := scSqlSetUp2;
{$endif}
ADOConnection1.Execute(S);
try
ADOQuery1.Open;
try
ADOQuery1.Edit;
// Get explicit reference to the NonNullFieldName
// field to make working with it and investigating it easier
AField := ADOQuery1.FieldByName('NonNullFieldName');
// The following, which requires the TypInfo unit in the USES list, is to find out exactly which type
// AField is. Answer: ftMemo or ftString, depending on UseMAX.
// Of course, we could get this info by inspection in the IDE
// by creating persistent fields
S := GetEnumName(TypeInfo(TFieldType), Ord(AField.DataType));
Caption := S; // Displays `ftMemo` or `ftString`, of course
AField.AsString:= '';
ADOQuery1.Post; //<-- Exception raised while posting
finally
ADOQuery1.Close;
end;
finally
// Tidy up
ADOConnection1.Execute(scSqlDropTable);
end;
end;
procedure TForm1.Button1Click(Sender: TObject);
begin
Test1;
end;
The problem occurs when using MAX in the data type; both varchar(MAX) and nvarchar(MAX) exhibit this behavior. When MAX is removed and replaced with a large explicit number, such as 5000, empty strings are accepted.
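If changing the column definition is an option, here is a minimal sketch of that workaround for the JDTest table used above (my own illustration; it assumes no existing data exceeds the new size):

-- Replace MAX with a large explicit size so the client-side field is
-- created as ftString instead of ftMemo; empty-string updates then post.
ALTER TABLE dbo.JDTest ALTER COLUMN NonNullFieldName varchar(4096) NOT NULL;

UPDATE dbo.JDTest SET NonNullFieldName = '' WHERE ID = 1;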

SQL Server: While loop with nested If

I'm creating a stock market database and am stumped: the following works correctly EXCEPT for the last SELECT that returns results (after which the SELECT does not change on subsequent loops). I've tried to simplify the code as follows; thanks in advance for feedback (I'm still a noob):
Three tables:
BuyOrders
SellOrders
MatchedOrders
Stored procedure to process a NewBuyOrder:
Insert NewBuyOrder to BuyOrders;
While (NewBuyOrder.SharesRemaining > 0 )
SELECT TOP 1
FROM SellOrders
WHERE SellOrders.Price <= NewBuyOrder.Price
ORDER BY SellOrders.Price, SellOrders.TimePlaced;
IF NewBuyOrder.SharesRemaining < SellOrders.SharesAvailable
UPDATE SellOrders.SharesAvailable = [difference];
UPDATE BuyOrders = 0;
INSERT INTO MatchedOrders;
SET NewBuyOrder.SharesRemaining = 0;
BREAK;
ELSE
UPDATE SellOrders = 0;
UPDATE BuyOrders = [difference];
INSERT INTO MatchedOrders;
SET NewBuyOrder.SharesRemaining = [difference];
CONTINUE;
In the hope it might help someone else, I found the issue... I'm using local variables to store the matched SellOrderID. As a result, if the SELECT returns no match on a second pass, the local variables were not getting updated (and hence were erroneously reused in subsequent loop iterations until the IF kicked in).
So I put a SET of the matched SellOrders.ID variable to 0 into the WHILE loop before the SELECT, then below the SELECT added an IF that checks whether it is still 0 and, inside that, a SET NewBuyOrder.SharesRemaining = 0 and BREAK (then made the first IF above into an ELSE IF). A sketch of that pattern is shown below.
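Here is a minimal sketch of that fix; the variable names, initial values and column list are illustrative rather than the poster's actual schema, and the matching inserts/updates are reduced to comments:

-- Reset the matched-order variable before each search so a failed SELECT
-- cannot silently reuse the previous match, and exit when nothing matches.
DECLARE @MatchedSellOrderID int,
        @SharesAvailable    int,
        @SharesRemaining    int            = 100,    -- from the new buy order (illustrative)
        @BuyPrice           decimal(20, 6) = 10.50;  -- from the new buy order (illustrative)

WHILE @SharesRemaining > 0
BEGIN
    SET @MatchedSellOrderID = 0;               -- reset on every pass

    SELECT TOP (1)
           @MatchedSellOrderID = ID,
           @SharesAvailable    = SharesAvailable
    FROM   SellOrders
    WHERE  Price <= @BuyPrice
      AND  SharesAvailable > 0
    ORDER BY Price, TimePlaced;

    IF @MatchedSellOrderID = 0                 -- no counteroffer left: stop
    BEGIN
        SET @SharesRemaining = 0;
        BREAK;
    END;

    -- match found: INSERT INTO MatchedOrders and UPDATE SellOrders here,
    -- then reduce the remaining shares by the quantity just matched
    SET @SharesRemaining = @SharesRemaining -
        CASE WHEN @SharesAvailable < @SharesRemaining
             THEN @SharesAvailable ELSE @SharesRemaining END;
END;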
I need to revisit the process to see if I can make it more elegant, but I would sincerely welcome thoughts on better ways to match the best available counteroffers in sequence. I've read about cursors but don't know much about them, and I suspect it is transactionally better not to SELECT a prioritized table of all matches but rather to use my iterative loop -- though I have also read suggestions not to use loops in SQL. Comments?
In addition I note the following: by itself, a SELECT with no results returns an empty set. Thus my original plan was to SELECT into my SP's local variables and then use an IF EXISTS. I assume the local variable exists upon declaration (even with no value), but I am surprised that after a SELECT into the local variable with no results it also did not fail an IF NULL test (i.e. presumably NULL cannot be inserted into a variable). What then is the value of a declared local variable with no value -- blank?
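For reference, a small illustration of how T-SQL local variables behave here (my own sketch, not part of the original post): a declared variable is NULL until assigned, and a SELECT assignment that matches no rows leaves the variable's previous value untouched rather than setting it to NULL, which is exactly why a stale value can be reused across loop iterations.

DECLARE @id int;                       -- freshly declared: @id is NULL

SELECT @id = ID FROM SellOrders
WHERE Price <= 10.00;                  -- suppose this matches a row: @id gets a value

SELECT @id = ID FROM SellOrders
WHERE 1 = 0;                           -- matches no rows: @id keeps its previous value,
                                       -- it is NOT reset to NULL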

Strange Behaviour on MSSQL Stored Procedure using Conditional WHERE with CONTAINS (Full Text Index)

I need some help from a MS SQL Master...
Short version:
When I execute a conditional WHERE followed by a CONTAINS, my query takes 1 minute (its normal execution takes 200 milliseconds).
With this query, everything works fine:
Where
Contains(table.product_name, @search_word)
But with a conditional WHERE, it takes 1 minute to execute:
Where
(@ExecuteWhereStatement = 0 Or (Contains(table.product_name, @search_word)))
Long Version:
I'm using a stored procedure that receives some parameters. This stored procedure queries a really large table, but everything is indexed properly and the query has performed well so far.
The main query is a little big, so I want to make the WHERE clause as smart as possible, to avoid repeating the same statements multiple times.
The whole idea of the DataBase, is a history of purchases made by the State. So this query involves 3 tables:
Table 1 (table_purchase) - The purchase itself
id_purchase int (PK)
date_purchase datetime
buyer_code int (Nullable)
Table 2 (table_purchase_product) - The Items of a Purchase
id_product int (PK)
id_purchase int (FK of table_purchase)
product_quantity int (Nullable)
product_name varchar(255) (Nullable) (Full-Text-Indexed)
product_description varchar(2000) (Nullable) (Full-Text-Indexed)
id_product_bid_winner int (FK of table_product_bid)
Table 3 (table_product_bids) - The Bids for Each product of a Purchase
id_product_bid int (PK)
id_product int (FK of table_purchase_product)
product_brand varchar(255) (Nullable) (Full-Text-Indexed)
bid_value decimal (20,6)
So basicly, We have a "Purchase", that has several "Products (or Items)", and each "Product" has some "Bids (or Prices)"
And there is the Bad Girl (The SQL Stored Procedure):
ALTER PROCEDURE [dbo].[procPesquisaFullText]
    @search_date datetime,
    @search_word varchar(8000),
    @search_brand varchar(255),
    @only_one_bid bit = 0,
    @search_buyer_code int = 0,
    @quantityFrom decimal(20,6) = 0,
    @quantityTo decimal(20,6) = 0
AS
BEGIN
    SET NOCOUNT ON;

    Declare @ExecuteWordSearch AS bit;
    if (@search_buyer_code != 0 And @search_word = '')
    begin
        Set @ExecuteWordSearch = 0;
        Set @search_word = 'nothing';
    end
    else
    begin
        Set @ExecuteWordSearch = 1;
    end

    Declare @ExecuteBrandSearch AS bit;
    if (@search_brand = '')
    begin
        Set @ExecuteBrandSearch = 0;
        Set @search_brand = 'nothing';
    end
    else
    begin
        Set @ExecuteBrandSearch = 1;
    end

    begin
        SELECT
            pp.id_product,
            pp.id_purchase,
            pp.product_description
        FROM
            table_purchase_product pp
            inner join table_purchase p on p.id_purchase = pp.id_purchase
        WHERE
            (p.date_purchase >= @search_date)
            and (@search_buyer_code = 0 or (p.buyer_code = @search_buyer_code))
            and (@quantityFrom = 0 or (pp.product_quantity >= @quantityFrom))
            and (@quantityTo = 0 or (pp.product_quantity <= @quantityTo))
            and (contains(pp.product_description, @search_word) or contains(pp.product_name, @search_word))
            and (@only_one_bid = 0
                or ((Select COUNT(*) From table_product_bid Where table_product_bid.id_product = pp.id_product) = 1))
            and (@ExecuteBrandSearch = 0 Or (exists(
                select 1
                from table_product_bid ppb
                where ppb.id_product_bid = pp.id_product_bid_winner
                    and contains(ppb.product_brand, @search_brand)
                )
            ))
        ORDER BY p.date_purchase DESC
    end
END
So far, so good...
In the beginning I set two variables, used inside the query.
The first checks whether the user specified a "Buyer Code" AND did not specify a "Search Word" (so neither the product's description nor the product's name is searched).
The second checks whether the user specified a specific brand. If so, the winning bid's BRAND is checked against the user's one.
Observation: you'll notice that when the "Search Word" is empty, I set it to "nothing". I do this because if the search term passed to CONTAINS is empty, it throws an exception, even when that branch is not executed (I tested this in another, completely isolated query too).
As you can see, my user is able to search for:
- "Products" of a specific buyer's "Purchase" (passing the @search_buyer_code parameter)
- A "Product" that contains a distinct word in its name or description
- A "Product" whose winning bid is of a specific brand
- A "Product" that has only 1 bid at all
- A "Product" with a maximum and minimum quantity
And you'll notice that I used a lot of conditions INSIDE the WHERE, producing a very dynamic WHERE clause, instead of using a big IF/ELSE statement and repeating a lot of code. (I guess some Googlers will land here looking for conditional WHEREs, and if so, I'm glad to help!)
Ok, so everything works veeery well. The query executes flawlessly. But here is the strange, damn, tricky issue:
If I want the user to be able to specify only a "Buyer Code" for the purchase, with no word to search on the product (which is what the first piece of code in the stored procedure handles), I change from:
and (contains(pp.product_description, @search_word) or contains(pp.product_name, @search_word))
To:
and (@ExecuteWordSearch = 0 Or (contains(pp.product_description, @search_word) or contains(pp.product_name, @search_word)))
The query now takes nearly 1 minute! (execution is about 200 milliseconds for the query above).
But WHY??? I use the same logic in all the conditional WHEREs. I also use the same flag/variable pattern for the word search as for the brand search, and the brand search works PERFECTLY! So why, why only when the condition is followed by a CONTAINS does my query take 1 minute????
And this issue is not related to the amount of data, because I tried removing the entire CONTAINS condition, allowing a lot of data to return, and it takes 1 second at most...
Oh, and it's Microsoft SQL Server 2008 R2.
Thanks for reading this far!
I cannot find the documentation I had around a very similar issue, but it sounded so familiar that I at least wanted to share what I remembered. Part of the issue is that in SQL Server the full-text search engine is separate from the regular query execution engine, so when you mix the two, performance can tank in some cases. This is particularly true when the condition is an OR rather than an AND (I remember hitting exactly this situation). Conditional ANDs worked fine, but with OR it is as if each condition gets evaluated repeatedly, row by row.
Among the workarounds, one is, as already suggested, to build your SQL dynamically before execution, for example as sketched below.
Another would be to break the full-text and non-full-text conditions into two search functions (literally UDFs) and then do whatever is needed (INTERSECT, EXCEPT, etc.) with the two result sets, roughly as in the second sketch below.
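A rough sketch of the dynamic-SQL approach, reduced to the date and word-search predicates from the question (the other conditions would be appended the same way); this is illustrative, not the poster's actual procedure:

-- Only splice the CONTAINS predicates into the statement when a search
-- word was actually supplied, so the full-text engine is never involved
-- in the "buyer code only" case.
DECLARE @sql nvarchar(max) =
    N'SELECT pp.id_product, pp.id_purchase, pp.product_description
      FROM table_purchase_product pp
      INNER JOIN table_purchase p ON p.id_purchase = pp.id_purchase
      WHERE p.date_purchase >= @search_date';

IF @ExecuteWordSearch = 1
    SET @sql = @sql + N'
      AND (CONTAINS(pp.product_description, @search_word)
        OR CONTAINS(pp.product_name, @search_word))';

EXEC sp_executesql @sql,
     N'@search_date datetime, @search_word varchar(8000)',
     @search_date = @search_date,
     @search_word = @search_word;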
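For illustration only (inline queries instead of the suggested UDFs, and only a couple of the original conditions), the two-resultset idea looks something like this:

-- Evaluate the full-text part and the relational part separately,
-- then intersect the matching product keys.
SELECT pp.id_product
FROM table_purchase_product pp
WHERE CONTAINS(pp.product_description, @search_word)
   OR CONTAINS(pp.product_name, @search_word)

INTERSECT

SELECT pp.id_product
FROM table_purchase_product pp
INNER JOIN table_purchase p ON p.id_purchase = pp.id_purchase
WHERE p.date_purchase >= @search_date
  AND (@search_buyer_code = 0 OR p.buyer_code = @search_buyer_code);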
Try changing your WHERE clause to use a CASE statement, e.g.:
WHERE
    CASE
        WHEN @ExecuteWhereStatement = 0 THEN 1
        WHEN @ExecuteWhereStatement = 1 THEN
            CASE
                WHEN CONTAINS([table].product_name, @search_word) THEN 1
                ELSE 0
            END
    END = 1;

Getting "Multiple-step operation generated errors" In ASP Classic Using Merged Data Set Function

I am using a function I read about here to merge a series of recordsets that are generated by a stored procedure called in a loop. I have used this function before with other stored procedures and it has never produced this problem.
The research I did online points to basically one of two reasons:
1) I am trying to update or insert a date/time value that is not formatted correctly into a SQL Server 2005 table
2) I am trying to insert, for example, a CHAR(60) string into a CHAR(50) field.
Reason one is not applicable here since I am not using dates (in datetime format at least).
Reason two seemed to be the most likely issue. So I did a series of Response.Write() calls to output the objField.Name, objField.Type, objField.ActualSize and, for numeric fields, the objField.Precision and objField.NumericScale.
Let us say that the stored SQL procedure is called twice as I am querying for values that are occurring in the same time frame but 2 different states. The loop I have in the ASP page does a For Each on the state in the state list and calls the stored procedure for each of the elements in the state list. I then call the MergeRecordSets function so it combines the 2 results into one. The general rule is that the data types and sizes of the columns in each resultset must be the same. With my Response.Write() checks of each of the columns returned in the 2 data sets I have found that they are identical. Doing my checks I also found that it breaks on the first column that is a NUMERIC column. The previous columns it was OK with were all CHAR or VARCHAR.
Is there any other reason why this error would come up?
The following is how I am calling the record merger function. The oQueryResult is going to be the final output (the combined records). objSingleRS is the result set returned by the stored procedure.
If oQueryResult Is Nothing Then
Set oQueryResult = objSingleRS
Else
Set oQueryResult = MergeRecordSets(Array(oQueryResult, objSingleRS))
End If
Here is the merge function. The line in which the code breaks is marked below.
Function MergeRecordSets(arrRecordsets)
Dim x, y, objCurrentRS
Dim objMergedRecordSet, objField, blnExists
Set objMergedRecordSet = Server.CreateObject("ADODB.Recordset")
For x=0 To UBound(arrRecordsets)
Set objCurrentRS = arrRecordsets(x)
For Each objField In objCurrentRS.Fields
blnExists = False
For y=0 To objMergedRecordSet.Fields.Count-1
If LCase(objMergedRecordSet.Fields(y).Name) = Lcase(objField.Name) Then
blnExists = True : Exit For
End If
Next
If Not(blnExists) Then
objMergedRecordSet.Fields.Append objField.Name, objField.Type, objField.DefinedSize
'objMergedRecordSet.Fields(objMergedRecordset.Fields.Count-1).Attributes = 32 'adFldIsNullable
End If
Next
Next
objMergedRecordSet.Open
For x=0 To UBound(arrRecordsets)
Set objCurrentRS = arrRecordsets(x)
Do Until objCurrentRS.EOF
objMergedRecordSet.AddNew
For Each objField In objCurrentRS.Fields
If Not(IsNull(objField.Value)) Then
'Response.Write(objField.Name & "<br>")
'Response.Write(objField.Type & "<br>")
objMergedRecordSet.Fields(objField.Name).Value = objField.Value 'Here is where it throws the Error.
End If
Next
objCurrentRS.MoveNext
Loop
Next
objMergedRecordSet.MoveFirst
Set MergeRecordSets = objMergedRecordSet
End Function
Here is the full error message returned:
Microsoft Cursor Engine error '80040e21'
Multiple-step operation generated errors. Check each status value.
/includes/funcs/Common.asp, line 4109
You mentioned that you have numeric columns, but you never set the Precision and NumericScale properties when you create the new Field in objMergedRecordSet. You need to set these properties for adNumeric and adDecimal fields.
objMergedRecordSet.Fields.Append objField.Name, objField.Type, objField.DefinedSize
With objMergedRecordSet.Fields(objField.Name)
.Precision = objField.Precision
.NumericScale = objField.NumericScale
End With
Also make sure you are not trying to put a NULL into a column that will not accept a NULL value. There is also the possibility of a type mismatch causing this error, so make sure you are passing a numeric value.
- Freddo

SDAC - RecordCount and FetchAll

I am using SDAC components to query a SQL Server 2008 database. The dataset has a RecordCount property, as all datasets do, and it also has a FetchAll property (which I think is called PacketRecords on ClientDataSets). That said, I have a few questions:
1 - If I set FetchAll = True, the RecordCount property returns the correct value. But in this case, when I have a large database and my query returns a lot of rows, the memory usage sometimes grows a lot (because it is fetching all the data to get the record count, of course).
2 - If I set FetchAll = False, RecordCount returns -1 and the memory does not grow. But I really need the record count. I also want to create a generic function for this, so I don't have to change all my existing queries.
What can I do to get RecordCount working while keeping the application's memory usage low?
Please do not post that I don't need RecordCount (or that I should use EOF and BOF), because I really do and that is not the question.
I thought about using a separate query to determine the record count, but that has some problems since my query would be executed twice (once for the count, once for the data).
EDIT
@Johan pointed out a good solution, and it seems to work. Can anybody confirm this? I am using one TMSConnection for every TMSQuery (because I am using threads), so I don't think this will be a problem, will it?
MSQuery1.FetchAll := False;
MSQuery1.FetchRows := 10;
MSQuery1.SQL.Text := 'select * from cidade';
MSQuery1.Open;
ShowMessage(IntToStr(MSQuery1.RecordCount)); //returns 10
MSQuery1.Close;
MSQuery2.SQL.Text := 'SELECT @@rowcount AS num_of_rows';
MSQuery2.Open;
ShowMessage(MSQuery2.FieldByName('num_of_rows').AsString); //returns 289
EDIT 2
MSQuery1 must be closed, or MSQuery2 will not return the num_of_rows. Why is that?
MSQuery1.FetchAll := False;
MSQuery1.FetchRows := 10;
MSQuery1.SQL.Text := 'select * from cidade';
MSQuery1.Open;
ShowMessage(IntToStr(MSQuery1.RecordCount)); //returns 10
//MSQuery1.Close; <<commented
MSQuery2.SQL.Text := 'SELECT @@rowcount AS num_of_rows';
MSQuery2.Open;
ShowMessage(MSQuery2.FieldByName('num_of_rows').AsString); //returns 0
Run your query as normal, then close the query:
MSQuery1.SQL.Text := 'select * from cidade';
MSQuery1.Open;
MSQuery1.Close;
You need the Close, otherwise SQL Server has not closed the cursor yet and will not register the query as 'completed'.
and run the following query right afterwards:
SELECT @@rowcount AS num_of_rows
This will select the total number of rows your last SELECT read. It will also return the number of rows affected by your last UPDATE/DELETE/INSERT statement.
See: http://technet.microsoft.com/en-us/library/ms187316.aspx
Note that this variable is per connection, so queries in other connections do not affect you.
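One caveat worth noting (my own addition, not from the original answer): @@ROWCOUNT reflects the immediately preceding statement on the connection, so it has to be read before anything else runs there, e.g.:

SELECT * FROM cidade;
SELECT @@ROWCOUNT AS num_of_rows;  -- rows returned by the SELECT above

SELECT @@ROWCOUNT AS num_of_rows;  -- now reports the row count of the
                                   -- previous single-row SELECT, i.e. 1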
I use ODAC and I believe SDAC inherits from the same base classes and works the same way as ODAC. In ODAC, there is an option called QueryRecCount under Options in your query component. Look for TCustomDADataSet.Options.QueryRecCount in your help file.
Setting QueryRecCount = True and FetchAll = False will reduce your memory usage and give you the record count. But SDAC will run a second query in the background to get the record count so it does add a little bit of extra time to your query.
Take a look at the Devart forum entry at http://www.devart.com/forums/viewtopic.php?t=8143.
