System: MS SQL Server 2016, Apache NiFi 1.11.4
The workflow for inserting records in MS-SQL with PutSQL processor works fine.
The workflow for updating records hangs:
2 FlowFiles in queue
PutSQL processor shows "active thread"
but no log entry, no error message, and no FlowFiles routed to success, failure, or retry
The UPDATE statement is:
UPDATE [MyERP].[dbo].[Cust_audit]
SET [SYNC_SYNCED_AT] = CURRENT_TIMESTAMP,
[SYNC_STATUS] = 'OK'
WHERE [SYNC_RECID] = ?
and the FlowFile attributes are:
sql.args.1.type=4
sql.args.1.value=373737
(type 4 is java.sql.Types.INTEGER, matching the integer SYNC_RECID parameter)
Any idea what's going wrong?
I found the solution:
In the PutSQL processor, "Support Fragmented Transactions" was still set to "true" but should be "false". With it enabled, PutSQL holds FlowFiles that carry fragment attributes (fragment.identifier/fragment.count) until the whole fragmented transaction has arrived, so the FlowFiles just sat in the queue.
Sorry for asking stupid questions ;)
I am stuck on this challenge and not sure why it is not completing. Please have a look at the details below.
Error Message -
Challenge Not yet complete... here's what's wrong: The Fulfillment Cancellation Automation process does not appear to be working properly. Make sure that a cancelled Fulfillment updates the Adventure Package correctly.
My Process builder is as follows:
Object: Fulfillment
Entry Criteria: [Fulfillment__c].Status__c = Cancelled AND [Fulfillment__c].Schedule_Date__c > TODAY()
Immediate Actions:
Based on [Fulfillment__c].Opportunity.OpportunityLineItems
Field Update Filter condition :
Line Item ID equals Formula [FullFillment__c].AdventurePackageId__c
Field to Update :
Sales Price equal to [Fulfillment__c].Deposit__c
I did some searching on the web and changed the things below as well, but it is not working for me.
The Explorer__c field was set to "Required" and "What to do if the lookup record is deleted?" was set to "Don't allow deletion of the lookup record that's part of a lookup relationship.".
I updated the "Required" to false and changed "What to do if the lookup record is deleted?" to "Clear the value of this field. You can't choose this option if you make this field required."
I have also made the Explorer__c field not required on the layout.
After all the above changes, I am still not able to complete the challenge.
Any help will be really appreciated.
Thanks in advance.
I'm getting this as well, and I think there may well be a bug in their test.
I've manually tested the processes, and it works as described. The Sales Price on the Adventure Package gets updated to the Fulfillment's Deposit amount.
Looking in the debug logs, the query clearly selects 1 record (which is what we'd expect) into a List called fullfillmentList before the code immediately fails an assertion with the message Fulfillment list is empty.
This error shows up because you might have deactivated the previous process, i.e. Fulfillment Creation, which also needs to be active to complete this step of the superbadge.
(Submitting on behalf of a Snowflake user)
At the time of query execution on Snowflake, I need its query ID, so I am using the following code snippet:
cursor.execute(query, _no_results=True)
query_id = cursor.sfqid
cursor.query_result(query_id)
This code snippet works fine for short-running queries, but for a query that takes more than 40-45 seconds to execute, the query_result function fails with KeyError: u'rowtype'.
Stack trace:
File "snowflake/connector/cursor.py", line 631, in query_result
self._init_result_and_meta(data, _use_ijson)
File "snowflake/connector/cursor.py", line 591, in _init_result_and_meta
for column in data[u'rowtype']:
KeyError: u'rowtype'
Why would this error occur? How to solve this problem?
Any recommendations? Thanks!
The Snowflake Python Connector allows for async SQL execution by using cur.execute(sql, _no_results=True)
This "fire and forget" style of SQL execution allows for the parent process to continue without waiting for the SQL command to complete (think long-running SQL that may time-out).
If this is used, many developers will write code that captures the unique Snowflake Query ID (like you have in your code) and then use that Query ID to "check back on the query status later", in some sort of looping process. When you check back to get the status, you can then get the results from that query_id using the result_scan( ) function.
https://docs.snowflake.net/manuals/sql-reference/functions/result_scan.html
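For reference, here is a rough sketch of that pattern in Python. It assumes the snowflake-connector-python API already shown in the question (execute with _no_results=True, cursor.sfqid); the connection parameters, the SYSTEM$WAIT test query, the retry count, and the polling delay are illustrative only:

import time
import snowflake.connector
from snowflake.connector.errors import ProgrammingError

conn = snowflake.connector.connect(
    account='my_account', user='my_user', password='my_password'
)
cur = conn.cursor()

# Fire and forget: submit the query without waiting for its results.
cur.execute('SELECT SYSTEM$WAIT(60)', _no_results=True)
query_id = cur.sfqid

# Check back later: RESULT_SCAN returns the result set of a finished
# query. While the query is still running the call fails, so retry
# after a short delay.
for _ in range(30):
    try:
        cur.execute("SELECT * FROM TABLE(RESULT_SCAN('{0}'))".format(query_id))
        rows = cur.fetchall()
        break
    except ProgrammingError:
        time.sleep(5)  # not finished yet; wait and try again

Newer versions of the connector also expose status-checking helpers on the connection (e.g. get_query_status), which can make the polling step cleaner.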
I hope this helps...Rich
Using SQL Server, Delphi 10.3.1, and FireDAC.
I am using cached updates, with autocommit on.
I keep managing to get my data into a state where the record has been deleted from the database, and I have also deleted that record in the dataset.
Then, when it attempts to commit the change to the database (where the data no longer exists), I get an error:
[my application] raised exception class emssqlNativeException with message [firedac][Phys][odbc][sqlncli11.dll] SQL_NO_DATA
And then I can't clear the cached updates on the dataset, because the failed change is still 'sitting' there.
My question: how can I get it to NOT return that error? Because it's really not an error; it's trying to delete a record that no longer exists. I am not finding ANY documentation on the update options on a query, so is there a flag there I need to set?
You can handle update errors in OnUpdateError and perform any additional checks before deciding how to proceed. Blindly pretending all deletes worked would be something like:
procedure TForm1.FDQuery1UpdateError(ASender: TDataSet;
  AException: EFDException; ARow: TFDDatSRow;
  ARequest: TFDUpdateRequest; var AAction: TFDErrorAction);
begin
  // Treat every failed DELETE as if it had been applied successfully
  if ARequest = arDelete then
    AAction := eaApplied;
end;
Read the online help for OnUpdateError for more information.
Recently I ran into a strange problem; see the code snippet below:
var
  connection: TADOConnection;
  qry: TADOQuery;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    qry := TADOQuery.Create(nil);
    try
      qry.Connection := connection;
      qry.SQL.Text := 'Select * from aaa';
      qry.Open;
      qry.Append;
      qry.FieldByName('TestField1').AsString := 'test';
      qry.Post;
      beep;
    finally
      qry.Free;
    end;
  finally
    connection.Free;
  end;
end;
First, create a new Access database named Test.MDB and put it in the directory of this test project; in it, create a table named aaa with a single text field named TestField1.
We set a breakpoint at the "beep" line, then launch the test application in the IDE's debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), open Test.MDB in Microsoft Access and look at table aaa: there are no changes at all. If you press F9 and let the application continue running, you will find a new record inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, no record is inserted. In normal circumstances, a new record should be inserted into table aaa after qry.Post has executed.
Can anyone explain this problem? It has troubled me for a long time. Thanks!
BTW, the IDE is Delphi 2010, and the Access MDB file was created by Microsoft Access 2007 under Windows 7.
Access won't show you records from transactions that haven't been committed yet. At the point where you pause your program, the implicit transaction created by the connection hasn't been committed yet. Haven't experimented, but my guess would be that the implicit transaction will be committed after you free the query. So if you pause just after that, you should see your record in MS Access.
After more information from Ryan (see his answer to himself), I did a little more investigating.
Having a primary key (autonumber or otherwise) doesn't seem to affect the behaviour.
Table with autonumber column as primary key
  connection.Execute('insert into aaa (TestField1) values (''Test'')');
  connection.Execute('select * from aaa');
  connection.Execute('delete * from aaa');
  beep;
finally
  connection.Free;
end;
Stopping on the "select" does not show the new record.
Stopping on the "delete" shows the new record.
Stopping on the "beep" still shows all records in the table even after repeated refresh's.
Stopping on the "connection.Free" shows no more records in the table. Huh?
Stopping on a "select" inserted between the "delete" and the "beep" shows no more records in the table.
Same table
connection.Execute('insert into aaa (TestField1) values (''Test'')');
beep;
connection.Execute('delete * from aaa');
beep;
beep;
Stopping on each statement shows that Access doesn't receive the "command" until at least one other statement has been executed. In other words, the beep after the "Execute" statement must have been processed before the statement is processed by Access (it may take a couple of refreshes to show up; the first refresh isn't always enough). If you stop on the first beep after the "Execute" statement, nothing has happened in Access, and nothing will if you reset the program without executing any other statements.
Stepping into connection.Execute (with "Use debug DCUs" on): the effect of the executed SQL statement is now visible in Access on return to the beep. Actually, it is visible much earlier: for example, stepping into the "delete" statement, the record becomes marked #Deleted somewhere still in the ADODB code.
In fact, when stepping through the adodb code, the record becomes visible in Access when stopped in the OnExecuteComplete handler. Not when stopped on the "begin", but when stopped on the "if Assigned" immediately thereafter. The same applies to the delete statement. The effect becomes visible in Access when stopped on the if statement in the OnExecuteComplete handler in AdoDb.
ADO does have an ExecuteOption to execute statements asynchronously. It wasn't in effect during all this (it's not included by default). And while we are dealing with an out-of-process COM server and with callbacks such as the OnExecuteComplete handler, that handler was executed before returning to the statement right after the ConnectionObject.Execute call in the TAdoConnection.Execute method in AdoDb.
All in all I think it isn't so much a matter of synchronous or asynchronous execution, but more a matter of when references are released (we are dealing with COM and interface reference counting), or with thread and process timing issues (in app, Access and between them), or with a combination thereof.
And the debugger may just be muddling things more than clarifying them. It would be interesting to see what happens in D2010 with its single-thread debugging capabilities, but I haven't got it available where I am (now and for the next two weeks).
First, Marjan, thank you for your answer. I am very sure I clicked the refresh button at that time, but still nothing changed...
After many experiments, I found that if I added an auto-increment ID field to the table as primary key, this strange behaviour did not happen. But even after doing this, there is another strange behaviour, which I will show in the code snippet below:
procedure TForm9.btn1Click(Sender: TObject);
var
  connection: TADOConnection;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    connection.Execute('insert into aaa (TestField1) values (''Test'')');
    connection.Execute('select * from aaa');
    connection.Execute('delete * from aaa'); // breakpoint 1
    beep; // breakpoint 2
  finally
    connection.Free;
  end;
end;
Put two breakpoints at the "delete" line and the "beep" line. When the code stops at breakpoint 1, you can refresh the database and you will find the record was inserted. Continue running; when the code stops at breakpoint 2, you will find the record is still there... If at this point you press Ctrl+F2, the record is not deleted... If connection.Execute were a truly synchronous procedure, this should not happen. Sorry for checking your answer so late; I was away for our Dragon Boat Festival...
Marjan, thanks for your response again, but I can't accept this behaviour of the connection engine. Today I found something useful on the MSDN website:
http://msdn.microsoft.com/en-us/library/ms719649(v=VS.85).aspx
Fortunately, I have resolved the problem according to that article. The default value of the property "Jet OLEDB:Implicit Commit Sync" is false, which according to the explanation of this property means that implicit transactions are committed in asynchronous mode. So what we can do is set this property to true with the following code:
connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := true;
BTW, according to that article, this property can only be set through the Properties collection of the connection object; if it is set in the connection string, an error will occur.
Is there a concise list of SQL Server stored procedure errors that make sense to automatically retry? Obviously, retrying a "login failed" error doesn't make sense, but retrying "timeout" does. I'm thinking it might be easier to specify which errors to retry than to specify which errors not to retry.
So, besides "timeout" errors, what other errors would be good candidates for automatic retrying?
Thanks!
You should retry (re-run) the entire transaction, not just a single query/SP.
As for the errors to retry, I've been using the following list:
DeadlockVictim = 1205,
SnapshotUpdateConflict = 3960,
// I haven't encountered the following 4 errors in practice
// so I've removed these from my own code:
LockRequestTimeout = 1222,
OutOfMemory = 701,
OutOfLocks = 1204,
TimeoutWaitingForMemoryResource = 8645,
The most important one is of course the "deadlock victim" error 1205.
I would extend that list; if you want an absolutely complete list, use the query below and filter the result.
select * from master.dbo.sysmessages where description like '%memory%'
int[] errorNums = new int[]
{
701, // Out of Memory
1204, // Lock Issue
1205, // Deadlock Victim
1222, // Lock request time out period exceeded.
7214, // Remote procedure time out of %d seconds exceeded. Remote procedure '%.*ls' is canceled.
7604, // Full-text operation failed due to a time out.
7618, // %d is not a valid value for a full-text connection time out.
8628, // A time out occurred while waiting to optimize the query. Rerun the query.
8645, // A time out occurred while waiting for memory resources to execute the query. Rerun the query.
8651, // Low memory condition
};
You can use a SQL query to look for errors explicitly requesting a retry (trying to exclude those that require another action too).
SELECT error, description
FROM master.dbo.sysmessages
WHERE msglangid = 1033
AND (description LIKE '%try%later.' OR description LIKE '%. rerun the%')
AND description NOT LIKE '%resolve%'
AND description NOT LIKE '%and try%'
AND description NOT LIKE '%and retry%'
Here's the list of error codes:
539,
617,
952,
956,
983,
1205,
1807,
3055,
5034,
5059,
5061,
5065,
8628,
8645,
8675,
10922,
14258,
20689,
25003,
27118,
30024,
30026,
30085,
33115,
33116,
40602,
40642,
40648
You can tweak the query to look for other conditions like timeouts or memory problems, but I'd recommend configuring your timeout length correctly up front, and then backing off slightly in these scenarios.
I'm not sure about a full listing of these errors, but I can warn you to be VERY careful about retrying queries. Often there's a larger problem afoot when you get errors from SQL, and simply re-running queries will only compound the issue. For instance, with the timeout error, you will typically have a network bottleneck, poorly indexed tables, or something along those lines, and re-running the same query will add to the latency of the other queries already obviously struggling to execute.
The one SQL Server error that you should always catch on inserts and updates (and it is quite often missed) is the deadlock error, no. 1205.
Appropriate action is to retry the INSERT/UPDATE a small number of times.
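To make that concrete, here is a minimal retry sketch in Python with pyodbc. It follows the earlier advice to re-run the entire transaction, not a single statement; the connection string handling, the error matching on the exception text, the retry count, and the back-off delay are all illustrative assumptions:

import time
import pyodbc

RETRYABLE_CODES = ('1205',)  # deadlock victim; extend with codes from the lists above
MAX_ATTEMPTS = 3

def run_transaction(conn_str, statements):
    # Run all statements in one transaction; on a retryable error,
    # roll back and re-run the whole transaction from the start.
    for attempt in range(1, MAX_ATTEMPTS + 1):
        conn = pyodbc.connect(conn_str, autocommit=False)
        try:
            cur = conn.cursor()
            for sql in statements:
                cur.execute(sql)
            conn.commit()  # the whole transaction commits together
            return
        except pyodbc.Error as exc:
            conn.rollback()
            # pyodbc surfaces the native SQL Server error number in the
            # exception text; matching on it is crude but workable here.
            if attempt < MAX_ATTEMPTS and any(code in str(exc) for code in RETRYABLE_CODES):
                time.sleep(0.5 * attempt)  # simple linear back-off
                continue
            raise
        finally:
            conn.close()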