Oracle Call Interface: Memory fault while executing legacy Pro*C code

This is a menu module that has been running with legacy Pro*C code since the early 2000s.
This particular menu option lets an admin/manager at the retail store add new users or update existing user details. Whatever user record the manager adds through this menu program (written in Pro*C) is reflected in the DB, and the existing user list is fetched from the DB and displayed to the manager when the menu is accessed.
The User record is something like below:
Name            Null?    Type
--------------- -------- -----------------
USER_ID         NOT NULL VARCHAR2(10 CHAR)
USER_NAME                VARCHAR2(30 CHAR)
GROUP_ID                 VARCHAR2(10 CHAR)
UPDATE_USER_ID           VARCHAR2(10 CHAR)
USER_PASSWORD            VARCHAR2(10 CHAR)
Everything has been running fine all these years. Recently we found that managers were facing problems adding new users through the menu. It turned out that the user count had reached 146, and while trying to add a new user (i.e., the 147th record), the program runs into a 'Memory fault'.
Trace files
/home/<Manager_Id>/oradiag_<Manager_Id>/diag/clients/user_<Manager_Id>/host_2779636078_110/trace/ora_26828_140568928807424.trc
DDE: Flood control is not active
2022-12-04T06:02:17.410531-05:00
Incident 1 created, dump file: /home/<Manager_Id>/oradiag_<Manager_Id>/diag/clients/user_<Manager_Id>/host_2779636078_110/incident/incdir_1/ora_26828_140568928807424_i1.trc
oci-24550 [11] [[si_signo=11] [si_errno=0] [si_code=1] [si_int=0] [si_ptr=0x7fd800000000] [si_addr=0x1a55026]] [] [] [] [] [] [] [] [] [] []
And here is incident trace file showing the stack trace.
cat ora_26828_140568928807424_i1.trc
Dump file /home/<Manager_Id>/oradiag_<Manager_Id>/diag/clients/user_<Manager_Id>/host_2779636078_110/incident/incdir_1/ora_26828_140568928807424_i1.trc
[TOC00000]
Jump to table of contents
Dump continued from file: /home/<Manager_Id>/oradiag_<Manager_Id>/diag/clients/user_<Manager_Id>/host_2779636078_110/trace/ora_26828_140568928807424.trc
[TOC00001]
oci-24550 [11] [[si_signo=11] [si_errno=0] [si_code=1] [si_int=0] [si_ptr=0x7fd800000000] [si_addr=0x1a55026]] [] [] [] [] [] [] [] [] [] []
[TOC00001-END]
[TOC00002]
========= Dump for incident 1 (oci 24550 [11]) ========
Tracing is in restricted mode!
<error barrier> at 0x7ffeaf8edd98 placed dbge.c#1317
[TOC00003]
----- Short Call Stack Trace -----
dbgexPhaseII()+1869<-dbgexProcessError()+1871<-dbgePostErrorDirectVaList_int()+1672<-dbgePostErrorDirect()+408<-kpeDbgSignalHandler()+299<-skgesig_sigactionHandler()+258<-__sighandler()<-__strncpy_sse2_unaligned()+2598<-process_user_security()+2649<-display_user_security()+1479<-menu1()+4839<-main()+203<-__libc_start_main()+245[TOC00003-END]
[TOC00004]
As shown in the stack trace:
__strncpy_sse2_unaligned()+2598<-process_user_security()+2649<-display_user_security()
I inspected these function calls in the Pro*C code but was unable to figure out why the 'Memory fault' happens exactly after 146 users. In the DB we are able to add more users manually without any issues.
I have read previous answers related to 'Memory fault', which mention possibilities such as dereferencing an invalid pointer, an index out of bounds, etc. But that doesn't narrow it down much, as this is 18K LOC. I need guidance from anyone who can help trace the culprit in this gigantic codebase.
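A crash that appears exactly when the row count crosses a fixed number, with __strncpy_sse2_unaligned() at the top of the stack under process_user_security(), is the classic signature of a fetch loop overrunning a fixed-size host array. Here is a minimal C sketch of the suspected pattern; all names, struct sizes, and the 146 limit are assumptions for illustration, not the real code:

```c
#include <string.h>

/* Hypothetical reconstruction: a Pro*C fetch loop often reads rows into a
 * fixed-size host array. If the table grows past the array size and the copy
 * loop has no bound check, the 147th strncpy() writes past the end of the
 * array, which matches a SIGSEGV inside __strncpy_sse2_unaligned(). */
#define MAX_USERS 146                /* assumed hard-coded capacity */

struct user_rec {
    char user_id[11];                /* VARCHAR2(10 CHAR) + NUL terminator */
    char user_name[31];              /* VARCHAR2(30 CHAR) + NUL terminator */
};

static struct user_rec users[MAX_USERS];

/* The bound check below is the fix: refuse row 147 instead of writing it. */
int add_user(int count, const char *id, const char *name)
{
    if (count >= MAX_USERS)
        return -1;                   /* array full: without this, memory fault */
    strncpy(users[count].user_id, id, sizeof users[count].user_id - 1);
    users[count].user_id[sizeof users[count].user_id - 1] = '\0';
    strncpy(users[count].user_name, name, sizeof users[count].user_name - 1);
    users[count].user_name[sizeof users[count].user_name - 1] = '\0';
    return 0;
}
```

If this pattern is the cause, a practical first step is to grep the 18K LOC for array declarations (or a #define) sized at or near 146 that are used by process_user_security() and display_user_security().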

Related

Unable to add items to Roblox Table

I am having difficulty troubleshooting some code.
I have a for loop and in it I clone a part (called EnemySiteHub).
I expect that I can store each cloned part to a table (called EnemySiteTable).
Unfortunately, even though the loop runs successfully and I actually see the cloned EnemySiteHubs during a run of the game, the table's size remains 0.
Trying to access the table in code gives a nil error.
Code snip:
local ENEMYSITE_COUNT = 5
local EnemySiteTable = {} -- [[ Store the table of enemy site objects ]]
-- Loops until there are the amount of enemy site hubs set in ENEMYSITE_COUNT
for i = 1, ENEMYSITE_COUNT do
    -- Makes a copy of EnemySiteHub
    local enemySite = ServerStorage.EnemySites.EnemySiteHub:Clone()
    enemySite.Parent = workspace.EnemySites
    EnemySiteTable[i] = enemySite
end
This line of code causes the error below.
local enemySiteTableSize = #enemySiteTable
18:12:37.984 - ServerScriptService.MoveEnemyToSite:15: attempt to get length of a nil value
Any help will be appreciated.
#array is used to retrieve the length of an array. You will have to use some sort of table function or a for i,v in pairs(EnemySiteTable) loop.
Here's some more information: https://developer.roblox.com/en-us/articles/Table
Thanks @pyknight202
The problem originated somewhere else in my code.
The EnemySiteTable is in a module script.
This code below is the correct code to give access to the EnemySiteTable
--Have the table of enemies accessible
EnemySiteManager.EnemySiteTable = EnemySiteTable
I had an error (typo) in that line of code.
The effect of that error kept returning a nil table, giving a table size of 0.

Talend avoid duplicate external ID with Salesforce Output

We are importing data on Salesforce through Talend and we have multiple items with the same internal id.
Such an import fails with the error "Duplicate external id specified" because of how upsert works in Salesforce. At the moment we have worked around it by setting the commit size of the tSalesforceOutput to 1, but that only works for small amounts of data; otherwise it would exhaust the Salesforce API limits.
Is there a known approach to this in Talend? For example, to ensure that items with the same external ID end up in different "commits" of tSalesforceOutput?
Here is the design for the solution I wish to propose:
tSetGlobalVar is here to initialize the variable "finish" to false.
tLoop starts a while loop with (Boolean)globalMap.get("finish") == false as an end condition.
tFileCopy is used to copy the initial file (A for example) to a new one (B).
tFileInputDelimited reads file B.
tUniqRow eliminates duplicates. Unique records go to tLogRow, which you have to replace with tSalesforceOutput. Duplicate records, if any, go to a tFileOutputDelimited called A (the same name as the original file) with the option "Throw an error if the file already exists" unchecked.
OnComponent OK after tUniqRow activates the tJava, which sets the new value for the global "finish" with the following code:
if (((Integer)globalMap.get("tUniqRow_1_NB_DUPLICATES")) == 0) globalMap.put("finish", true);
Explanation with the following sample data:
line 1
line 2
line 3
line 2
line 4
line 2
line 5
line 3
On the 1st iteration, 5 unique records are pushed into tLogRow, 3 duplicates are pushed into file A, and "finish" is not changed as there are duplicates.
On the 2nd iteration, the operations are repeated for 2 unique records and 1 duplicate.
On the 3rd iteration, the operations are repeated for 1 unique record, and as there are no more duplicates, "finish" is set to true and the loop automatically finishes.
Here is the final result:
You can also decide to use another global variable to set the Salesforce commit level (using the syntax (Integer)globalMap.get("commitLevel")). This variable will be set to 200 by default and to 1 in the tJava if there are any duplicates. At the same time, set "finish" to true (without testing the number of duplicates) and you'll have a commit level of 200 for the 1st iteration and of 1 for the 2nd (with no need for more than 2 iterations).
You'll decide the better choice depending on the number of potential duplicates, but note that you can do it without any change to the job design.
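The looping design above can be sketched in plain Java (the class and method names here are illustrative, not Talend components): each pass keeps the first occurrence of every external id as the batch to upsert, and carries the duplicates over to the next pass, stopping when no duplicates remain.

```java
import java.util.*;

public class DedupBatches {
    /** Splits records into successive batches so that no batch contains the
     *  same key twice, mirroring the tUniqRow/tLoop design: uniques are
     *  pushed (upserted), duplicates go around the loop again. */
    public static List<List<String>> batches(List<String> records) {
        List<List<String>> result = new ArrayList<>();
        List<String> remaining = new ArrayList<>(records);
        while (!remaining.isEmpty()) {
            Set<String> seen = new LinkedHashSet<>();   // keeps input order
            List<String> duplicates = new ArrayList<>();
            for (String r : remaining) {
                if (!seen.add(r)) {
                    duplicates.add(r);  // key already in this batch: defer it
                }
            }
            result.add(new ArrayList<>(seen));          // safe to upsert
            remaining = duplicates;                     // next iteration
        }
        return result;
    }
}
```

On the sample data above this produces three batches (5, 2, and 1 records), matching the three iterations described.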
I think it should solve your problem. Let me know.
Regards,
TRF
Do you mean you have the same record (the same account, for example) twice or more in the input? If so, can't you try to eliminate the duplicates and keep only the record you need to push to Salesforce? Else, if each record has specific information (so you need all the input records to build a complete one in Salesforce), consider merging the records before pushing the result to Salesforce.
And finally, if you can't do that, push the duplicates into a temporary space, push all records except the duplicates into Salesforce, and iterate over this process until there are no more duplicates. Personally, if you can't just eliminate the duplicates, I prefer the 2nd approach, as it's the solution with fewer Salesforce API calls.
Hope this helps.
TRF

Apex Trigger in Salesforce using 2 objects

I am new to Salesforce; the project requires that I keep track of the last_id used.
I created 2 SF objects: one holds the last_id, the other holds the total number of ids to assign. I would like the user to enter a number, and that number will be added to the last_id. The result will be stored in the tracking object.
Code:
tracking_next_id__c[] btnext = [SELECT last_end_id__c FROM tracking_next_id__c];
for (tracking__c updatedAccount : Trigger.new)
{
    updatedAccount.next_id__c = btnext[0].last_end_id__c + updatedAccount.total_account__c;
}
When I run the trigger, I get the error below:
Invalid Data.
Review all error messages below to correct your data.
Apex trigger getNextId caused an unexpected exception, contact your administrator: getNextId: execution of AfterUpdate caused by: System.FinalException: Record is read-only: Trigger.getNextId: line 11, column 1
After much research, I found that if I changed my trigger to before update instead of after update, it worked.
Right, you cannot edit a record in an after trigger: the records in Trigger.new are read-only once they have been saved, so field changes like this belong in a before update trigger.

Yesod Persistent atomic interaction

I was completely missing the point of the database connection and rollback feature, so I was using runDB myAction every time without realizing what was going on. Today I ran some tests to try to understand how the rollback works, and one of them was this:
getTestR :: Handler Text
getTestR = do
    runDB $ insert $ Test 0
    runDB $ do
        forM_ [1..] $ \n -> do
            if n < 10
                then do
                    insert $ Test n
                    return ()
                else undefined
    return "completed"
I got an undefined error at runtime, as expected, and only the first runDB action made it into the database; the second runDB got rolled back, and when I inserted another record, its id started 9 positions ahead of the last persisted element.
Suppose I have to do 2 gets actions in the database, and I do them in two ways, first I do:
getTestR :: FooId -> BooId -> Handler Text
getTestR fooid booid = do
    mfoo <- runDB $ get fooid
    mboo <- runDB $ get booid
    return "completed"
and then I try:
getTest'R :: FooId -> BooId -> Handler Text
getTest'R fooid booid = do
    (mfoo, mboo) <- runDB $ do
        mfoo <- get fooid
        mboo <- get booid
        return (mfoo, mboo)
    return "completed"
What would be the actual overall difference? I think that in this case database consistency is not an issue, but performance may be (or will Haskell's laziness make them equal, because mfoo and mboo are never used and so never queried?). These questions may look very basic, but I would like to be sure I don't have gaps in my understanding.
I think you have answered your own question while discussing the two DB actions. runDB has the following signature:
runDB :: YesodDB site a -> HandlerT site IO a
YesodDB is a ReaderT monad transformer. runDB lifts a DB action to an IO action. In the first example, there are two separate IO actions (not one DB action); in the second snippet, there is only a single DB action. In the first example, one or both actions may succeed, but in the second you will either get the result of both gets or an error.
As there are two IO actions wrapping the two runDBs, the DB interaction is not optimized, since each runDB represents a single transaction. In the second, however, the two actions will share the same connection.
You might want to have a look at YesodPersistentBackend and use getDBRunner to share a connection from the pool.

How can I debug problems with warehouse creation?

When trying to create a warehouse from the Cloudant dashboard, the process sometimes fails with an error dialog. Other times, the warehouse extraction stays in a "triggered" state even after hours.
How can I debug this? For example is there an API I can call to see what is going on?
Take a look inside the document in the _warehouser database and look for the warehouser_error_message element. For example:
"warehouser_error_message": "Exception occurred while creating table.
[SQL0670N The statement failed because the row size of the
resulting table would have exceeded the row size limit. Row size
limit: \"\". Table space name: \"\". Resulting row size: \"\".
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-670,
SQLSTATE=54010, SQLERRMC=32677;;34593, DRIVER=4.18.60]"
The warehouser error message usually gives you enough information to debug the problem.
You can view the _warehouser document in the Cloudant dashboard or use the API, e.g.
export cl_username='<your_cloudant_account>'
curl -s -u $cl_username -p \
  "https://$cl_username.cloudant.com/_warehouser/_all_docs?include_docs=true" \
  | jq '[.warehouse_error_code]'
