ADF Calendar performance leak

I am using JDeveloper 11.1.2.3.0
I have implemented af:calendar functionality in my application. My calendar is based on a ViewObject that queries a database table with a large number of records (500-1000). Selecting the rows directly with a SQL query against the table is very fast, only a few milliseconds. The problem is that my af:calendar takes far too long to load: more than 5 seconds. Even just changing the month or the calendar view means waiting roughly that long again. I searched a lot through the net but found no explanation for this. Can anyone please explain why it takes so long? Has anyone ever faced this issue?
PS: I have tested with JDeveloper 12 as well, and the problem is exactly the same.

You should look into the ViewObject tuning properties to see how many records you fetch in a single network access, and do the same check for the executable that populates your calendar.
Also try using the HTTP Analyzer to see what network traffic is going on and the ADF Logger to check what SQL is being sent to the DB.
https://blogs.oracle.com/shay/entry/monitoring_adf_pages_round_trips
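The fetch-size point above can be illustrated with simple arithmetic. This is a plain Python sketch, not ADF API code, and the row counts and fetch sizes are hypothetical; it just shows how a small "in Batches of" setting multiplies network round trips:

```python
# Illustrative sketch: why the ViewObject fetch size matters.
# With a tiny batch size, pulling N rows costs many network round trips;
# raising the batch size cuts the trip count dramatically.
import math

def round_trips(total_rows: int, fetch_size: int) -> int:
    """Number of network round trips needed to pull total_rows rows."""
    return math.ceil(total_rows / fetch_size)

# Hypothetical numbers: 1000 calendar rows, fetch size of 1 vs 250.
print(round_trips(1000, 1))    # 1000 round trips
print(round_trips(1000, 250))  # 4 round trips
```

If the HTTP Analyzer shows many small fetches per month change, raising the fetch size on the ViewObject (and on the page-definition executable) is the first thing to try.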

Related

Does PowerQuery Data Preview work differently than Data Loading in PowerBI?

PowerQuery tends to load previews quickly but the very same data takes much longer to load into the PowerBI model when we click 'Close and Apply'. Is there some difference in the way these two things are done? They both depend only on a SQL server stored procedure and I cannot share any screenshots here due to confidentiality of company data. I am hoping someone else has had this issue and/or understands how PowerBI data loads work and can explain this difference.
I have tried multiple data loads and varied the timeout period. I expected that lengthening the timeout would make a difference, but the load still failed.
Power Query can work differently depending on the circumstances (flat files, query folding, the transformations involved). It typically streams data rather than loading an entire dataset, to be more efficient with memory. Since the preview window only shows 1000 records, if no aggregations or sorts are involved, only 1000 records are streamed, which is why you get a quick preview. When loading the entire dataset, every record has to go through the transformation steps, not just the first 1000. This is a really in-depth topic, and videos like the following give you some insight: https://www.youtube.com/watch?v=AIvneMAE50o
The following articles also explain this in detail:
https://bengribaudo.com/blog/2018/02/28/4391/power-query-m-primer-part5-paradigm
https://bengribaudo.com/blog/2019/12/10/4778/power-query-m-primer-part12-tables-table-think-i
https://bengribaudo.com/blog/2019/12/20/4805/power-query-m-primer-part13-tables-table-think-ii
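The streaming-versus-full-load difference described above can be sketched in plain Python (Power Query itself is written in M; the function names below are made up for illustration). The point is that a lazy pipeline only does work for the rows you actually pull through it:

```python
# Illustrative sketch of preview vs. full load in a streaming pipeline.
import itertools

def source_rows(n):
    """Stands in for a streaming data source of n rows."""
    for i in range(n):
        yield {"id": i, "value": i * 2}

def transform(rows):
    """A row-by-row transformation step (streamable)."""
    for row in rows:
        yield {**row, "value": row["value"] + 1}

# Preview: only the first 1000 rows are ever pulled through the pipeline,
# even though the source nominally has 10 million.
preview = list(itertools.islice(transform(source_rows(10_000_000)), 1000))
print(len(preview))  # 1000

# Full load ('Close and Apply'): every row must pass through every step.
# full = list(transform(source_rows(10_000_000)))  # vastly more work
```

An aggregation or sort in the pipeline breaks this, because the step must see all rows before emitting any, which is one reason a preview can be fast while the same query's full load is slow.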

How Can I Update My Datagridview Realtime With Two Or More User?

I have the same app installed on PC 1 and PC 2, with the connection string pointing to my database server by IP address. For example, if user 1 and user 2 are both accessing table_employee, and user 1 makes a change to that table, how can user 2's DataGridView update in real time so they see the change?
I have used a timer so the DataGridView refreshes every 3-5 seconds, with or without changes, but a timer makes the app lag when there are thousands of rows stored in the database and the DataGridView has to load/refresh them every few seconds. I need another solution/approach for this.
Thank you!
vb.net 2010, Microsoft Sql Server 2014
Fundamentally, you cannot update a DataGridView in real time in Access. It is an event-driven application rather than a continuously running executable: it expects a user action to trigger events.
You already know about the form timer event, and that is the normal work-around for this situation.
The fact that the grid takes a long time to re-query is somewhat of a different issue. That may be improvable depending on the cause: linked Excel sheets are slower than linked Access tables, and other causes involving the query record set, such as sorting and lookups, might be addressable in other ways so that the refresh is less noticeable.
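One way to make the timer work-around cheaper is to pair it with an inexpensive change probe, so the expensive full re-query only runs when something actually changed. This is a language-agnostic Python sketch (the real app is VB.NET), and it assumes the table has, or can be given, a last-modified/row-version column; all names here are hypothetical:

```python
# Timer tick with a cheap change probe: most ticks cost one tiny aggregate
# query instead of reloading thousands of rows into the grid.

def refresh_if_changed(get_version, reload_grid, state):
    """get_version: cheap probe, e.g. SELECT MAX(last_modified) FROM table_employee.
    reload_grid:  the expensive full re-query that repopulates the DataGridView.
    state:        dict remembering the last version this client saw."""
    current = get_version()
    if current != state.get("last_version"):
        state["last_version"] = current
        reload_grid()   # only hit the database hard when something changed

# On each 3-5 second timer tick the client would call:
#   refresh_if_changed(probe_db, reload_grid, state)
```

SQL Server 2014 also offers push-style alternatives (e.g. SqlDependency/query notifications) that avoid polling entirely, though they come with their own restrictions.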

How to keep large SOQL query cached in Salesforce Org. for speedy returns?

We are having an issue where we need event relations for a group of people, and this very large group has almost 400 total event relations in the one week we are testing on. When we try to fetch this large group's event relations, it takes forever and can time out. However, if you try again right after a timeout, it completes in a couple of seconds. I assumed Salesforce was caching the SOQL query results, which is why it could act so quickly the second time. I tried to keep the query cached and ready by running a batch job regularly that queried every member's event relations, so that the timeout would stop when users accessed our app.
However, this does not appear to work. Even though the batch runs correctly and queries all these event relations, if you go to the app after a while without using it, the first request still times out or takes very long, and only subsequent requests are quick.
Is there a way to keep this cached so it runs quickly when a user goes to see all the event relations of a large group of people? Using the Developer Console, we saw that the event relation query was the huge time sink in the code and the real issue. I have been looking into Salesforce's Platform Cache. Would storing this data there provide the solution I am looking for?
You should look into making your query selective by using indexed fields in the WHERE clause, and custom indexes if necessary.
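As for the Platform Cache idea floated in the question, the general pattern it enables is cache-aside with a TTL. This is a generic Python sketch, not Apex, and expensive_query is a stand-in for the slow event-relation SOQL:

```python
# Cache-aside sketch: serve from cache when fresh, otherwise run the slow
# query once and store the result with an expiry time.
import time

cache = {}          # key -> (expires_at, value)
TTL_SECONDS = 300   # hypothetical time-to-live

def get_event_relations(group_key, expensive_query):
    now = time.time()
    hit = cache.get(group_key)
    if hit and hit[0] > now:            # fresh entry: skip the slow query
        return hit[1]
    value = expensive_query(group_key)  # slow path: run the query once
    cache[group_key] = (now + TTL_SECONDS, value)
    return value
```

A scheduled job that periodically calls the getter would keep the entry warm, so the first interactive user after an idle period does not pay the slow path, which is the behavior the question describes. Note this caches your own results; it does not change whether the underlying SOQL is selective, so the indexing advice above still applies.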

Classic ASP Bottlenecks

I have 2 websites connecting to the same instance of MSSQL via classic ASP. Both websites are similar in nature and run similar queries.
One website chokes up every once in a while, while the other website is fine. This leads me to believe MSSQL is not the problem, otherwise I would think the bottleneck would occur in both websites simultaneously.
I've been trying to use Performance Monitor in Windows Server 2008 to locate the problem, but since everything is in aggregate form, it's hard to find the offending ASP page.
So I am looking for some troubleshooting tips...
Is there a simple way to check all recent ASP pages and see how long each ran for?
Is there a simple way to see live page requests as they happen?
I basically need to track down this offending code, but I am having a hard time seeing what is happening in real time through IIS.
If you use "W3C Extended Logging" as the log mode for your IIS logfiles, you can switch on a "time-taken" column, which gives you the execution time of each ASP page in milliseconds (by default, this column is disabled). See here for more details.
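Once "time-taken" is enabled, the logs can be mined for the slowest pages with a short script. A minimal Python sketch, assuming a simplified field layout (real W3C logs declare their actual column order in a "#Fields:" header line, which is what the parser reads):

```python
# Find the slowest ASP pages in a W3C extended log by their worst time-taken.
from collections import defaultdict

def slowest_pages(lines):
    fields = None
    timings = defaultdict(list)           # uri -> list of time-taken values
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]     # column order comes from the log itself
            continue
        if line.startswith("#") or not line.strip():
            continue                      # skip other directives / blank lines
        row = dict(zip(fields, line.split()))
        timings[row["cs-uri-stem"]].append(int(row["time-taken"]))
    # Worst single request per page, slowest first.
    return sorted(((max(v), uri) for uri, v in timings.items()), reverse=True)

sample = [
    "#Fields: date time cs-uri-stem time-taken",
    "2008-01-01 00:00:01 /slow.asp 5234",
    "2008-01-01 00:00:02 /fast.asp 42",
]
print(slowest_pages(sample))  # /slow.asp first
```

That directly answers the "check all recent ASP pages and see how long each ran" question without any live tracing.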
You may find that something in one application is taking a lock in the database (e.g. through a transaction) and then not releasing it, which causes the other app to timeout.
Check your code for transactions that are opened but never closed, and possibly consider setting up tracing on the SQL Server to log deadlocks.
Your best bet is to run SQL Server Profiler to see what procedure or SQL may be taking a long time to execute. You can also use Process Monitor to see any pages that may be taking a long time to finish execution, and finally don't forget to check your IIS logs.
Hope that helps

How to resolve Sybase table locks (VB6)?

I am not a great VB programmer, but I am tasked with maintaining/enhancing a VB6 desktop application that uses Sybase ASE as a back-end. This app has about 500 users.
Recently, I added functionality to this application which performs an additional insert/update to a single row in the database, key field being transaction number and the field is indexed. The table being updated generally has about 6000 records in it, as records are removed when transactions are completed. After deployment, the app worked fine for a day and a half before users were reporting slow performance.
Eventually, we traced the performance issue to a table lock in the database and had to roll back to the previous version of the app. The first day of use was on Monday, which is generally a very heavy day for system use, so I'm confused why the issue didn't appear on that day.
In the code that was in place, there is a call to start a Sybase transaction. Within the block between the BeginTrans and CommitTrans, there is a call to a DLL file that updates the database. I placed my new code in a class module in the DLL.
I'm confused as to why a single insert/update to a single row would cause such a problem, especially since the system had been working okay before the change. Is it possible I've exposed a larger problem here? Or that I just need to reconsider my approach?
Thanks ahead for anyone who has been in a similar situation and can offer advice.
It turns out that the culprit was a message box that appears within the scope of the BeginTrans and CommitTrans calls. The user with the message box would maintain a blocking lock on the database until they acknowledged the message. The solution was to move the message box outside of the aforementioned scope.
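The fix boils down to an ordering rule: never put user interaction inside a transaction, because the row locks are held for however long the user takes to respond. A language-agnostic sketch with Python stand-ins (not VB6/Sybase code; the function names are made up):

```python
# Any user interaction inside the transaction holds row locks until the user
# responds, blocking every other session on those rows.

events = []

def begin_trans():  events.append("begin")
def do_update():    events.append("update")   # the insert/update in the DLL
def commit_trans(): events.append("commit")   # locks are released here
def show_message(): events.append("msgbox")   # user may sit on this for minutes

# Broken ordering (what the code originally did):
#   begin_trans(); do_update(); show_message(); commit_trans()

# Fixed ordering: commit first, then talk to the user.
begin_trans()
do_update()
commit_trans()
show_message()
```

With the fixed ordering, the lock duration is bounded by the database work itself, not by human reaction time, which is why moving the message box resolved the blocking.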
I am not able to understand the complete picture without the SQL code that you are using.
Also, if it is a single insert or update, why are you using a transaction? Is it possible that many users will try to update the same row?
It would be helpful if you posted both the VB code and your SQL (with the query plan if possible). However, with the information we have, I would run "update statistics table_name" against the table to make sure that the query plan is up to date.
If you're sure that your code has to run within a transaction have you tried adding your own transaction block containing your SQL rather than using the one already there?