CRM Dynamics Performance / Timeout issue - sql-server

We have Microsoft Dynamics 365 (CRM) on premise version.
This instance is used by around 100 users, and there are 15+ custom applications written in .NET which consume the CRM web service to perform CRUD operations.
For fetching data, the custom applications use direct SQL SELECT statements; no web service calls exist across them. The data size is also not very large, and only a few plugins and workflows are defined in the CRM system. Everything had worked for a long time, but in the last 2-3 months we have started seeing performance issues: end users experience slowness, screens take longer than expected to load controls, or timeout errors occur.
The issue is not constant; it is intermittent, and it happens during business hours (PST/EST).
I wanted to know if there is any way to capture logs about this issue in CRM. Is there any place in CRM where I can find log information or error traces that would help me get to the bottom of it?

I think the old tools/diagnostics/diag.aspx page should still work on-prem.
Just append that path to your Dynamics URL, e.g.: https://myOrg.mydomain.com/tools/diagnostics/diag.aspx
When you click Run it will generate some stats about the network and form performance.
Dynamics also has diagnostic tracing capabilities built in (or at least it used to; I haven't tried it recently). This article has instructions on that.
Here's a summary (unconfirmed and untested):
On the CRM Server
Open registry (run regedit)
Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\MSCRM
Add the following values:
Name: TraceEnabled
Type: DWORD
Value: 1
Name: TraceDirectory
Type: String
Value: C:\CRMTrace
Name: TraceRefresh
Type: DWORD
Value: 99
Create the folder C:\CRMTrace
Reset IIS (run CMD as administrator and execute the "iisreset" command)
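If you prefer, the same settings can be written as a .reg file to import in one step. This is an untested sketch that just mirrors the value names and data listed above (note that dword data in .reg files is hexadecimal, so decimal 99 becomes 63):

```
Windows Registry Editor Version 5.00

; Enables CRM platform tracing (values mirror the summary above)
[HKEY_LOCAL_MACHINE\Software\Microsoft\MSCRM]
"TraceEnabled"=dword:00000001
"TraceDirectory"="C:\\CRMTrace"
"TraceRefresh"=dword:00000063
```

You would still need to create C:\CRMTrace and run iisreset afterwards.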
This article has more, including PowerShell instructions.
Back in the day there was a desktop app called the Diagnostics Tool that allowed you to turn the logging on and off.
Also, please note that if you accidentally leave the logging on, it can fill up the C: drive and crash the server!

Related

Host server bloating with numerous fake session files created in hundreds every minute on php(7.3) website

I'm experiencing an unusual problem with my PHP (7.3) website: it creates a huge number of unwanted session files on the server every minute (around 50 to 100 files). I noticed all of them have a fixed size of 125K or 0K in cPanel's file manager, and the inode count is growing uncontrolled into the thousands within hours and past a hundred thousand in a day. Meanwhile, the website really has small traffic of less than 3K visits a day, with the Google crawler on top of that. I'm denying all bad bots in .htaccess.
I'm able to control the situation with the help of a cron command that runs every six hours and cleans all session files older than 12 hours from /tmp. However, this isn't an ideal solution, as the fake session files keep being created in great numbers, eating all my server resources: RAM, processor, and most importantly storage, which is getting bloated and impacting overall site performance.
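For what it's worth, the clean-up the cron performs can be sketched in Python. This is a minimal sketch assuming PHP's default sess_* file naming under session.save_path, not the exact command the host runs:

```python
import os
import time

def purge_stale_sessions(directory, max_age_seconds=12 * 3600):
    """Delete PHP session files (sess_*) whose mtime is older than
    max_age_seconds. The sess_ prefix is PHP's default session file
    naming; adjust if your host configures it differently."""
    now = time.time()
    removed = []
    for name in os.listdir(directory):
        if not name.startswith("sess_"):
            continue  # leave non-session files alone
        path = os.path.join(directory, name)
        if now - os.path.getmtime(path) > max_age_seconds:
            os.remove(path)
            removed.append(name)
    return removed
```

Running something like this more frequently than every six hours would keep the inode count lower, though it still only treats the symptom.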
I opened many of these files to examine them but found they were not associated with any valid user; I add user id, name, and email to the session upon successful authentication. Even assuming a session is created for every visitor (without an account/login), it shouldn't go beyond 3K in a day, but the session count goes as high as 125,000+ in a single day. I couldn't figure out the glitch.
I've gone through relevant posts and made checks like adding IP and User-Agent to sessions to track suspicious server monitoring, bot crawling, or overwhelming proxy activity, but with no luck! I can also confirm from the timestamps that no human or crawler activity takes place when they are created. I can see files being created every single minute, without any break, throughout the day.
I haven't found any clue yet to the root cause of this weird behavior, and I would highly appreciate any sort of help troubleshooting it. Unfortunately, the server team was unable to help much beyond adding the clean-up cron. Pasting below the content of example session files:
0K Sized> favourites|a:0:{}LAST_ACTIVITY|i:1608871384
125K Sized> favourites|a:0:{}LAST_ACTIVITY|i:1608871395;empcontact|s:0:"";encryptedToken|s:40:"b881239480a324f621948029c0c02dc45ab4262a";
Valid Ex.File1> favourites|a:0:{}LAST_ACTIVITY|i:1608870991;applicant_email|s:26:"raju.mallxxxxx#gmail.com";applicant_phone|s:11:"09701300000";applicant|1;applicant_name|s:4:Raju;
Valid Ex.File2> favourites|a:0:{}LAST_ACTIVITY|i:1608919741;applicant_email|s:26:"raju.mallxxxxx#gmail.com";applicant_phone|s:11:"09701300000";IP|s:13:"13.126.144.95";UA|s:92:"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0 X-Middleton/1";applicant|N;applicant_name|N;
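To triage dumps like the ones above in bulk, a small heuristic can split them by whether they carry login data. This is only a sketch based on the samples shown; the assumption (mine) is that sessions from real logins always contain the applicant_email key, while the junk files never do:

```python
def looks_authenticated(payload: str) -> bool:
    """Return True if the serialized PHP session payload carries login
    data. Heuristic: valid sessions in this app set applicant_email at
    authentication; the junk files only carry favourites, LAST_ACTIVITY,
    and sometimes an empty empcontact/encryptedToken pair."""
    return "applicant_email|" in payload

def triage(payloads):
    """Split session payloads into (valid, junk) lists."""
    valid = [p for p in payloads if looks_authenticated(p)]
    junk = [p for p in payloads if not looks_authenticated(p)]
    return valid, junk
```

Counting the junk bucket per hour against the access log would show whether the creation rate correlates with any request pattern at all.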
We found that the issue was triggered by the hosting server's PHP version change from 5.6 to 7.3. However, we noticed the unwanted session files were not created on PHP 7.0; it's the same code base tested against all three versions. Posting this as it may help others facing a similar issue after a PHP version change.

Get (Exception from HRESULT: 0x80030002 (STG_E_FILENOTFOUND)) when running a new SSRS report

I have created two SSRS reports within a Report Project using VS 2019 and they work fine.
When I create a third report using the same procedure and attempt to run it, I get "Exception from HRESULT: 0x80030002 (STG_E_FILENOTFOUND)".
Can someone suggest what is causing this problem?
A bit late to the party, but I had the same issue.
For me, this was caused by Windows Security Ransomware Protection which was preventing PreviewProcessingService.exe from running.
You can allow PreviewProcessingService.exe, and anything else you don't want blocked, by following these steps.
Open Windows Security (Just search for Windows Security from the Start Menu)
Go to: Virus & threat protection, then Manage ransomware protection
Go to: Allow an app through controlled folder access
Click on Add an allowed app then select Recently blocked apps
Also worth noting: I've had multiple services blocked by this (npm, git, etc.). If something isn't working as expected, it's worth checking whether it's being blocked.
Hopefully this helps you out.

Login on Microsoft Visual Fox Pro from BAT file

At work we use a program based on MS Visual FoxPro. Even though everybody uses the same password, and the information inside the program is not very delicate, I haven't been able to get the password removed, simply because the developers want money to do the job and my boss doesn't want to pay.
I also use a BAT file to open my most used programs and websites, which are pretty much all on auto-login. Except the MS Visual FoxPro program.
I found a BAT script somewhere that waits a certain amount of time and then mimics keyboard entries. But for some reason it doesn't seem to work on Win10.
So I am wondering if anybody knows a way to automatically send the password via the BAT file?
The auto-login script I mention above was found here: Automatically open a browser and login to a site?
We use AutoHotkey to automate certain tasks with our own in-house VFP application. It works well, and it supports Windows 10 (though we only use it on Windows 7 and Server 2008 here).
So you have an application developed with MS FoxPro (one of its various versions).
I cannot speak to how the developers built your application; I can only speak for the various VFP applications that I have written.
When I created applications that asked for a username/password, I compared the input values against VFP data table field values that were stored in an encrypted manner, so that a casual investigator could not easily determine them.
That approach assumes users were allowed to create new username/password combinations, thereby requiring support for dynamic entries.
However the application developers could have done it in a variety of ways:
1. Store encrypted Username/Password values into a local VFP data table.
2. 'Hard code' the Username/Password into the application code prior to compilation (most definitely NOT preferred)
3. Run the input Username/Password against a Web Service where these values are stored on THEIR central system.
With those various possibilities in mind, which make it more difficult to tell you which way to go, I'd recommend considering the following:
If an issue is BUSINESS CRITICAL, don't quibble over the Dollars.

How can I find why some classic asp pages randomly take a real long time to execute?

I'm working on a rather large classic asp / SQL Server application.
A new version was rolled out a few months ago with a lot of new features, and I must have a very nasty bug somewhere: some very basic pages randomly take a very long time to execute.
A few clues :
It isn't the database: when I run the query profiler, it doesn't detect any long-running query
When I launch IIS Diagnostic tools, reqviewer shows that the request is in state "processing"
This can happen on ANY page
I can't reproduce it easily; it's completely random.
To give an idea of "a very long time": this morning I had a page take more than 5 minutes to execute, when it normally should be returned to the client in less than 100 ms.
The application can handle rather large uploads and downloads of files (up to 2 GB in size). This is also handled with a classic ASP script, using SoftArtisan FileUp. I don't think it can cause the problem, though; we've had these uploads for quite a while now.
I've had the problem on two separate servers (in two separate locations, with different sets of data). One is running the application with good ol' SQL Server 2000 and the other runs SQL Server 2005. The web server is IIS 6 in both cases.
Any idea what the problem is, or how to solve that kind of problem?
Thanks.
Sebastien
Edit :
The problem came from memory fragmentation. Some ASP pages were used to download files from the server, with file sizes ranging from a few KB to more than 2 GB. These variations in size induced memory fragmentation. The ASP pages could also take quite some time to execute (the time for the user to download the file, minus what is cached at IIS's level), which is not really standard for server pages that should execute quickly.
This is what I did to improve things :
Put all the download logic in a single ASP page with session state turned off
That allowed me to put that ASP page in a dedicated application pool that could be recycled every so often (downloads would no longer disturb the rest of the application)
Turn on the LFH (Low Fragmentation Heap), which is not on by default on Windows 2003, in order to reduce memory fragmentation
References for LFH :
http://msdn.microsoft.com/en-us/library/aa366750(v=vs.85).aspx
Link (there is a dll there that you can use to turn on LFH, but the article is in French. You'll have to learn our beautiful language now!)
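For reference, the session-state part of the first step above is a one-line processing directive at the top of the download page (a sketch using classic ASP's @EnableSessionState attribute; the rest of the page is omitted):

```
<%@ Language=VBScript EnableSessionState=False %>
```

Without it, ASP serializes requests per session, so a long download would block the same user's other requests.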
I noticed the same thing on a classic ASP + Ajax application that I worked on. Using Timer, I timed the page load at 153 milliseconds, but the Firebug waterfall chart randomly says 3.5 seconds. The Timer output is in the response, and the waterfall chart claims that the time is Firefox waiting for a response from the server. Because the waterfall chart also shows the response, I can compare it to the Timer output, and there's a huge discrepancy 'every so often'.
Can you establish whether this is a problem for all pages or a common subset of pages?
If it's a subset, examine what these pages have in common; for example, maybe they all use a specific COM DLL that other pages don't.
Does this problem affect multiple clients or just a few?
In other words, is there an issue with a specific browser/OS version?
Is this public or intranet?
Can you reproduce the problem from a client you own?
Is there any chance there are full-text search queries running on SQL Server?
Because if so, and if SQL Server has no access to the internet, it may cause a 45-second delay every few hours or so when it tries to verify certificates (though this does not apply to SQL Server 2000).
For a detailed explanation of what I'm referring to, read this.
Are any other apps running on your web server? If so, is your problematic app in the same app pool as any of them? If so, try creating a dedicated app pool for it. Maybe one of the other apps is having a problem and is adversely affecting yours.
One thing to watch out for: if you have server-side debugging turned on in IIS, the web server will run in single-threaded mode.
So if you try to load a page and someone else has hit that URL at the same time, you will be queued up behind them. It will seem like pages take a long time to load, but it's simply because the server is doling out page requests in a single-file line, and sometimes you aren't at the front of the line.
You may have turned this on for debugging and forgot to turn it off for production.

Moss 2007 SSP Error "Search application '{0}' is not ready."

I'm trying to fix a broken SSP on a MOSS 2007 site. The problem I am running into manifests itself as follows...
In the SSP "Search Settings" page I get this message:
The search service is currently offline. Visit the Services on Server page in SharePoint Central Administration to verify whether the service is enabled. This might also be because an indexer move is in progress.
In the SSP "User Profiles and Properties" page I get this in red at the top:
An error has occurred while accessing the SQL Server database or the Office SharePoint Server Search service. If this is the first time you have seen this message, try again later. If this problem persists, contact your administrator.
I have contacted my administrator, but that is currently me and it turns out I don't know any more than I do about the problem.
In the Event Log I get the following message:
The Execute method of job definition Microsoft.Office.Server.Search.Administration.IndexingScheduleJobDefinition (ID 8714973c-0514-4e1a-be01-e1fe8bc01a18) threw an exception. More information is included below.
Search application '{0}' is not ready.
The Event ID is 6398, which isn't as useful as I had hoped, but I do find the message interesting in that it looks like a String.Format call where the substituted value is missing. Unfortunately, it is not interesting in the sense of telling me how to fix the problem.
Sharepoint's own log offers this:
UserProfileConfigManager.GetImportStatus() failed to obtain crawl status: System.InvalidOperationException: Search application '{0}' is not ready.
at Microsoft.Office.Server.Search.Administration.SearchApi..ctor(WellKnownSearchCatalogs catalog, SearchSharedApplication application)
at Microsoft.Office.Server.Search.Administration.SearchSharedApplication.get_SearchApi()
at Microsoft.Office.Server.UserProfiles.UserProfileConfigManager.c__DisplayClass3.b__0()
at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)
I have tried stopping and starting the search service, removing and re-adding it from the administration panel, and pretty much every other thing I could find to do with Sharepoint's own administrative tools, which leads me to believe the problem here may be database or permissions related.
There was a second SSP set up on the same server, which I think may have been part of the original cause of the problem, but removing it has made no difference.
Maybe you can make sense of this - I'm new to SharePoint, so it makes little sense to me. It's a rough machine translation of a Spanish forum post from someone who had the same problem:
"After reading endless comments, what I did to solve the problem was to create a new shared service, then assign the other applications to it and set it as the default. It starts the import of profiles and then the audiences. Of course, I first did this on a test site in case something happened; later I deleted the first shared service, and finally the error was solved. The snapshot of the application's configuration registry has been stored correctly in the database. Context: application 'SharedServices2'"
Translation of:
http://tecnologiainformaticait.wordpress.com/2008/11/21/error-sharepoint-search-application-0-is-not-ready/
Personally, I'd try the msdn forums.
So it seems that the problem was a corrupted Shared Service Provider (no idea how it came about, but there you go), and the only working solution I could find was to delete it and start again.
I suspect there may have been a more elegant fix by changing something in the database somewhere, but I don't know the SharePoint database model well enough to find it in the time available.
As an additional warning: if you do delete your SSP, you may find that it doesn't delete cleanly, leaving a bunch of SQL Server tasks that still try to run against an empty database. This can cause problems if you have anything else running on the same database server.
Same problem here. My DBA correctly deleted the search database, and it still doesn't work.
I'll post the solution on my blog when I find something.
For the moment, we have opened a call with Microsoft.
1- Created a new SSP
2- In Central Admin, clicked on Shared Services Administration
3- Clicked on "Change Associations" and moved all the web apps to the new SSP
4- Chose a new search_DB and selected the right server to do the indexing, if you are in a farm
Problems created by this operation: we noticed that we lost statistics information for our sites.
If you tried this solution, give us your feedback too.
Thanks.
http://dejacquelot.blogspot.com/