My method executes a lot of asynchronous SQL requests, and I constantly get connection timeout exceptions. What else can I do besides increasing the connection timeout value and adding proper indexing? I mean on the database side, not the code side; I can't change the code. The application runs fine on other servers; only I experience these timeout exceptions, on my PC with a local MS SQL Server 2008 R2 database (which is on the same PC). So I think this is clearly a performance issue, since the connection timeout is already set to 3 minutes. Maybe there is something I can change on the server? Maybe there is a limit on the number of simultaneous requests? Each of my requests needs clearly less than 3 minutes, but there are about 26,000 of them running asynchronously.
I've run Process Monitor, and I see that when my code starts, SQL Server eventually consumes 200 MB of RAM and takes up about half of the CPU time. But I still have 1 GB of RAM free, so this is not a memory problem.
I think the number of connections could be the cause. Make sure you close each connection properly, or try to reduce how many you open. You could also try named pipes, which can avoid some of the limitations of ordinary TCP connections.
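A quick way to see how many connections are actually open is to query the standard connection DMVs; this is generic T-SQL, nothing specific to your app:

-- Count the open connections on the instance
SELECT COUNT(*) AS open_connections FROM sys.dm_exec_connections;

-- List them with the owning session, to spot connections the app never closed
SELECT c.session_id, s.program_name, s.status, c.connect_time
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id
ORDER BY c.connect_time;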
At approximately 2021-06-14 02:00 (UTC) my website began performing very poorly (some requests taking >10s). An analysis of the logs shows that the slowdown only affects routes which establish a DB connection. When I look at the DB, I can see that I am experiencing high Network I/O (ASYNC_NETWORK_IO) wait times.
My understanding is that this can point to a couple of problems: either there is an issue with the network or, more likely, there is a problem with the application consuming the DB results too slowly.
https://www.sqlshack.com/reducing-sql-server-async_network_io-wait-type/
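For reference, this is roughly the query I'm using to see the wait; it's just the standard wait-stats DMV:

-- Accumulated ASYNC_NETWORK_IO waits since the last service restart
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';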
Unfortunately, when I investigate any further, both the network and the client seem to be fine.
Problem with the network?
- ping < 1 ms
- utilizing only a small fraction of the 10GbE bandwidth
- auto-negotiation is detecting the network bandwidth properly

Web server / application processing results too slowly?
- CPU usage is fine
- no code changes when the slowdown began
- the request queue emptied quickly
Is there any other explanation for why I might see a sudden increase in ASYNC_NETWORK_IO wait time?
Update
Most resources (like the one I linked above) say that a problem with SQL Server itself is unlikely. However, when I switched my application's connection string to point at my test DB (located on a different server), site performance went back to normal, and no ASYNC_NETWORK_IO wait times appeared on the test DB.
If the issue were with the web server or the client application, I would have expected the connection to the test DB to show high ASYNC_NETWORK_IO wait times as well.
Does anyone have any ideas about what could be causing this from the SQL Server side?
I have a service hosted on IIS that accepts submitted information. When it is called by thousands of users on a mobile app at the same time, the server gets stuck and cannot respond to requests.
At that point, Task Manager shows SQL Server at high utilization, roughly 70-75% of both CPU and memory.
Because of this we have to restart SQL Server every morning and evening. (I know this is a bad idea for performance and statistics, but otherwise the server hangs.)
I built the API using the .NET Framework and SQL Server 2012.
Any idea what I can do to handle this issue?
The following approaches can usually help with this kind of problem.
Optimize your application code, especially frequent opening and closing of database connections. Each new connection consumes CPU, so close idle connections explicitly instead of waiting for the GC to reclaim them. For data queries, make sure they can use indexes, especially when there is a lot of data. I think this applies to you, since you will have a large number of users and a lot of user data.
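For example, an index like this keeps a frequent lookup from scanning the whole table (the table and column names here are purely illustrative, not from your schema):

-- Illustrative only: index the column the hot query filters on
CREATE NONCLUSTERED INDEX IX_Submissions_UserId
ON dbo.Submissions (UserId)
INCLUDE (SubmittedAt);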
Because of how SQL Server is designed, by default it will consume as much of the server's physical memory as it can, which can degrade performance for everything else on the machine. You can set max server memory to limit the amount SQL Server allocates to the buffer pool, which is usually the largest memory consumer.
Effects of min and max server memory
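For example, to cap the buffer pool (4096 MB is only a placeholder value; size it to leave room for IIS and the OS on the same box):

-- Placeholder value: leave enough memory for IIS and the OS
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;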
As Lex Li said, moving the database to a separate machine is a good idea, especially when there is a lot of user data and data processing. With IIS and SQL Server on the same machine, the server not only has to handle application requests and responses but also give SQL Server the resources to process queries, so it is easy to hit a performance bottleneck.
I am using a Postgres database as the primary database for a Rails application. The current configuration is: CPU - 8 cores, RAM - 32 GB, hosted on AWS. However, I need to scale this server because I am hitting some serious bottlenecks.
I need to be able to handle 10,000 parallel requests to this server. Even if that is not achievable, I at least need to know the maximum number this database can handle. The requests include complex SELECT and UPDATE queries.
I have changed settings such as max_connections to 10000, and in rails_config/database.yml the connection pool is set to 10000.
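For reference, this is how I'm checking the configured limit and the connections actually in use (standard Postgres views):

-- Configured limit and current connections grouped by state
SHOW max_connections;
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;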
I am running Rails code in 50 parallel threads. The code runs fine for a while until I receive this error:
PG::ConnectionBad: PQsocket() can't get socket descriptor
After this, when I log into the Postgres server and try to execute something, I get this:
db_production=# select id from product_tbl limit 5;
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
db_production=#
After this, restarting the Rails code works fine again, but only for a while until I receive the same error.
I looked at the CPU usage while the threads are running. Some observations:
One SELECT query uses all of one core (100% usage) while the remaining cores sit idle at 0%. This can be inferred because when only one process or query is running, only one CPU shows 95-100% usage. Does this mean I can only run 8 queries or threads in parallel, since there are only 8 cores?
RAM is underutilized: even when 7 of the 8 queries or threads are running in parallel, only around 3 GB of RAM is used. Can I somehow increase this and accommodate more parallel queries?
For every running thread, the usage looks like this: virtual memory 751 MB, resident memory 154-158 MB, shared memory 149-150 MB, CPU 60-100%, MEM 0.5-0.6%. Is everything right here, or do I need to tweak some settings to use the resources more effectively?
What configuration settings do I need to change in order to scale the database? What is the ideal number of threads such that I at least don't get the connection error? What are the checks and steps to scale the database to handle a large number of parallel connections? How can I make full use of the available resources?
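For completeness, these are the memory settings I'm looking at; whether and how far to raise them is part of my question (ALTER SYSTEM assumes Postgres 9.4+):

-- Current values
SHOW shared_buffers;
SHOW work_mem;
-- Raising them; shared_buffers requires a server restart to take effect,
-- and work_mem applies per sort/hash per query, so it multiplies under load
ALTER SYSTEM SET shared_buffers = '8GB';
ALTER SYSTEM SET work_mem = '64MB';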
I've got a SQL Server 2008 server that frequently (multiple times throughout the day) reports:
"SQL Server has encountered 64357 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file..."
I've noticed in Resource Monitor that, when filtering to the sqlserver.exe process, c:\pagefile.sys shows up fairly often.
The server currently has 40 MB of memory free and 260 MB available. SQL Server's memory limit is set to unlimited, and it is using most of the 32 GB on the server. It's a production server I've inherited where there isn't much downtime, so I haven't been able to change that.
I assume SQL Server is running out of memory and spilling to the page file?
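For reference, this is the DMV I'm checking to confirm the memory state:

-- OS-level memory as SQL Server sees it
SELECT total_physical_memory_kb / 1024 AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;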
"SQL Server has encountered 64357 occurrence(s) of I/O requests taking longer than 15 seconds"
does not indicate a memory problem. It indicates a problem with the disks used to store your databases.
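You can confirm that from the per-file I/O statistics; sustained average latencies in the hundreds of milliseconds (or worse) point at the storage subsystem:

-- Average read/write latency per database file
SELECT DB_NAME(vfs.database_id) AS db,
       mf.physical_name,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id;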
It seems so. Add more RAM to the server and it should be fixed.
We have a dev server running C# and talking to SQL server on the same machine.
We have another server running the same code and talking to SQL server on another machine.
A job does 60,000 reads (that is, it calls a stored procedure 60,000 times; each call returns one row).
The job runs in 1/40th of the time on the first server compared to the second.
We're already looking at the 'internal' differences between the two SQL Server instances (fragmentation, tempdb, memory, etc.), but what's a good way to determine how much slower the second configuration is simply because it has to go over the network?
[Rather confusingly, I found a 'SQL Server Ping' tool, but it doesn't actually attempt any timing measurement, which, as far as I can see, is what we need.]
Open SQL Server Management Studio on the remote machine and start a new query. Click Query > Include Client Statistics, then run your stored procedure. The Client Statistics tab of the results shows basic information about how many packets were sent back and forth over the network. My guess is that for one read you're not going to see much overhead.
To get a better idea, I'd try a plain SELECT of 60,000 rows (since you said it returns 60,000 rows one by one) over the network from your remote machine. That doesn't measure the stored-procedure overhead, but it gives you a quick seat-of-the-pants idea of the network speed between the machines.
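You can also get rough timings from T-SQL itself; the procedure name and parameter below are placeholders for your own:

-- Prints parse/compile and elapsed execution times to the Messages tab,
-- so the same call can be compared on both servers
SET STATISTICS TIME ON;
EXEC dbo.YourProcedure @Id = 1;
SET STATISTICS TIME OFF;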
SQL Server ships with the Profiler utility, which will tell you the execution time of your query on each of your SQL Server instances. Note any discrepancies. Whatever time (in the ExecutionTime column) cannot be accounted for there is transmission time... or client display time. Perhaps your client machine takes longer to render or compute the results.
What results are you expecting? Running everything on one machine vs. over a network will certainly give you different timings. Your biggest timing difference will be the network throughput, since you have to communicate with the networked server in both directions.
If you can SET NOCOUNT ON, that will help by reducing network traffic.
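For example (the procedure and table names here are placeholders):

-- SET NOCOUNT ON suppresses the "(1 row(s) affected)" message
-- sent after every statement, which adds up over 60,000 calls
CREATE PROCEDURE dbo.GetRow @Id INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT Col1 FROM dbo.SomeTable WHERE Id = @Id;
END;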