After moving to a new server, we are experiencing extremely slow execution times for a single query. When we analyze the sessions during its runtime, we see hundreds (sometimes over 1,000) open tasks, all with the same session ID, apparently blocking themselves. Here is an extract:
+----------------------+------------+-----------------+------------------+-----------+--------------------+-----------------------+---------------------+--------------------------+--------------------------------------------------------------------+
| waiting_task_address | session_id | exec_context_id | wait_duration_ms | wait_type | resource_address | blocking_task_address | blocking_session_id | blocking_exec_context_id | resource_description |
+----------------------+------------+-----------------+------------------+-----------+--------------------+-----------------------+---------------------+--------------------------+--------------------------------------------------------------------+
| 0x00000005A3B83468 | 161 | 19 | 121058 | CXPACKET | 0x0000000EB9B9C830 | 0x00000010BF029C28 | 161 | 3 | exchangeEvent id=Pipe8a2d88200 WaitType=e_waitPipeNewRow nodeId=12 |
| 0x00000010BE003C28 | 161 | 10 | 121079 | CXPACKET | 0x00000008A1A0E9C0 | 0x00000005734964E8 | 161 | 93 | exchangeEvent id=Pipe79fcc6200 WaitType=e_waitPipeNewRow nodeId=8 |
| 0x000000050BFA1088 | 161 | 42 | 121092 | CXPACKET | 0x000000058C7C12D0 | 0x00000010B484A8C8 | 161 | 27 | exchangeEvent id=Pipe5647e2110 WaitType=e_waitPipeNewRow nodeId=15 |
| 0x0000000DFB199088 | 161 | 20 | 121094 | CXPACKET | 0x0000000E77A4DCC0 | 0x00000005A3B82CA8 | 161 | 44 | exchangeEvent id=Pipe915578ed0 WaitType=e_waitPipeGetRow nodeId=15 |
| 0x0000000E501A64E8 | 161 | 66 | 121094 | CXPACKET | 0x000000088DB9DCB0 | 0x0000000E44591C28 | 161 | 81 | exchangeEvent id=Pipe79fcc8d00 WaitType=e_waitPipeGetRow nodeId=5 |
| 0x0000000E501A64E8 | 161 | 66 | 121094 | CXPACKET | 0x000000088DB9DCB0 | 0x00000003714868C8 | 161 | 82 | exchangeEvent id=Pipe79fcc8d00 WaitType=e_waitPipeGetRow nodeId=5 |
| 0x0000000E501A64E8 | 161 | 66 | 121094 | CXPACKET | 0x000000088DB9DCB0 | 0x0000000DC854B848 | 161 | 83 | exchangeEvent id=Pipe79fcc8d00 WaitType=e_waitPipeGetRow nodeId=5 |
| 0x0000000E501A64E8 | 161 | 66 | 121094 | CXPACKET | 0x000000088DB9DCB0 | 0x00000010B3A25848 | 161 | 84 | exchangeEvent id=Pipe79fcc8d00 WaitType=e_waitPipeGetRow nodeId=5 |
| 0x0000000E501A64E8 | 161 | 66 | 121094 | CXPACKET | 0x000000088DB9DCB0 | 0x0000000F39DFA4E8 | 161 | 85 | exchangeEvent id=Pipe79fcc8d00 WaitType=e_waitPipeGetRow nodeId=5 |
+----------------------+------------+-----------------+------------------+-----------+--------------------+-----------------------+---------------------+--------------------------+--------------------------------------------------------------------+
I am not sure what causes this. The problem didn't exist before the server move, and for a short period the query even executed quickly on the current server; now it is slow again.
Have you compared execution plans? If you have upgraded SQL Server to a newer version, have you updated statistics? If these are different servers with the same database, is the data the same? Is the indexing the same? I used to run a replication topology and would delete unused indexes on subscribers.
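For example, a statistics refresh after an upgrade can be as simple as the following (a minimal sketch; dbo.YourTable is a placeholder for one of the tables in the query):

EXEC sp_updatestats;                            -- sampled refresh across the whole database
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;  -- full rescan of one table's statistics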
The self-blocking isn't really blocking: CXPACKET waits just show the threads of a parallel query waiting on one another. I think this wait type was introduced in SQL Server 2005, and it indicates slowness rather than a true block.
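For what it's worth, the extract above has the column layout of sys.dm_os_waiting_tasks, so a query like this should reproduce it for the session in question (161 is taken from the extract):

SELECT waiting_task_address, session_id, exec_context_id, wait_duration_ms,
       wait_type, resource_address, blocking_task_address,
       blocking_session_id, blocking_exec_context_id, resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id = 161;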
I have a table that keeps daily inventory information for products in stores. It looks like this:
|------------|-----------|---------|-----------------|
| Date | ProductId | StoreId | InventoryOnHand |
|------------|-----------|---------|-----------------|
| 2017-10-11 | 348 | 121 | 2 |
| 2017-10-11 | 110 | 200 | 0 |
| 2017-10-11 | 254 | 587 | -2 |
| 2017-10-12 | 311 | 875 | 26 |
| 2017-10-12 | 954 | 364 | 15 |
| 2017-10-12 | 348 | 121 | 0 |
| 2017-10-12 | 441 | 121 | 7 |
| . | . | . | . |
| . | . | . | . |
| . | . | . | . |
|------------|-----------|---------|-----------------|
Most of my queries have a condition like WHERE InventoryOnHand > 0, and I need to speed these queries up.
Therefore, I want to build an index that separates the values in column InventoryOnHand by whether or not they are greater than 0.
A filtered index does not solve my problem, because with a filtered index every value greater than 0 is indexed, and this increases the index size. I only need to know whether a value is greater than 0 or not.
In other words, I want to build an index that only works when the condition is InventoryOnHand > 0. Is there any way to do this in SQL Server?
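For reference, the filtered index being ruled out above would look something like this (column names are from the question; the table name dbo.Inventory is an assumption):

CREATE NONCLUSTERED INDEX IX_Inventory_InStock
    ON dbo.Inventory ([Date], ProductId, StoreId)
    INCLUDE (InventoryOnHand)
    WHERE InventoryOnHand > 0;

Such an index stores one index row per qualifying table row, which is exactly the size growth described above.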
I need to create two database constraints, each of which ties together two different tables:
1. The total score of the four quarters must equal the total score of the game the quarters belong to.
2. The total points of all the players must equal their team's score for that game.
Here is what my tables look like.
quarter table
+------+--------+--------+--------+
| gNum | Period | hScore | aScore |
+------+--------+--------+--------+
| 1 | 1 | 13 | 18 |
| 1 | 2 | 12 | 19 |
| 1 | 3 | 23 | 31 |
| 1 | 4 | 32 | 18 |
| | | Total | Total |
| | | 80 | 86 |
+------+--------+--------+--------+
Game Table
+-----+--------+--------+--------+
| gID | hScore | lScore | tScore |
+-----+--------+--------+--------+
| 1 | 86 | 80 | 166 |
+-----+--------+--------+--------+
Player Table
+-----+------+--------+--------+
| pID | gNum | Period | Points |
+-----+------+--------+--------+
| 1 | 1 | 1 | 20 |
| | | 2 | 20 |
| | | 3 | 20 |
| | | 4 | 20 |
+-----+------+--------+--------+
So essentially I think I need to use CHECK constraints to make sure that the players' points equal the score of their team (i.e. hScore or aScore), and also that hScore and aScore add up to the total score in the Game table.
I was thinking of creating a foreign key on one of the tables and setting up constraints on that. Would this be the best way of going about it?
Thanks
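One common pattern for the kind of cross-table CHECK described above is a scalar function wrapped in a constraint. A sketch for constraint 1, assuming SQL Server and the table/column names from the question:

CREATE FUNCTION dbo.fnQuarterTotal (@gNum int)
RETURNS int
AS
BEGIN
    -- Sum both teams' scores across all quarters of one game
    RETURN (SELECT SUM(hScore + aScore) FROM dbo.Quarter WHERE gNum = @gNum);
END;
GO
ALTER TABLE dbo.Game
ADD CONSTRAINT CK_Game_TotalMatchesQuarters
CHECK (tScore = dbo.fnQuarterTotal(gID));

Note that a CHECK on Game only fires when a Game row is inserted or updated; changes to Quarter rows would not re-validate it, so full enforcement generally needs triggers on both tables.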
I'm trying to migrate a 30GB database from one server to another.
The short story is that at a certain point in the process, the time it takes to import records spikes sharply. The following output comes from using the SOURCE command to import a chunk of 500k records (out of roughly 25-30 million across the whole database) that was exported as an SQL file and tunnelled over SSH to the new server:
...
Query OK, 2871 rows affected (0.73 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2870 rows affected (0.98 sec)
Records: 2870 Duplicates: 0 Warnings: 0
Query OK, 2865 rows affected (0.80 sec)
Records: 2865 Duplicates: 0 Warnings: 0
Query OK, 2871 rows affected (0.87 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (2.60 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2866 rows affected (7.53 sec)
Records: 2866 Duplicates: 0 Warnings: 0
Query OK, 2879 rows affected (8.70 sec)
Records: 2879 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (7.53 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2873 rows affected (10.06 sec)
Records: 2873 Duplicates: 0 Warnings: 0
...
The spikes eventually average out to 16-18 seconds per ~2,800 rows affected. Granted, I don't usually use SOURCE for a large import, but for the sake of showing legitimate output I used it here to pin down when the spikes happen. Using the mysql command or mysqlimport yields the same results, and even piping the results directly into the new database, rather than going through an SQL file, produces these spikes.
As far as I can tell, this happens after a certain number of records have been inserted into a table. The first time I boot up the server and import a chunk of that size, it goes through just fine; the spikes only begin after roughly that much has been handled. I can't pin the threshold down because I haven't replicated the issue consistently enough to conclude anything. There are ~20 tables with under 500,000 records each that all imported fine when loaded through a single command, so this seems to happen only to tables with an excessive amount of data.

The solutions I've come across so far seem to address only the natural slowdown that occurs as an import runs over time. In my case the expected outcome was that, by the end of importing 500k records, each ~2,800-row batch would take 2-3 seconds, whereas those questions were about it not taking that long at all by the end. The output above comes from a single SugarCRM table called 'campaign_log', which has ~9 million records. I was able to import it in chunks of 500k back onto the old server I'm migrating off of without these spikes occurring, so I assume this has to do with my new server's configuration.

Another oddity: whenever these spikes occur, the table being imported into displays its record count strangely. I know InnoDB gives count estimates, but the number is not preceded by the ~ that indicates an estimate; it is usually accurate, and refreshing the table does not change the displayed amount (this is based on what PHPMyAdmin reports).
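For reference on that last point, the estimated and exact counts can be compared directly (campaign_log is the table mentioned above):

SHOW TABLE STATUS LIKE 'campaign_log';  -- the Rows column is an InnoDB estimate
SELECT COUNT(*) FROM campaign_log;      -- exact, but scans the whole table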
Here are the relevant settings and commands from the new server:
InnoDB system variables:
+---------------------------------+------------------------+
| Variable_name | Value |
+---------------------------------+------------------------+
| have_innodb | YES |
| ignore_builtin_innodb | OFF |
| innodb_adaptive_flushing | ON |
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 8388608 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_instances | 1 |
| innodb_buffer_pool_size | 8589934592 |
| innodb_change_buffering | all |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_format | Antelope |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | fsync |
| innodb_force_load_corrupted | OFF |
| innodb_force_recovery | 0 |
| innodb_io_capacity | 200 |
| innodb_large_prefix | OFF |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 8388608 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 75 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_old_blocks_pct | 37 |
| innodb_old_blocks_time | 0 |
| innodb_open_files | 300 |
| innodb_print_all_deadlocks | OFF |
| innodb_purge_batch_size | 20 |
| innodb_purge_threads | 1 |
| innodb_random_read_ahead | OFF |
| innodb_read_ahead_threshold | 56 |
| innodb_read_io_threads | 8 |
| innodb_replication_delay | 0 |
| innodb_rollback_on_timeout | OFF |
| innodb_rollback_segments | 128 |
| innodb_spin_wait_delay | 6 |
| innodb_stats_method | nulls_equal |
| innodb_stats_on_metadata | ON |
| innodb_stats_sample_pages | 8 |
| innodb_strict_mode | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 30 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 0 |
| innodb_thread_sleep_delay | 10000 |
| innodb_use_native_aio | ON |
| innodb_use_sys_malloc | ON |
| innodb_version | 5.5.39 |
| innodb_write_io_threads | 8 |
+---------------------------------+------------------------+
System Specs:
Intel Xeon E5-2680 v2 (Ivy Bridge) 8 Processors
15GB RAM
2x80 SSDs
CMD to Export:
mysqldump -u <olduser> -p<oldpw> <olddb> <table> --verbose --disable-keys --opt | ssh -i <privatekey> <newserver> "cat > <nameoffile>"
Thank you for any assistance. Let me know if there's any other information I can provide.
I figured it out: I increased innodb_log_file_size from 5MB to 1024MB. This significantly increased the import rate (it never went above 1 second per ~3,000 rows) and it also fixed the spikes. There were only two spikes in all the records I imported, and after each one the timing immediately returned to under 1 second.
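For anyone applying the same change on MySQL 5.5 (the innodb_version above): the redo log cannot simply be resized in place. InnoDB needs a clean shutdown, and the old log files must be moved away before it will start with the new size. A sketch of the steps, assuming a default Linux layout and config path:

# /etc/my.cnf, under [mysqld]
innodb_log_file_size = 1024M

# shell steps
mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"                  # force a full clean shutdown
service mysql stop
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/   # back up rather than delete
service mysql start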
I have a table in an Access database, as below:
Name | Range | X | Y | Z
------------------------------
A | 100-200 | 1 | 2 | 3
A | 200-300 | 4 | 5 | 6
B | 100-200 | 10 | 11 | 12
B | 200-300 | 13 | 14 | 15
C | 200-300 | 16 | 17 | 18
C | 300-400 | 19 | 20 | 21
I have been trying to write a query that converts this into the following format:
Name | X_100_200 | Y_100_200 | Z_100_200 | X_200_300 | Y_200_300 | Z_200_300 | X_300_400 | Y_300_400 | Z_300_400
A | 1 | 2 | 3 | 4 | 5 | 6 | | |
B | 10 | 11 | 12 | 13 | 14 | 15 | | |
C | | | | 16 | 17 | 18 | 19 | 20 | 21
After trying for a while, the best method I could come up with is to write a bunch of short queries that select the data for each Range, and then put them back together using a UNION query. The problem is that in this example I have shown 3 columns (X, Y and Z), but I actually have many more, and Access is starting to strain under the amount of SQL I have come up with.
Is there a better way to achieve this?
The answer was simple: just use Access Pivotview. I'm finding it hard to export the results to Excel, though.
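For completeness, the same reshaping can also be done in plain Access SQL with one crosstab over a single unpivoting union query, instead of one query per Range. A sketch, where tblData and qryUnpivot are assumed names and [Name]/[Range] are bracketed because they clash with reserved words (Jet SQL has no comment syntax, so the steps are described in prose):

Save this as a query named qryUnpivot:

SELECT [Name], "X_" & Replace([Range], "-", "_") AS ColName, X AS Val FROM tblData
UNION ALL
SELECT [Name], "Y_" & Replace([Range], "-", "_"), Y FROM tblData
UNION ALL
SELECT [Name], "Z_" & Replace([Range], "-", "_"), Z FROM tblData;

Then run a crosstab over it:

TRANSFORM First(Val)
SELECT [Name]
FROM qryUnpivot
GROUP BY [Name]
PIVOT ColName;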