Darknet doesn't use P5000 GPU with CUDA

I run this command:
./darknet detector train data/obj.data cfg/yolov3_training.cfg back/last_4_4_7pm.weights /back -dont_show -gpus 0
but the GPU is not being used and stays at 0% utilization.
Here is how I configure the Makefile:
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
Here is the output:
CUDA-version: 11020 (11000)
Warning: CUDA-version is higher than Driver-version!
, cuDNN: 8.1.0, GPU count: 1
OpenCV version: 3.4.11
0
yolov3_training
0 : compute_capability = 610, cudnn_half = 0, GPU: Quadro P5000
net.optimized_memory = 0
mini_batch = 4, batch = 64, time_steps = 1, train = 1
layer filters size/strd(dil) input output
0 Create CUDA-stream - 0
Create cudnn-handle 0
Here is my nvidia-smi output:
root#n5qr6jidhm:/notebooks/Untitled Folder/darknet# nvidia-smi
Fri Jun 25 17:53:45 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.36.06 Driver Version: 450.36.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P5000 On | 00000000:00:05.0 Off | Off |
| 39% 62C P0 126W / 180W | 5725MiB / 16278MiB | 92% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

The warning in your log ("CUDA-version is higher than Driver-version!") is the clue: driver 450.36.06 only supports CUDA 11.0, while darknet was built against CUDA 11.2. First, upgrade (or downgrade) your driver to one that matches your GPU model; the correct version is easy to find on Google. Then check the driver and CUDA versions against darknet's minimum requirements before installing. Don't bother with the -gpus flag, as it changes nothing here.
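As a rough sketch of how to verify and fix this (the version numbers come from the logs above; the ARCH line is an assumption about an AlexeyAB-style Makefile and may already be covered in yours):
# Check what the installed driver supports vs. the CUDA toolkit darknet was built with
nvidia-smi          # driver 450.36.06 -> supports up to CUDA 11.0
nvcc --version      # toolkit is 11.2, i.e. newer than the driver allows
# After installing a matching driver (or downgrading the toolkit), rebuild darknet
cd darknet
sed -i 's/GPU=0/GPU=1/' Makefile
sed -i 's/CUDNN=0/CUDNN=1/' Makefile
sed -i 's/OPENCV=0/OPENCV=1/' Makefile
# The Quadro P5000 is compute capability 6.1, so make sure ARCH covers it, e.g.:
#   ARCH= -gencode arch=compute_61,code=[sm_61,compute_61]
make clean && make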


Loop a regression with increasing sample sizes

I am still learning Stata and am not sure how to get this to work. I need to run a regression over increasing sample sizes. I know how to get it to run for a specific sample size:
reg y x1 x2 in 1/10
but I need it to run for sample size 10, then 11, then 12, and so on up to 1000. I tried the following:
foreach var in varlist x1 x2 {
reg y x1 x2 in 10/_n+1
}
but that did not work. How do I get it to loop the regression, increasing the sample size by 1 each time?
forvalues i = 10/1000 {
    reg y x1 x2 if _n <= `i'
}
The first answer does exactly what you asked, but you then have the output from (in your example) 991 separate regressions to process. The next question is how to select what you want for further analysis. You could check out the official command rolling, which provides various machinery for this, or rangestat from SSC. Here's a token reproducible example with a listing of results. The dataset in question is panel data: note that options like by(company) insist on separate regressions for each panel. If you want more or different results to be kept, there are related commands.
. webuse grunfeld
. rangestat (reg) invest mvalue, int(year . 0)
+------------------------------------------------------------------------------------------+
| year reg_nobs reg_r2 reg_adj~2 b_mvalue b_cons se_mvalue se_cons |
|------------------------------------------------------------------------------------------|
| 1935 10 .86526037 .84841791 .10253446 .20584135 .01430537 16.364981 |
| 1936 20 .75028331 .73641016 .0893724 7.3202446 .01215286 17.885845 |
| 1937 30 .70628424 .69579439 .08388549 11.163111 .01022308 17.600837 |
| 1938 40 .69788155 .68993107 .08333349 10.545486 .00889458 14.396864 |
| 1939 50 .70399484 .69782807 .08056278 9.3450069 .00754013 12.312899 |
|------------------------------------------------------------------------------------------|
| 1940 60 .72557587 .72084442 .0842278 7.6507759 .0068016 11.294786 |
| 1941 70 .73233666 .72840043 .08992373 7.496037 .00659263 11.034693 |
| 1942 80 .72656572 .72306015 .094125 7.7007269 .00653803 10.706012 |
| 1943 90 .73520447 .73219543 .0968378 6.7626664 .00619519 10.093819 |
| 1944 100 .74792336 .74535115 .09900035 6.0220131 .00580579 9.4585067 |
|------------------------------------------------------------------------------------------|
| 1945 110 .76426375 .762081 .1001161 5.3512756 .00535037 8.8046084 |
| 1946 120 .77316485 .77124251 .10424112 3.9977716 .00519777 8.6559782 |
| 1947 130 .76829138 .76648116 .10701191 4.703102 .0051944 8.549975 |
| 1948 140 .75348635 .75170002 .10927737 6.1833536 .00532076 8.6413437 |
| 1949 150 .75420863 .75254788 .11128353 6.3261435 .00522201 8.4080085 |
|------------------------------------------------------------------------------------------|
| 1950 160 .7520656 .75049639 .114046 5.7698694 .00520945 8.3379321 |
| 1951 170 .75387998 .75241498 .11796668 5.0385173 .00520028 8.4011668 |
| 1952 180 .74822014 .74680565 .12304588 3.702257 .00534999 8.728181 |
| 1953 190 .74683845 .74549185 .1322075 -1.296652 .00561387 9.387601 |
| 1954 200 .734334 .73299225 .14138597 -6.9762842 .00604359 10.272724 |
+------------------------------------------------------------------------------------------+
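If you would rather stay with the plain forvalues loop from the first answer, one way to keep the coefficients for later processing is postfile. This is only a sketch; the dataset name increasing_n_results and the stored variables are placeholders:
tempname memhold
postfile `memhold' n b_x1 b_x2 using increasing_n_results, replace
forvalues i = 10/1000 {
    quietly regress y x1 x2 if _n <= `i'
    post `memhold' (`i') (_b[x1]) (_b[x2])
}
postclose `memhold'
use increasing_n_results, clear    // one row of coefficients per sample size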

U-Boot SERDES configuration for PCIe x4

I am trying to create a PCIe x4 interface using a standard SERDES map on a Marvell 38x chip in U-Boot. However, lane verification prevents me from enabling the PCIe x4 configuration.
The closest example I can find is the slm1363 board in this file: https://github.com/u-boot/u-boot/blob/3d4825446e4258192e1f2302d691a8c0c82a0975/arch/arm/mach-mvebu/serdes/a38x/high_speed_topology_spec-38x.c :
struct serdes_map db_config_slm1363_d[MAX_SERDES_LANES] = {
	{PEX0, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{PEX1, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{PEX2, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{PEX3, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{USB3_HOST0, SERDES_SPEED_5_GBPS, SERDES_DEFAULT_MODE, 0, 0},
	{USB3_HOST1, SERDES_SPEED_5_GBPS, SERDES_DEFAULT_MODE, 0, 0}
};
However, that board was patched out in this commit https://github.com/u-boot/u-boot/commit/544acb07ecebc096c9449e675481ba280311fb0b, apparently because it was an unsupported topology?
If I configure U-Boot using the example above, I get the following on boot:
board SerDes lanes topology details:
| Lane # | Speed | Type |
--------------------------------
| 0 | 5 | PCIe0 |
| 1 | 5 | PCIe1 |
| 2 | 5 | PCIe2 |
| 3 | 5 | PCIe3 |
| 4 | 5 | USB3 HOST0 |
| 5 | 5 | USB3 HOST1 |
--------------------------------
hws_serdes_topology_verify: Warning: serdes lane 2 is set to type PCIe2.
hws_serdes_topology_verify: Maximum supported lanes are already set to this type (limit = 4)
hws_update_serdes_phy_selectors: SerDes lane #2 is disabled
hws_serdes_topology_verify: Warning: serdes lane 3 is set to type PCIe3.
hws_serdes_topology_verify: Maximum supported lanes are already set to this type (limit = 4)
hws_update_serdes_phy_selectors: SerDes lane #3 is disabled
board SerDes lanes topology details:
| Lane # | Speed | Type |
--------------------------------
| 0 | 5 | PCIe0 |
| 1 | 5 | PCIe1 |
| 4 | 5 | USB3 HOST0 |
| 5 | 5 | USB3 HOST1 |
--------------------------------
If I comment out the SERDES lane verification, I can configure the x4 link successfully and the attached drive is recognized. Can anyone shed some light on the proper way to configure this?
Thanks!
Tyler
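No definitive answer here, but for context: mainline boards that supply a custom SerDes map normally hand it to the A38x SerDes code via the hws_board_topology_load() hook in their board file, along these lines. The header path and board names are placeholders, and this sketch does nothing about the lane-count verification being hit above:
/* board/<vendor>/<board>/<board>.c -- sketch modelled on existing mvebu 38x boards */
#include <common.h>
#include "high_speed_env_spec.h"   /* struct serdes_map, PEX*, SERDES_SPEED_* (path varies per tree) */

static struct serdes_map board_serdes_map[] = {
	{PEX0, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{PEX1, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{PEX2, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{PEX3, SERDES_SPEED_5_GBPS, PEX_ROOT_COMPLEX_X4, 0, 0},
	{USB3_HOST0, SERDES_SPEED_5_GBPS, SERDES_DEFAULT_MODE, 0, 0},
	{USB3_HOST1, SERDES_SPEED_5_GBPS, SERDES_DEFAULT_MODE, 0, 0},
};

/* Called by the A38x SerDes init code to fetch the board's lane map. */
int hws_board_topology_load(struct serdes_map **serdes_map_array, u8 *count)
{
	*serdes_map_array = board_serdes_map;
	*count = ARRAY_SIZE(board_serdes_map);
	return 0;
}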

In Vertica, is there any way to access admintools without admin rights?

In Vertica, is there any way to access admintools without admin rights? How can I find node and cluster configuration details as a normal user?
You might have already tried it.
You need access to the Linux shell on one of the Vertica nodes as a user that is allowed to write to /opt/vertica/log/adminTools.log, and that user is, by default, dbadmin.
I would regard it as quite a security risk to tamper with the permissions around that, really.
A good start would be:
$ vsql -U <user> -w <pwd> -h 127.127.127.128 -d dbnm -x -c "select * from host_resources"
-[ RECORD 1 ]------------------+-----------------------------------------
host_name | 127.127.127.128
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description | Intel(R) Xeon(R) CPU E5-2670 0 # 2.60GHz
opened_file_count | 7
opened_socket_count | 10
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1034915840
total_buffer_memory_bytes | 523386880
total_memory_cache_bytes | 5516861440
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 425185
disk_space_used_mb | 76682
disk_space_total_mb | 501867
system_open_files | 1440
system_max_files | 798044
-[ RECORD 2 ]------------------+-----------------------------------------
host_name | 127.127.127.129
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description | Intel(R) Xeon(R) CPU E5-2670 0 # 2.60GHz
opened_file_count | 7
opened_socket_count | 9
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1836150784
total_buffer_memory_bytes | 487129088
total_memory_cache_bytes | 4774060032
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 447084
disk_space_used_mb | 54783
disk_space_total_mb | 501867
system_open_files | 1408
system_max_files | 798044
-[ RECORD 3 ]------------------+-----------------------------------------
host_name | 127.127.127.130
open_files_limit | 65536
threads_limit | 7873
core_file_limit_max_size_bytes | 0
processor_count | 1
processor_core_count | 2
processor_description | Intel(R) Xeon(R) CPU E5-2670 0 # 2.60GHz
opened_file_count | 7
opened_socket_count | 9
opened_nonfile_nonsocket_count | 7
total_memory_bytes | 8254820352
total_memory_free_bytes | 1747091456
total_buffer_memory_bytes | 531447808
total_memory_cache_bytes | 4813959168
total_swap_memory_bytes | 2097147904
total_swap_memory_free_bytes | 2097147904
disk_space_free_mb | 451444
disk_space_used_mb | 50423
disk_space_total_mb | 501867
system_open_files | 1408
system_max_files | 798044
In general, check the Vertica documentation for the system tables and the information available there.
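For the node and cluster layout specifically, a hedged starting point (assuming your user can see the standard NODES system table; restricted configurations may hide it) would be:
-- List the nodes in the cluster and their current state
SELECT node_name, node_state, node_address, catalog_path
FROM   nodes
ORDER  BY node_name;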

Sybase 15.7: Aggregating data in a large table

We have a fairly simple stored procedure that aggregates all the data in a large table (and then puts the results in another table). For a 5M-row table this process takes around 4 minutes, which is perfectly reasonable. But for a 13M-row table the same process takes around 60 minutes.
It looks like we are crossing some kind of threshold, and I am struggling to find a simple workaround. Manually rewriting this as several "threads", each aggregating a small portion of the table, brings the run time back to a reasonable 10-15 minutes.
Is there a way to see the actual bottleneck here?
Update: The query plans are of course identical for both queries and they look like this:
|ROOT:EMIT Operator (VA = 3)
|
| |INSERT Operator (VA = 2)
| | The update mode is direct.
| |
| | |HASH VECTOR AGGREGATE Operator (VA = 1)
| | | GROUP BY
| | | Evaluate Grouped SUM OR AVERAGE AGGREGATE.
| | | Evaluate Grouped COUNT AGGREGATE.
| | | Evaluate Grouped SUM OR AVERAGE AGGREGATE.
[---//---]
| | | Evaluate Grouped SUM OR AVERAGE AGGREGATE.
| | | Using Worktable1 for internal storage.
| | | Key Count: 10
| | |
| | | |SCAN Operator (VA = 0)
| | | | FROM TABLE
| | | | TableName
| | | | Table Scan.
| | | | Forward Scan.
| | | | Positioning at start of table.
| | | | Using I/O Size 16 Kbytes for data pages.
| | | | With MRU Buffer Replacement Strategy for data pages.
| |
| | TO TABLE
| | #AggTableName
| | Using I/O Size 2 Kbytes for data pages.
Update 2:
Some statistics:
                    5M rows     13M rows
CPUTime             263,350    1,180,700
WaitTime            577,574    1,927,399
PhysicalReads     2,304,977   13,704,583
LogicalReads     11,479,123   27,911,085
PagesRead         5,550,737   19,518,030
PhysicalWrites      131,924    5,557,143
PagesWritten        263,640    6,103,708
Update 3:
Output of set statistics io,time on, reformatted for easier comparison:
5M rows 13M rows
+-------------------------+-----------+------------+
| #AggTableName | | |
| logical reads regular | 81 114 | 248 961 |
| apf | 0 | 0 |
| total | 81 114 | 248 961 |
| physical reads regular | 0 | 2 |
| apf | 0 | 0 |
| total | 0 | 2 |
| apf IOs | 0 | 0 |
+-------------------------+-----------+------------+
| Worktable1 | | |
| logical reads regular | 1 924 136 | 8 200 130 |
| apf | 0 | 0 |
| total | 1 924 136 | 8 200 130 |
| physical reads regular | 1 621 916 | 11 906 846 |
| apf | 0 | 0 |
| total | 1 621 916 | 11 906 846 |
| apf IOs | 0 | 0 |
+-------------------------+-----------+------------+
| TableName | | |
| logical reads regular | 5 651 318 | 13 921 342 |
| apf | 52 | 20 |
| total | 5 651 370 | 13 921 362 |
| physical reads regular | 38 207 | 345 156 |
| apf | 820 646 | 1 768 064 |
| total | 858 853 | 2 113 220 |
| apf IOs | 819 670 | 1 754 171 |
+-------------------------+-----------+------------+
| Total writes | 0 | 5 675 198 |
| Execution time | 4 339 | 18 678 |
| CPU time | 211 321 | 930 657 |
| Elapsed time | 316 396 | 2 719 108 |
+-------------------------+-----------+------------+
5M rows:
Total writes for this command: 0
Execution Time 0.
Adaptive Server cpu time: 0 ms. Adaptive Server elapsed time: 0 ms.
Parse and Compile Time 0.
Adaptive Server cpu time: 0 ms.
Parse and Compile Time 0.
Adaptive Server cpu time: 0 ms.
Table: #AggTableName scan count 0, logical reads: (regular=81114 apf=0 total=81114), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: Worktable1 scan count 1, logical reads: (regular=1924136 apf=0 total=1924136), physical reads: (regular=1621916 apf=0 total=1621916), apf IOs used=0
Table: TableName scan count 1, logical reads: (regular=5651318 apf=52 total=5651370), physical reads: (regular=38207 apf=820646 total=858853), apf IOs used=819670
Total writes for this command: 0
Execution Time 4339.
Adaptive Server cpu time: 211321 ms. Adaptive Server elapsed time: 316396 ms.
13M rows:
Total writes for this command: 0
Execution Time 0.
Adaptive Server cpu time: 0 ms. Adaptive Server elapsed time: 0 ms.
Parse and Compile Time 1.
Adaptive Server cpu time: 50 ms.
Parse and Compile Time 0.
Adaptive Server cpu time: 0 ms.
Table: #AggTableName scan count 0, logical reads: (regular=248961 apf=0 total=248961), physical reads: (regular=2 apf=0 total=2), apf IOs used=0
Table: Worktable1 scan count 1, logical reads: (regular=8200130 apf=0 total=8200130), physical reads: (regular=11906846 apf=0 total=11906846), apf IOs used=0
Table: TableName scan count 1, logical reads: (regular=13921342 apf=20 total=13921362), physical reads: (regular=345156 apf=1768064 total=2113220), apf IOs used=1754171
Total writes for this command: 5675198
Execution Time 18678.
Adaptive Server cpu time: 930657 ms. Adaptive Server elapsed time: 2719108 ms.
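For reference, the statement shape behind the plan above is a plain INSERT ... SELECT with a GROUP BY over 10 keys; a hedged sketch of how the counters were captured (the column names here are made up, only the table names come from the output above) is:
-- Wrap the aggregation step to reproduce the measurements shown above
set statistics io, time on

insert into #AggTableName
select grp_key1, grp_key2,            -- ... 10 grouping columns in the real proc
       sum(amount), count(*), avg(unit_price)
from   TableName
group  by grp_key1, grp_key2          -- ... same 10 columns

set statistics io, time off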

MySQL Import into Innodb table severely spikes at a certain point

I'm trying to migrate a 30GB database from one server to another.
The short story is that at a certain point through the process, the amount of time it takes to import records severely increases as a spike. The following is from using the SOURCE command to import a chunk of 500k records (out of about ~25-30 million throughout the database) that was exported as an sql file that was ssh tunnelled over to the new server:
...
Query OK, 2871 rows affected (0.73 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2870 rows affected (0.98 sec)
Records: 2870 Duplicates: 0 Warnings: 0
Query OK, 2865 rows affected (0.80 sec)
Records: 2865 Duplicates: 0 Warnings: 0
Query OK, 2871 rows affected (0.87 sec)
Records: 2871 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (2.60 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2866 rows affected (7.53 sec)
Records: 2866 Duplicates: 0 Warnings: 0
Query OK, 2879 rows affected (8.70 sec)
Records: 2879 Duplicates: 0 Warnings: 0
Query OK, 2864 rows affected (7.53 sec)
Records: 2864 Duplicates: 0 Warnings: 0
Query OK, 2873 rows affected (10.06 sec)
Records: 2873 Duplicates: 0 Warnings: 0
...
The spikes eventually average out to 16-18 seconds per ~2800 rows affected. Granted, I don't usually use SOURCE for a large import, but for the sake of showing legitimate output I used it to understand when the spikes happen. Using the mysql command or mysqlimport yields the same results. Even piping the results directly into the new database instead of going through an sql file shows these spikes.
As far as I can tell, this happens after a certain number of records have been inserted into a table. The first time I boot up a server and import a chunk of that size, it goes through just fine, give or take the estimated amount it handles until these spikes occur. I can't pin that down because I haven't replicated the issue consistently enough to conclude anything. There are ~20 tables with under 500,000 records that all imported just fine when those 20 tables were imported through a single command. This seems to only happen to tables that have an excessive amount of data.
Granted, the solutions I've come across so far seem to only address the natural DR that occurs when you import over time (the expected output in my case was that, by the end of importing 500k records, it would take 2-3 seconds per ~2800 rows, whereas those questions seemed to be about why it shouldn't take that long at the end). This comes from a single SugarCRM table called 'campaign_log', which has ~9 million records. I was able to import it in chunks of 500k back onto the old server I'm migrating off of without these spikes occurring, so I assume this has to do with my new server configuration.
Another thing: whenever these spikes occur, the table being imported into seems to display its record count in an odd way. I know InnoDB gives count estimates, but the number shown isn't preceded by the ~ that indicates an estimate; it is usually accurate, and each time you refresh the table the displayed count doesn't change (this is based on what it reports through phpMyAdmin).
Here are the InnoDB system variables and system specs on the new server:
INNODB System Vars:
+---------------------------------+------------------------+
| Variable_name | Value |
+---------------------------------+------------------------+
| have_innodb | YES |
| ignore_builtin_innodb | OFF |
| innodb_adaptive_flushing | ON |
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 8388608 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_instances | 1 |
| innodb_buffer_pool_size | 8589934592 |
| innodb_change_buffering | all |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_format | Antelope |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | fsync |
| innodb_force_load_corrupted | OFF |
| innodb_force_recovery | 0 |
| innodb_io_capacity | 200 |
| innodb_large_prefix | OFF |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 8388608 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 75 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_old_blocks_pct | 37 |
| innodb_old_blocks_time | 0 |
| innodb_open_files | 300 |
| innodb_print_all_deadlocks | OFF |
| innodb_purge_batch_size | 20 |
| innodb_purge_threads | 1 |
| innodb_random_read_ahead | OFF |
| innodb_read_ahead_threshold | 56 |
| innodb_read_io_threads | 8 |
| innodb_replication_delay | 0 |
| innodb_rollback_on_timeout | OFF |
| innodb_rollback_segments | 128 |
| innodb_spin_wait_delay | 6 |
| innodb_stats_method | nulls_equal |
| innodb_stats_on_metadata | ON |
| innodb_stats_sample_pages | 8 |
| innodb_strict_mode | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 30 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 0 |
| innodb_thread_sleep_delay | 10000 |
| innodb_use_native_aio | ON |
| innodb_use_sys_malloc | ON |
| innodb_version | 5.5.39 |
| innodb_write_io_threads | 8 |
+---------------------------------+------------------------+
System Specs:
Intel Xeon E5-2680 v2 (Ivy Bridge) 8 Processors
15GB Ram
2x80 SSDs
CMD to Export:
mysqldump -u <olduser> <oldpw>, <olddb> <table> --verbose --disable-keys --opt | ssh -i <privatekey> <newserver> "cat > <nameoffile>"
Thank you for any assistance. Let me know if there's any other information I can provide.
I figured it out. I increased innodb_log_file_size from 5MB to 1024MB. Besides significantly increasing the import rate (it never went above 1 second per ~3000 rows), it also fixed the spikes. There were only 2 spikes across all the records I imported, and after each one the import immediately went back to taking under 1 second per chunk.
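For anyone repeating this: on MySQL 5.5 the redo log size can only be changed after a clean shutdown, and the old log files must then be moved aside or InnoDB will refuse to start because of the size mismatch. A sketch of the change (paths and service name depend on your distro):
# /etc/my.cnf, [mysqld] section
innodb_log_file_size = 1024M          # was the 5 MB default (5242880)

# Apply it:
mysqladmin -u root -p shutdown                                    # clean shutdown
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/    # keep the old redo logs as a backup
# edit /etc/my.cnf as above, then
service mysql start                                               # or: service mysqld start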
