When the MaxConcurrentExecutables package property is set to -1, the maximum number of concurrent executables equals the number of physical or logical processors plus 2.
For example, 8 processors plus 2 = 10, so MaxConcurrentExecutables will be 10.
Does this mean the maximum of 10 applies across all packages that happen to be running at the same time, or is it actually 10 per package?
For example, can two separate packages running at the same time each run a maximum of 10 executables, or is it a maximum of 10 concurrent executables for the whole SSIS server?
Related
HDFS Mechanism
If each block is saved multiple times for fault tolerance (say 3 times), does that mean that a 100 TB data file actually takes up 300 TB of space on HDFS?
Yes, that is correct. The data must be stored on 3 different data nodes, so it is replicated 3 times.
I am looking to do some data processing on about 6,700 files and am using fork() to handle different sets of calculations on the data. Will getting a CPU with a higher core count allow me to run more forks? Currently I am using a quad-core CPU with 8 threads, forking 8 times, and it takes about an hour per file. If I had a 64-core processor and forked 64 times (splitting up the calculations), would that decrease the time by a factor of about 8?
Theoretically no, according to Amdahl's law. Probably not in practice either, because many resources are shared (the caches, the operating system calls, the disk, etc.), but this really depends on your algorithm. For example, if your algorithm is embarrassingly parallel and CPU-bound, then you may notice a great improvement when increasing the core count to 64.
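As a rough illustration of what Amdahl's law predicts here (a minimal sketch; the parallel fraction p = 0.95 is an assumed value for illustration, not something measured from the question's workload):

    #include <stdio.h>

    /* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
     * fraction of the work that can run in parallel and n is the core count. */
    static double amdahl_speedup(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / (double)n);
    }

    int main(void)
    {
        double p = 0.95;            /* assumed parallel fraction, purely illustrative */
        int cores[] = { 8, 64 };

        for (int i = 0; i < 2; i++)
            printf("p = %.2f, %2d cores -> speedup %.2fx\n",
                   p, cores[i], amdahl_speedup(p, cores[i]));

        /* Prints roughly 5.93x for 8 cores and 15.42x for 64 cores, i.e. only
         * about 2.6x faster despite having 8x the cores. */
        return 0;
    }

Even with 95% of the work parallelizable, going from 8 to 64 cores falls well short of the hoped-for 8x, which is why the practical answer depends so heavily on the algorithm.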
A note after reading the comments on the question: if you have a complexity of O(n!), it is possible that your algorithm simply cannot be executed in a realistic amount of time. For example, if your input is n = 42, and your machine is able to do 1 billion operations per second, then the time required to execute your algorithm is greater than the age of the universe.
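A back-of-the-envelope check of that claim, using the hypothetical n = 42 and 10^9 operations per second (treating 42! as the operation count is of course a simplification):

    #include <stdio.h>

    int main(void)
    {
        double ops = 1.0;
        for (int i = 2; i <= 42; i++)   /* 42! as the rough cost of an O(n!) algorithm at n = 42 */
            ops *= i;

        double ops_per_sec      = 1e9;                           /* 1 billion operations per second */
        double seconds          = ops / ops_per_sec;
        double universe_seconds = 13.8e9 * 365.25 * 24 * 3600;   /* ~13.8 billion years */

        printf("42! is about %.2e operations\n", ops);           /* ~1.4e51 */
        printf("Runtime is about %.2e seconds\n", seconds);      /* ~1.4e42 */
        printf("That is about %.1e times the age of the universe\n",
               seconds / universe_seconds);                      /* ~3e24   */
        return 0;
    }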
It's an odd title, but bear with me. Some time ago, I finished writing a program in C with Visual Studio Community 2017 that made significant use of OpenSSL's secp256k1 implementation. It was 10x faster than an equivalent program in Java, so I was happy. However, today I decided to upgrade it to use the Bitcoin project's optimized libsecp256k1 library. It worked out great and I got a further 7x performance boost! Changing which library is used to do the EC multiplications is the ONLY thing I changed about the software: it still reads in the same input files, computes the same things, and outputs the same results.
Input files consist of 5 million initial values, and I break them up into chunks of 50k. I then set pthreads to use 6 threads to compute those 50k, save the results, and move on to the next 50k until all 5 million are done (I've also used OpenMP with 6 threads). For some reason, when running this program on my Windows 10 4-core laptop, after exactly 16 chunks the CPU utilization drops from 75% down to 65%, then after another 10 chunks down to 55%, and so on until it's only using about 25% of my CPU by the time all 5 million inputs are calculated.
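The chunking pattern described above looks roughly like the following (a minimal OpenMP sketch; compute_one and save_results are hypothetical placeholders, not the poster's actual code, and the real EC work is elided):

    #include <omp.h>
    #include <stdio.h>
    #include <stddef.h>

    #define TOTAL_INPUTS 5000000   /* 5 million initial values, as in the question */
    #define CHUNK_SIZE   50000     /* processed in chunks of 50k */

    /* Placeholder for the real per-input work (the EC multiplication). */
    static double compute_one(size_t index)
    {
        return (double)index * 1.000001;    /* dummy arithmetic only */
    }

    /* Placeholder for persisting one chunk's results. */
    static void save_results(size_t start, size_t n, double checksum)
    {
        printf("chunk at %zu (%zu inputs), checksum %.3f\n", start, n, checksum);
    }

    int main(void)
    {
        for (size_t start = 0; start < TOTAL_INPUTS; start += CHUNK_SIZE) {
            size_t n = (start + CHUNK_SIZE <= TOTAL_INPUTS) ? CHUNK_SIZE
                                                            : TOTAL_INPUTS - start;
            double checksum = 0.0;

            /* 6 worker threads share one chunk, as described in the question. */
            #pragma omp parallel for num_threads(6) reduction(+:checksum) schedule(static)
            for (long i = 0; i < (long)n; i++)
                checksum += compute_one(start + (size_t)i);

            save_results(start, n, checksum);   /* serial section between chunks */
        }
        return 0;
    }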
The thread count (7: 1 main thread, 6 worker threads) remains the same, and the memory usage never goes over 1.5 GB (the laptop has 16 GB), yet the CPU utilization drops as if I'm dropping threads. My temperatures never go over 83 °C, and the all-core turbo stays at the maximum 3.4 GHz (base 2.8 GHz), so there is no thermal throttling happening here. The laptop is always plugged in and the power settings are set to maximum performance. There are no other CPU- or memory-intensive programs running besides this one.
Even stranger is that this problem doesn't happen on either of my two Windows 7 desktops: they both hold full CPU utilization throughout all 5 million calculations. The old OpenSSL implementation always kept correct CPU utilization on all computers, so something is different, yet it only affects the Windows 10 laptop.
I'm sorry I don't have code to demonstrate this, and maybe another forum would be more appropriate, but since it's my code I thought I'd ask here. Anyone have any ideas what might be causing this or how to fix it?
I have an Apache Solr 4.2.1 instance that has 4 cores with a total size of 21 GB (625 MB + 30 MB + 20 GB + 300 MB).
It runs on a dedicated CentOS machine with a 4-core CPU, 16 GB RAM, and a 120 GB HD.
1st core is fully imported once a day.
2nd core is fully imported every two hours.
3rd core is delta imported every two hours.
4th core is fully imported every two hours.
The server also handles a decent number of queries (searching for, creating, updating, and deleting documents).
Every core has maxDocs: 100 and maxTime: 15000 for autoCommit, and maxTime: 1000 for autoSoftCommit.
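For reference, those settings correspond to an updateHandler section along these lines in each core's solrconfig.xml (an illustrative excerpt; the rest of the file is omitted):

    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxDocs>100</maxDocs>     <!-- hard commit after 100 pending documents... -->
        <maxTime>15000</maxTime>   <!-- ...or after 15 seconds, whichever comes first -->
      </autoCommit>
      <autoSoftCommit>
        <maxTime>1000</maxTime>    <!-- soft commit (search visibility) every second -->
      </autoSoftCommit>
    </updateHandler>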
The System usage is:
Around 97% of 14.96 GB Physical Memory
0 MB Swap Space
Around 94% of 4096 File Descriptor Count
From 60% to 90% of 1.21 GB of JVM-Memory.
When I reboot the machine, the File Descriptor Count falls to near 0, and then, steadily over the course of a week or so, it reaches the aforementioned value.
So, to conclude, my questions are:
Is 94% of 4096 File Descriptor Count normal?
How can I increase the maximum File Descriptor Count?
How can I calculate the theoretical optimal value for the maximum and used File Descriptor Count?
Will the File Descriptor Count reach 100%? If so, will the server crash? Or will it keep itself below 100% and function as it should?
Thanks a lot in advance!
Sure.
ulimit -n <number>. See Increasing ulimit on CentOS.
There really isn't one; you need as many as required, depending on a lot of factors, such as your mergeFactor (if you have many index files, the number of open files will be large as well; this is especially true for the indices that aren't full imports. Check the number of files in your data directories and issue an optimize if an index has become very fragmented and has a large mergeFactor), the number of searchers, other software running on the same server, etc.
It could. Yes, the server will crash, or at least it won't function properly, as it won't be able to open any new files. No, it will not keep itself below 100%. In practice, you'll get errors about being unable to open files, with the message "Too many open files".
So, the issue with the File Descriptor Count (FDC), and to be more precise with the ever-increasing FDC, was that I was committing after every update!
I noticed that Solr wasn't deleting old transaction logs. Thus, after about a week the FDC was maxing out and I was forced to reboot.
I stopped committing after every update and now my Solr stats are:
Around 55% of 14.96 GB Physical Memory
0 MB Swap Space
Around 4% of 4096 File Descriptor Count
From 60% to 80% of 1.21 GB of JVM-Memory.
Also, the old transaction logs are now deleted by auto commit (soft and hard), and Solr has no more performance warnings!
So, as is pointed out very well in this article:
Understanding Transaction Logs, Soft Commit and Commit in SolrCloud
"Be very careful committing from the client! In fact, don’t do it."
I have an OpenMP program running with, say, 6 threads on an 8-core machine. How can I extract this information (num_threads = 6) from another program (a non-OpenMP, plain C program)? Can I get this info from the underlying kernel?
I was looking at run-queue lengths using "sar -q 1 0", but this doesn't yield consistent results; sometimes it gives 8, other times more or less.
In Linux, threads are processes (see the first post here), so you can ask for a list of running processes with ps -eLf. However, if the machine has 8 cores, it is possible that OpenMP created 8 threads (even though it currently uses only 6 of them for your computation); in this case, it is your code that must store information about the threads it is using somewhere (e.g. a file or a FIFO).
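A minimal sketch of that last suggestion, assuming an agreed-on file path such as /tmp/omp_threads.txt (the path and file format are arbitrary choices, not anything OpenMP provides):

    /* In the OpenMP program: record the actual team size so another process can read it. */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        #pragma omp parallel
        {
            #pragma omp single              /* exactly one thread writes the team size */
            {
                FILE *f = fopen("/tmp/omp_threads.txt", "w");   /* assumed, agreed-on path */
                if (f) {
                    fprintf(f, "%d\n", omp_get_num_threads());  /* e.g. 6 */
                    fclose(f);
                }
            }
            /* ... the actual parallel work goes here ... */
        }
        return 0;
    }

The plain C program can then read that file with fscanf; alternatively, it can count the entries under /proc/<pid>/task/ for the OpenMP process, which is the same per-thread information that ps -eLf reports.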