SSD read/write measurement using iperf3

Is it possible to measure the read/write speed of an SSD drive by connecting it to a PC and using iperf3?
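iperf3 measures network throughput between two hosts and never touches the disk, so it can't benchmark an SSD. Disk throughput is usually measured with a tool like dd or fio instead. A minimal dd sketch (the file path and sizes are placeholders; put the test file on the drive under test):

```shell
# Time a 256 MiB sequential write; conv=fsync flushes to the device before
# dd reports its rate, so the page cache doesn't inflate the number.
TESTFILE=testfile.bin   # placeholder -- point this at the SSD under test
dd if=/dev/zero of="$TESTFILE" bs=4M count=64 conv=fsync
dd if="$TESTFILE" of=/dev/null bs=4M   # read back (may be served from cache)
rm -f "$TESTFILE"
```

For more rigorous numbers (random I/O, queue depths, direct I/O), fio is the usual choice.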

Related

Virtual server and Physical storage for SQL Server

We are migrating a server from physical to virtual (P2V), but my server team is asking whether we need a virtual server with physical storage drives.
What's the benefit of having a virtual server with physical drives for SQL Server?
I've never heard of this setup.
It's a broad topic, but I'll try to shed some light on the subject. For SQL Server, the bottleneck we are generally fighting is disk contention. How to read and write faster, that's what a database is for, right? :)
I suspect what they are offering you is a virtual server that uses drives on a physical SAN with dedicated disks. Ideally, that is what you want. You could then have the drives configured as RAID 10 or in another high-performance, highly redundant layout, without the chance of somebody else impacting your environment.
When working with SQL Server it's very hard to track down "why is this slow?", and if the slowness is being caused by disk contention, it can be that much harder with shared drives, and especially with virtual drives. Perfmon can be very helpful, but when you discover it is a disk problem and you are on virtual drives, there may not be much you can do about it. Simply increasing the partition size won't help.
If you have your data on dedicated physical drives on a SAN where you can expand and add, you'll have options. And if you can get physical SSD drives, do it!
There are some other posts similar to this, for your reading enjoyment: https://serverfault.com/questions/190374/vmware-workstation-virtual-vs-physical-disk

How do I tweak GlusterFS performance?

I have 2 dedicated servers with the following specs:
- E3 1270V3 CPU
- 32GB RAM
- 960GB SSD
- 1Gbps private ethernet network.
Using the local drives, dd tests usually come in around 600 MB/s, which is very good.
I recently set up a GlusterFS replicated cluster by installing both glusterd and the glusterfs client on each machine. The dd test result on the replicated volume dropped to 50 MB/s, and whenever I try to run my application, which requires high read/write performance, it simply crashes.
How do I tweak GlusterFS for better performance in a two-machine replicated cluster?
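Note that with two-way replication every write is sent synchronously to both nodes, so the 1 Gbps link (roughly 120 MB/s theoretical) plus FUSE overhead will cap write throughput well below the local 600 MB/s regardless of tuning. That said, a few standard performance tunables are worth experimenting with. A sketch, assuming a volume named gv0 (the name and the values are placeholders; sensible settings depend on your workload and Gluster version):

```shell
gluster volume set gv0 performance.cache-size 256MB
gluster volume set gv0 performance.io-thread-count 16
gluster volume set gv0 performance.write-behind-window-size 4MB
gluster volume set gv0 performance.flush-behind on
```

Check the effect of each change with your real workload rather than dd alone, since small-file and sequential workloads respond very differently.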

Docker container IO performance

I am trying to investigate the IO performance overhead of docker so I created a mysql docker container on a specific machine and I ran the sysbench mysql benchmark to measure IO performance. Sysbench basically executes some read/write transactions over a period of time and then outputs the number of completed transactions and the transactions/second rate.
When I run the benchmark on the native machine, I get 779.5 transactions per second. When I run it in a mysql container, I get 336 transactions per second, less than half. Is this a normal performance overhead for Docker? It would be a huge disadvantage for running a database in a container in production, especially for I/O- and database-intensive applications.
Are you using a Docker volume for the database files? By default, file writes inside the container will be made to the copy-on-write filesystem, which will be inefficient for database files. Using a volume will mean you can write directly to the host filesystem.
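For illustration, a run using a named volume for MySQL's data directory (the container and volume names here are made up; /var/lib/mysql is the stock image's data directory):

```shell
# Create a named volume and mount it over MySQL's data dir, so database
# writes go to the host filesystem instead of the copy-on-write layer.
docker volume create mysql-data
docker run -d --name mysql-bench \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql \
  mysql:8.0
```

A bind mount (`-v /host/path:/var/lib/mysql`) works just as well; the point is to keep the data files off the container's writable layer.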

When to adjust I/O affinity in SQL Server

What exactly is I/O affinity in SQL Server?
A SQL Server instance uses 64 cores.
Performance issues have been discovered when large amounts of data are written to tables under heavy system load.
Why limit the number of cores that handle I/O by using I/O affinity?
When should an affinity mask be created?
It is used to control how many CPU cores are used for disk operations and how many are used for the rest of SQL Server's work.
See the affinity I/O mask option.
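As a sketch, the option is set through sp_configure (the mask value 15 here is only an example, not a recommendation, and the instance must be restarted for the change to take effect):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 15 = binary 1111: dedicate cores 0-3 to disk I/O (example value only)
EXEC sp_configure 'affinity I/O mask', 15;
RECONFIGURE;
```

The default of 0 lets any core service I/O, which is appropriate for most systems; the mask is a niche tuning for very large, I/O-heavy boxes.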

Processing data faster using SSIS and an SSD drive

We have a new local server that processes data using SQL Server 2008 and SSIS. In this server I have dedicated drives for different things: the C drive is for the OS and software, the D drive is for database storage and SSIS, and the E drive is an SSD to which we restore each database used by the SSIS process.
Our idea was that, since we process a lot of data and the SSD is only 500 GB (because of the cost), we would keep everything on a regular drive and transfer the databases in use to the SSD so the process runs faster.
When I run the SSIS package without the SSD it takes about 8 hours, and when I run it after restoring the databases to the SSD it takes about the same amount of time (in fact, if I include restoring the databases, the process takes longer).
Right now I cannot move the OS and software to the SSD to test whether that would help.
Is there a way to utilize the SSD drive to speed up the process?
If you want to speed up a given process, you need to find the bottleneck.
Generally speaking (since you give no details of the SSIS process), at each point in the operation one of the system's components (CPU, RAM, I/O, network) is operating at maximum speed. You need to find the component that contributes the most to your run time, and then either speed it up by replacing it with a faster component or reduce the load on it by redesigning the process.
Since you have already ruled out I/O to the user database(s), you need to look elsewhere. For a general ad hoc look, use the system's Resource Monitor (available through Task Manager). For a deeper look, there are lots of performance counters available via perfmon.exe for the OS (CPU, I/O), SSIS, and SQL Server.
If you have reason to believe that database I/O is your bottleneck, try moving tempdb to the SSD (if you generally have a lot of load on tempdb, that is a good idea anyway).
Unless you give us some more details about the SSIS process in question, that's all I can say for now.
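For reference, tempdb is relocated with ALTER DATABASE; a sketch assuming the SSD is the E: drive and the default logical file names tempdev/templog (the new paths take effect on the next SQL Server restart):

```sql
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\tempdb\templog.ldf');
```

Make sure the target directory exists and the SQL Server service account can write to it before restarting.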
