Rust vs C performance

I wanted to learn a bit about Rust tasks, so I did a Monte Carlo computation of pi. Now my puzzle is why the single-threaded C version is 4 times faster than the 4-way threaded Rust version. Clearly I am doing something wrong, or my mental performance model is way off.
Here's the C version:
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

#define PI 3.1415926535897932

double monte_carlo_pi(int nparts)
{
    int i, in = 0;
    double x, y;
    srand(getpid());
    for (i = 0; i < nparts; i++) {
        x = (double)rand() / (double)RAND_MAX;
        y = (double)rand() / (double)RAND_MAX;
        if (x*x + y*y < 1.0) {
            in++;
        }
    }
    return in / (double)nparts * 4.0;
}

int main(int argc, char **argv)
{
    int nparts;
    double mc_pi;

    nparts = atoi(argv[1]);
    mc_pi = monte_carlo_pi(nparts);
    printf("computed: %f error: %f\n", mc_pi, mc_pi - PI);
}
The Rust version was not a line-by-line port:
use std::rand;
use std::rand::distributions::{IndependentSample, Range};

fn monte_carlo_pi(nparts: uint) -> uint {
    let between = Range::new(0f64, 1f64);
    let mut rng = rand::task_rng();
    let mut in_circle = 0u;
    for _ in range(0u, nparts) {
        let a = between.ind_sample(&mut rng);
        let b = between.ind_sample(&mut rng);
        if a*a + b*b <= 1.0 {
            in_circle += 1;
        }
    }
    in_circle
}

fn main() {
    let (tx, rx) = channel();
    let ntasks = 4u;
    let nparts = 100000000u; /* I haven't learned how to parse command-line args yet! */
    for _ in range(0u, ntasks) {
        let child_tx = tx.clone();
        spawn(proc() {
            child_tx.send(monte_carlo_pi(nparts / ntasks));
        });
    }
    let result = rx.recv() + rx.recv() + rx.recv() + rx.recv();
    println!("pi is {}", (result as f64) / (nparts as f64) * 4.0);
}
Build and time the C version:
$ clang -O2 mc-pi.c -o mc-pi-c; time ./mc-pi-c 100000000
computed: 3.141700 error: 0.000108
./mc-pi-c 100000000 1.68s user 0.00s system 99% cpu 1.683 total
Build and time the Rust version:
$ rustc -v
rustc 0.12.0-nightly (740905042 2014-09-29 23:52:21 +0000)
$ rustc --opt-level 2 --debuginfo 0 mc-pi.rs -o mc-pi-rust; time ./mc-pi-rust
pi is 3.141327
./mc-pi-rust 2.40s user 24.56s system 352% cpu 7.654 total

The bottleneck, as Dogbert observed, was the random number generator. Here's one that is fast and seeded differently on each thread:
fn monte_carlo_pi(id: u32, nparts: uint) -> uint {
    ...
    let mut rng: XorShiftRng = SeedableRng::from_seed([id, id, id, id]);
    ...
}
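For completeness, this is roughly how the whole function looks with that change. This is a sketch pieced together from the snippets above against the std::rand API of that 0.12 nightly; I haven't re-run it against that exact compiler:

use std::rand::{XorShiftRng, SeedableRng};
use std::rand::distributions::{IndependentSample, Range};

fn monte_carlo_pi(id: u32, nparts: uint) -> uint {
    let between = Range::new(0f64, 1f64);
    // Seed each task differently so the tasks don't sample identical streams.
    let mut rng: XorShiftRng = SeedableRng::from_seed([id, id, id, id]);
    let mut in_circle = 0u;
    for _ in range(0u, nparts) {
        let a = between.ind_sample(&mut rng);
        let b = between.ind_sample(&mut rng);
        if a*a + b*b <= 1.0 {
            in_circle += 1;
        }
    }
    in_circle
}

main then passes a distinct id into each task, e.g. child_tx.send(monte_carlo_pi(i as u32, nparts / ntasks)) with the loop rewritten as for i in range(0u, ntasks).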

Meaningful benchmarks are a tricky thing, because you have all kinds of optimization options, etc. Also, the structure of the code can have a huge impact.
Comparing C and Rust is a little like comparing apples and oranges. We typically use compute-intensive algorithms like the one you depict above, but the real world can throw you a curve.
Having said that, in general, Rust can and does approach the performance of C and C++, and most likely can do better on concurrency tasks in general.
Take a look at the benchmarks here:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/rust-clang.html
I chose the Rust vs. C Clang benchmark comparison because both rely on the underlying LLVM.
On the other hand, a comparison with C gcc yields different results. And guess what? Rust still comes out ahead!
I entreat you to explore the Benchmarks Game site in more detail: there are some cases where C will edge out Rust.
In general, when you are creating a real-world solution, you want to do performance benchmarks for your specific cases. Always do this, because you will often be surprised by the results. Never assume.
I think that too many times, benchmarks are used to forward the "my language is better than your language" style of wars. But as one who has used over 20 computer languages throughout his longish career, I always say that it is a matter of the best tool for the job.

Related

Is it possible to effectively parallelise a brute-force attack on 4 different password patterns?

In the context of my homework task I need to smart brute-force a set of passwords. Every password in the set has one of four possible masks:
%%##
##%%
#%%#
%##%
( # - a numeric character, % - a lowercase alpha character ).
At this point I am doing something like this to run over only one pattern ( the 1st one ) in multithreading:
// Compile: $ gcc test.c -o test -fopenmp -O3 -std=c99
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <omp.h>

int main() {
    const char alp[26] = "abcdefghijklmnopqrstuvwxyz";
    const char num[10] = "0123456789";
    register int i;
    char pass[4];
    #pragma omp parallel for private(pass)
    for (i = 0; i < 67600; i++) {
        pass[3] = num[i % 10];
        pass[2] = num[i / 10 % 10];
        pass[1] = alp[i / 100 % 26];
        pass[0] = alp[i / 2600 % 26];
        /* Slow password processing here */
    }
    return 0;
}
But, unfortunately, that technique does not extend to searching for passwords with different patterns.
So my question is:
Is there a way to construct an effective set of parallel for instructions in order to run the attack simultaneously on each password pattern?
Help is much appreciated.
The trick here is to note that all four password options are simply rotations/shifts of each other.
That is, for the example password qr34 and the patterns you mention, you are looking at:
qr34 %%## #Original potential password
4qr3 #%%# #Rotate 1 place right
34qr ##%% #Rotate 2 places right
r34q %##% #Rotate 3 places right
Given this, you can use the same generation technique as in your first question.
For each potential password generated, check the potential password as well as the next three shifts of that password.
Note that the following code relies on short-circuit evaluation in C/C++: if the truth value of a boolean expression can be deduced early, no further evaluation takes place. That is, given the expression if(A || B || C): if A is false, then B must be evaluated; however, if B is true, then C is never evaluated.
This means that we can have A=CheckPass(pass) and B=CheckPass(RotatePass(pass)) and C=CheckPass(RotatePass(pass)) with the guarantee that the password will only be rotated as many times as necessary.
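As a tiny standalone demonstration of that guarantee (not part of the solution itself), the side effects show which checks actually ran:

#include <stdio.h>

int check(int x) {
    printf("checked %d\n", x);  /* side effect records that we were called */
    return x == 2;
}

int main(void) {
    /* Prints "checked 1" and "checked 2" only: check(2) returns true,
       so check(3) is never evaluated. */
    if (check(1) || check(2) || check(3))
        printf("found\n");
    return 0;
}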
Note that this scheme requires that each thread have its own, private copy of the potential password.
//Compile with, e.g.: gcc -O3 temp.c -std=c99 -fopenmp
#include <stdio.h>
#include <unistd.h>
#include <string.h>

int PassCheck(char *pass){
    return strncmp(pass, "4qr3", 4)==0;
}

//Rotate string one character to the right
char* RotateString(char *str, int len){
    char lastchr = str[len-1];
    for(int i = len-1; i > 0; i--)
        str[i] = str[i-1];
    str[0] = lastchr;
    return str;
}

int main(){
    const char alph[27] = "abcdefghijklmnopqrstuvwxyz";
    const char num[11]  = "0123456789";
    char goodpass[4] = "----"; //Provide a default password to indicate an error state
    #pragma omp parallel for collapse(4)
    for(int i = 0; i < 26; i++)
    for(int j = 0; j < 26; j++)
    for(int m = 0; m < 10; m++)
    for(int n = 0; n < 10; n++){
        char pass[4] = {alph[i], alph[j], num[m], num[n]};
        if(
            PassCheck(pass) ||
            PassCheck(RotateString(pass,4)) ||
            PassCheck(RotateString(pass,4)) ||
            PassCheck(RotateString(pass,4))
        ){
            //It is good practice to use `critical` here in case two
            //passwords are somehow both valid. This won't arise in
            //your code, but is worth thinking about.
            #pragma omp critical
            {
                memcpy(goodpass, pass, 4);
                //#pragma omp cancel for //Escape for loops!
            }
        }
    }
    printf("Password was '%.4s'.\n", goodpass);
    return 0;
}
I notice that you are generating your password using
    pass[3] = num[i % 10];
    pass[2] = num[i / 10 % 10];
    pass[1] = alp[i / 100 % 26];
    pass[0] = alp[i / 2600 % 26];
This sort of technique is occasionally useful, especially in scientific programming, but usually only for addressing convenience and memory locality.
For instance, an array of arrays where an element is accessed as a[y][x] can be written as a flat-array with elements accessed as a[y*width+x]. This gives a speed gain, but only because the memory is contiguous.
In your case, this indexing does not produce any speed gains, but does make it more difficult to reason about how your program works. I would avoid it for this reason.
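For contrast, here is a minimal sketch of the case where that kind of index arithmetic does pay off: the flat 2D array mentioned above (the names and sizes here are made up for illustration):

#include <stdlib.h>

#define WIDTH  640
#define HEIGHT 480

int main(void) {
    /* One contiguous block instead of HEIGHT separate row allocations:
       scanning row by row stays cache-friendly. */
    double *a = malloc(WIDTH * HEIGHT * sizeof *a);
    if (!a) return 1;
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            a[y * WIDTH + x] = 0.0;  /* the element a[y][x] would name */
    free(a);
    return 0;
}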
It's been said that "premature optimization is the root of all evil". This is especially true of micro-optimizations such as the one you're trying here. The biggest speed gains come from high-level algorithmic decisions, not from fiddly stuff. The -O3 compilation flag does most of what you'll ever need in terms of making your code fast at this level.
Micro-optimizations assume that doing something convoluted in your high-level code will somehow enable you to out-smart the compiler. This is not a good assumption, since the compiler is often quite smart and will be even smarter tomorrow. Your time is very valuable: don't use it on this stuff unless you have a clear justification.

Enabling HVX SIMD in Hexagon DSP by using instruction intrinsics

I was using Hexagon SDK 3.0 to compile my sample application for the HVX DSP architecture. There are many Hexagon-LLVM tools available, located at:
~/Qualcomm/HEXAGON_Tools/7.2.12/Tools/bin
I wrote a small example to calculate the product of two arrays to make sure I can utilize the HVX hardware acceleration. However, when I generate my assembly, either with -S, or with -S -emit-llvm, I don't find any HVX instructions such as vmem, vX, etc. My C application is executing on hexagon-sim for now, till I manage to find a way to run it on the board as well.
As far as I understood, I need to write the HVX part of the code using C intrinsics, but I was not able to adapt the existing examples to match my own needs. It would be great if somebody could demonstrate how this process can be done. Also, in the Hexagon V62 Programmer's Reference Manual, many of the intrinsic instructions are not defined.
Here is my small app in pure C:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#if defined(__hexagon__)
#include "hexagon_standalone.h"
#include "subsys.h"
#endif
#include "io.h"
#include "hvx.cfg.h"

#define KERNEL_SIZE 9
#define Q 8
#define PRECISION (1<<Q)

double vectors_dot_prod2(const double *x, const double *y, int n)
{
    double res = 0.0;
    int i = 0;
    for (; i <= n-4; i += 4)
    {
        res += (x[i] * y[i] +
                x[i+1] * y[i+1] +
                x[i+2] * y[i+2] +
                x[i+3] * y[i+3]);
    }
    for (; i < n; i++)
    {
        res += x[i] * y[i];
    }
    return res;
}

int main(int argc, char* argv[])
{
    int n = 1024; /* array length */
    long long start_time, total_cycles;
    /* -----------------------------------------------------*/
    /* Allocate memory for input/output                      */
    /* -----------------------------------------------------*/
    //double *res = memalign(VLEN, 4 * sizeof(double));
    const double *x = memalign(VLEN, n * sizeof(double));
    const double *y = memalign(VLEN, n * sizeof(double));
    if (x == NULL || y == NULL) {
        printf("Error: Could not allocate Memory for image\n");
        return 1;
    }
#if defined(__hexagon__)
    subsys_enable();
    SIM_ACQUIRE_HVX;
#if LOG2VLEN == 7
    SIM_SET_HVX_DOUBLE_MODE;
#endif
#endif
    /* -----------------------------------------------------*/
    /* Call function                                         */
    /* -----------------------------------------------------*/
    RESET_PMU();
    start_time = READ_PCYCLES();
    vectors_dot_prod2(x, y, n);
    total_cycles = READ_PCYCLES() - start_time;
    DUMP_PMU();
    printf("Array product of x[i] * y[i] = %f\n", vectors_dot_prod2(x, y, 4));
#if defined(__hexagon__)
    printf("AppReported (HVX%db-mode): Array product of x[i] * y[i] =%f\n", VLEN, vectors_dot_prod2(x, y, 4));
#endif
    return 0;
}
I compile it using hexagon-clang:
hexagon-clang -v -O2 -mv60 -mhvx-double -DLOG2VLEN=7 -I../../common/include -I../include -DQDSP6SS_PUB_BASE=0xFE200000 -o arrayProd.o -c arrayProd.c
Then link it with subsys.o (found in the SDK, already compiled) and -lhexagon to generate my executable:
hexagon-clang -O2 -mv60 -o arrayProd.exe arrayProd.o subsys.o -lhexagon
Finally, run it using the sim:
hexagon-sim -mv60 arrayProd.exe
A bit late, but might still be useful.
Hexagon Vector eXtensions are not emitted automatically, and the current instruction set (as of the 8.0 SDK) only supports integer manipulation, so the compiler will not emit anything for C code containing the "double" type (it is similar to SSE programming: you have to manually pack xmm registers and use SSE intrinsics to do what you need).
You need to define what your application really requires.
E.g., if you are writing something 3D-related and really need to calculate double (or float) dot products, you might convert your floats to 16.16 fixed point and then use instructions (i.e., C intrinsics) like Q6_Vw_vmpyio_VwVh and Q6_Vw_vmpye_VwVuh to emulate fixed-point multiplication.
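As a plain-C sketch of just the 16.16 conversion step (no HVX involved; this is standard fixed-point arithmetic, which the intrinsics above would then apply across all vector lanes at once):

#include <stdint.h>
#include <stdio.h>

/* Convert between float and 16.16 fixed point. */
static int32_t to_fix16(float f)     { return (int32_t)(f * 65536.0f); }
static float   from_fix16(int32_t x) { return (float)x / 65536.0f; }

/* Multiply two 16.16 values: the 64-bit product is 32.32, shift back down. */
static int32_t mul_fix16(int32_t a, int32_t b) {
    return (int32_t)(((int64_t)a * b) >> 16);
}

int main(void) {
    int32_t a = to_fix16(1.5f), b = to_fix16(2.25f);
    printf("%f\n", from_fix16(mul_fix16(a, b)));  /* prints ~3.375 */
    return 0;
}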
To "enable" HVX you should use HVX-related types defined in
#include <hexagon_types.h>
#include <hexagon_protos.h>
The instructions like 'vmem' and 'vmemu' are emitted automatically for statements like:

// I assume 64-byte mode, no `-mhvx-double`. For 128-byte mode use a 32-int array.
int values[16] = { 1, 2, 3, ..... };

/* The following line compiles to
   {
       r4 = __address_of_values
       v1 = vmem(r4 + #0)
   }
   You can get the exact code by using the '-S' switch, as you already do.
*/
HVX_Vector v = *(HVX_Vector*)values;
Your (fixed-point) version of dot_product may read 16 integers at a time, multiply all 16 integers in a couple of instructions (see the HVX V62 programming manual; there is a tip on implementing 32-bit integer multiplication from the 16-bit one), then shuffle/deal/ror the data around and sum up the rearranged vectors to get the dot product. This way you may calculate 4 dot products almost at once, and if you preload 4 HVX registers - that is 16 4D vectors - you may calculate 16 dot products in parallel.
If what you are doing is really just byte/int image processing, you might use the specific 16-bit and 8-bit hardware dot products in the Hexagon instruction set, instead of emulating doubles and floats.

Why Rust outperforms C so much [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 6 years ago.
For many years I have used this very simple program to get a rough estimate of programming language performance. I have a dozen versions in Ruby (600 ms), Python (1500 ms), JavaScript (45 ms), C (25 ms, both GCC and Clang on my notebook) and other languages. Do not make serious conclusions based on such a simple benchmark, because it is far from any real-life case. I call it "classic" simply because I have used it for decades already. Maybe even saying "a rough estimate" is too much. This test is extremely simple, mostly because writing a better test for a language you do not know is time consuming, and I usually write it when I get my hands on a new language for the first time. Sometimes, though, I will run the test a few years later when the compiler/interpreter gets an update.
Anyway, recently I ported this test to (for?) Rust and was really surprised, because it outperformed the previous record holder C by about three times (7 ms!). My question is for those who know something about Rust compilation: why is it so fast? I know it uses LLVM just as Clang does, so I expected about the same speed. (Just as Nim performs about as fast as C because it compiles to C; though not very efficiently, so it is still about two times slower than C on this simple benchmark.)
Rust
// rustc --color always -C opt-level=3 -C prefer-dynamic classic.rs -C link-args=-s -o classic.rust
use std::ptr;

#[repr(C)]
struct timeval {
    tv_sec: i64,
    tv_usec: i64
}

extern {
    fn gettimeofday(tv: &mut timeval, tzp: *const ()) -> i32;
}

fn time1000() -> i64 {
    let mut tv = timeval { tv_sec: 0, tv_usec: 0 };
    unsafe {
        gettimeofday(&mut tv, ptr::null());
    }
    tv.tv_sec * 1000 + tv.tv_usec / 1000
}

fn classic() {
    let mut a: i64 = 3000000;
    loop {
        a = a - 1;
        if a == 0 { break; }
        let mut b = (a / 100000) as i64;
        b = b * 100000;
        if a == b { print!("{} ", a); }
    }
}

fn main() {
    let mut t = time1000();
    classic();
    t = time1000() - t;
    println!("{}", t);
}
C
#include "stdio.h"
#include <sys/time.h>
long time1000() {
struct timeval val;
gettimeofday(&val, 0);
return val.tv_sec * 1000 + val.tv_usec / 1000;
}
void classic() {
double a = 3000000, b;
while (1) {
a--;
if (a == 0) break;
b = a / 100000;
b = (int) b;
b *= 100000;
if (a == b) { printf("%i ", (int)a); }
}
}
int main() {
int T = time1000();
classic();
T = time1000() - T;
printf("%i", (int)T);
}
Substitute
    int64_t a = 3000000, b;
for
    double a = 3000000, b;
to make it equivalent (on a 64-bit arch.) with
    let mut a: i64 = 3000000;
    //...
    let mut b = (a / 100000) as i64;
and C wins (even with stdio).
On my PC, C is about 1.4–1.5 times faster (-O3, measured on a 100-iteration shell for-loop to discount startup overhead).
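Put together, the integer version of the C loop might look like this (a sketch; <stdint.h> supplies int64_t, and the (int) truncation step disappears because integer division already truncates):

#include <stdio.h>
#include <stdint.h>

void classic(void) {
    int64_t a = 3000000, b;
    while (1) {
        a--;
        if (a == 0) break;
        b = a / 100000;  /* integer division truncates, no cast needed */
        b *= 100000;
        if (a == b) printf("%lld ", (long long)a);
    }
}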

Why Swift is 100 times slower than C in this image processing test? [duplicate]

This question already has answers here:
Swift Beta performance: sorting arrays
(9 answers)
Closed 8 years ago.
Like many other developers I have been very excited about the new Swift language from Apple. Apple has claimed it is faster than Objective-C and can be used to write operating systems. And from what I have learned so far, it's a statically typed language with precise control over exact data types (like integer length). So it does look like it has good potential for handling performance-critical tasks, like image processing, right?
That's what I thought before I carried out a quick test. The result really surprised me.
Here is a simple code snippet in C:
test.c:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

uint8_t pixels[640*480];
uint8_t alpha[640*480];
uint8_t blended[640*480];

void blend(uint8_t* px, uint8_t* al, uint8_t* result, int size)
{
    for (int i = 0; i < size; i++) {
        result[i] = (uint8_t)(((uint16_t)px[i]) * al[i] / 255);
    }
}

int main(void)
{
    memset(pixels, 128, 640*480);
    memset(alpha, 128, 640*480);
    memset(blended, 255, 640*480);
    // Test 10 frames
    for (int i = 0; i < 10; i++) {
        blend(pixels, alpha, blended, 640*480);
    }
    return 0;
}
I compiled it on my Macbook Air 2011 with the following command:
clang -O3 test.c -o test
The 10 frame processing time is about 0.01s. In other words, it takes the C code 1ms to process one frame:
$ time ./test
real 0m0.010s
user 0m0.006s
sys 0m0.003s
Then I have a Swift version of the same code:
test.swift:
let pixels = UInt8[](count: 640*480, repeatedValue: 128)
let alpha = UInt8[](count: 640*480, repeatedValue: 128)
let blended = UInt8[](count: 640*480, repeatedValue: 255)

func blend(px: UInt8[], al: UInt8[], result: UInt8[], size: Int)
{
    for (var i = 0; i < size; i++) {
        var b = (UInt16)(px[i]) * (UInt16)(al[i])
        result[i] = (UInt8)(b/255)
    }
}

for i in 0..10 {
    blend(pixels, alpha, blended, 640*480)
}
The build command line is:
xcrun swift -O3 test.swift -o test
Here I use the same O3-level optimization flag to make the comparison hopefully fair. However, the resulting speed is 100 times slower:
$ time ./test
real 0m1.172s
user 0m1.146s
sys 0m0.006s
In other words, it takes Swift ~120 ms to process one frame that takes C just 1 ms.
What happened?
Update: I am using clang:
$ gcc -v
Configured with: --prefix=/Applications/Xcode6-Beta.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.34.4) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin13.2.0
Thread model: posix
Update: more results with different numbers of iterations:
Here are the results for different numbers of "frames", i.e. changing the main for-loop count from 10 to other values. Note that now I am getting even faster C code times (cache hot?), while the Swift time doesn't change much:

                C Time (s)   Swift Time (s)
1 frame:        0.005        0.130
10 frames (*):  0.006        1.196
20 frames:      0.008        2.397
100 frames:     0.024        11.668
Update: `-Ofast` helps
With -Ofast, suggested by @mweathers, the Swift speed goes up to a reasonable range.
On my laptop the Swift version with -Ofast gets 0.013 s for 10 frames and 0.048 s for 100 frames, close to half of the C performance.
Building with:
xcrun swift -Ofast test.swift -o test
I'm getting times of:
real 0m0.052s
user 0m0.009s
sys 0m0.005s
Let's just concentrate on the answer to the question, which started with a "Why": Because you didn't turn optimisations on, and Swift relies heavily on compiler optimisation.
That said, doing image processing in C is truly daft. That's what you have CGImage and friends for.

Comparing speed of Haskell and C for the computation of primes

I initially wrote this (brute-force and inefficient) method of calculating primes with the intent of making sure that there was no difference in speed between using "if-then-else" versus guards in Haskell (and there is no difference!). But then I decided to write a C program to compare, and I got the following (Haskell slower by just over 25%):
(Note: I got the idea of using rem instead of mod, and also the -O3 option in the compiler invocation, from the following post: On improving Haskell's performance compared to C in fibonacci micro-benchmark)
Haskell : Forum.hs
divisibleRec :: Int -> Int -> Bool
divisibleRec i j
  | j == 1         = False
  | i `rem` j == 0 = True
  | otherwise      = divisibleRec i (j-1)

divisible :: Int -> Bool
divisible i = divisibleRec i (i-1)

r = [x | x <- [2..200000], divisible x == False]

main :: IO ()
main = print (length r)
C : main.cpp
#include <stdio.h>

bool divisibleRec(int i, int j){
    if(j==1){ return false; }
    else if(i%j==0){ return true; }
    else{ return divisibleRec(i, j-1); }
}

bool divisible(int i){ return divisibleRec(i, i-1); }

int main(void){
    int i, count = 0;
    for(i=2; i<200000; ++i){
        if(divisible(i)==false){
            count = count+1;
        }
    }
    printf("number of primes = %d\n", count);
    return 0;
}
The results I got were as follows :
Compilation times
time (ghc -O3 -o runProg Forum.hs)
real 0m0.355s
user 0m0.252s
sys 0m0.040s
time (gcc -O3 -o runProg main.cpp)
real 0m0.070s
user 0m0.036s
sys 0m0.008s
and the following running times :
Running times on Ubuntu 32 bit
Haskell
17984
real 0m54.498s
user 0m51.363s
sys 0m0.140s
C++
number of primes = 17984
real 0m41.739s
user 0m39.642s
sys 0m0.080s
I was quite impressed with the running times of Haskell. However, my question is this: can I do anything to speed up the Haskell program without:
Changing the underlying algorithm (it is clear that massive speedups can be gained by changing the algorithm; but I just want to understand what I can do on the language/compiler side to improve performance)
Invoking the LLVM compiler (because I don't have it installed)
[EDIT : Memory usage]
After a comment by Alan I noticed that the C program uses a constant amount of memory, whereas the Haskell program slowly grows in memory size. At first I thought this had something to do with recursion, but gspr explains below why this is happening and provides a solution. Will Ness provides an alternative solution which (like gspr's solution) also ensures that the memory remains static.
[EDIT : Summary of bigger runs]
max number tested : 200,000:
(54.498s/41.739s) = Haskell 30.5% slower
max number tested : 400,000:
3m31.372s/2m45.076s = 211.37s/165s = Haskell 28.1% slower
max number tested : 800,000:
14m3.266s/11m6.024s = 843.27s/666.02s = Haskell 26.6% slower
[EDIT : Code for Alan]
This was the code that I had written earlier which does not have recursion and which I had tested on 200,000 :
#include <stdio.h>

bool divisibleRec(int i, int j){
    while(j>0){
        if(j==1){ return false; }
        else if(i%j==0){ return true; }
        else{ j -= 1; }
    }
}

bool divisible(int i){ return divisibleRec(i, i-1); }

int main(void){
    int i, count = 0;
    for(i=2; i<8000000; ++i){
        if(divisible(i)==false){
            count = count+1;
        }
    }
    printf("number of primes = %d\n", count);
    return 0;
}
The results for the C code with and without recursion are as follows (for 800,000) :
With recursion : 11m6.024s
Without recursion : 11m5.328s
Note that the executable seems to take up 60kb (as seen in System monitor) irrespective of the maximum number, and therefore I suspect that the compiler is detecting this recursion.
This isn't really answering your question, but rather what you asked in a comment regarding growing memory usage when the number 200000 grows.
When that number grows, so does the list r. Your code needs all of r at the very end, to compute its length. The C code, on the other hand, just increments a counter. You'll have to do something similar in Haskell too if you want constant memory usage. The code will still be very Haskelly, and in general it's a sensible proposition: you don't really need the list of numbers for which divisible is False, you just need to know how many there are.
You can try with
import Data.List (foldl')

main :: IO ()
main = print $ foldl' (\s x -> if divisible x then s else s+1) 0 [2..200000]
(foldl' is a stricter foldl from Data.List that avoids thunks being built up.)
Well, bang patterns give you a very small win (as does LLVM, but you seem to have expected that):
{-# LANGUAGE BangPatterns #-}

divisibleRec !i !j | j == 1 = False
And on my x86-64 I get a very big win by switching to smaller representations, such as Word32:
import Data.Word (Word32)

divisibleRec :: Word32 -> Word32 -> Bool
...
divisible :: Word32 -> Bool
My timings:
$ time ./so -- Int
2262
real 0m2.332s
$ time ./so -- Word32
2262
real 0m1.424s
This is a closer match to your C program, which is only using int. It still doesn't match performance-wise; I suspect we'd have to look at the Core to figure out why.
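(For reference, GHC can print the optimised Core it generates, which is the usual way to chase down this kind of gap, for example:

$ ghc -O3 -ddump-simpl -fforce-recomp Forum.hs > Forum.core

-fforce-recomp just makes sure the dump is produced even when the module is already compiled.)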
EDIT: and the memory use, as was already noted, is due to the named list r. I just inlined r, made it output a 1 for each non-divisible value, and took the sum:
main = print $ sum $ [ 1 | x <- [2..800000], not (divisible x) ]
Another way to write down your algorithm is
main = print $ length [() | x <- [2..200000], and [rem x d > 0 | d <- [x-1,x-2..2]]]
Unfortunately, it runs slower. Using all ((>0).rem x) [x-1,x-2..2] as a test, it runs slower still. But maybe you'd test it on your setup nevertheless.
Replacing your code with an explicit loop with bang patterns made no difference whatsoever:
{-# OPTIONS_GHC -XBangPatterns #-}

r4 :: Int -> Int
r4 n = go 0 2 where
  go !c i | i > n = c
          | True  = go (if not (divisible i) then c+1 else c) (i+1)

divisibleRec :: Int -> Int -> Bool
divisibleRec i !j | j == 1         = False
                  | i `rem` j == 0 = True
                  | otherwise      = divisibleRec i (j-1)
When I started programming in Haskell I was also impressed by its speed. You may be interested in reading point 5, "The speed of Haskell", of this article.
