How to get a working mutable reference to a subset of an array?

This works as expected for reading:
fn u64_from_low_eight(buf: &[u8; 9]) -> u64 {
let bytes: &[u8; size_of::<u64>()] = buf[..size_of::<u64>()].try_into().unwrap();
u64::from_le_bytes(*bytes)
}
(It nicely optimises into a single assembly instruction on AArch64 and x86_64.)
I had expected something similar to work for an unaligned write of a u64 to a buffer.
/// Encodes a u64 into 1-9 bytes and returns the number of bytes updated.
pub fn encode(value: u64, buf: &mut [u8; 9]) -> usize {
let low64: &mut [u8; size_of::<u64>()] = &mut buf[..(size_of::<u64>())].try_into().unwrap();
match value {
// FIXME: Change to exclusive ranges once the feature's stabilised.
OFFSET0..=OFFSET1_LESS_ONE => {
let num = inner_encode::<1>(value, low64);
#[cfg(test)] eprintln!("low64: {low64:?}");
#[cfg(test)] eprintln!("buf: {buf:?}");
num
},
low64 (above) appears not to be a mutable reference into the first eight bytes of buf. (Perhaps it is pointing at a copy?)
i.e. low64 and the first eight bytes of buf are different in the example above.
What can I use instead of let low64: &mut [u8; size_of::<u64>()] = &mut buf[..(size_of::<u64>())].try_into().unwrap(); to get a &mut [u8; 8] which points at the first eight bytes of a &mut [u8; 9]?
(My intention is that this should also optimise into a single unaligned write on AArch64 and x86_64.)
Update: Here's the problem on the playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=294a9dd9ba1400eceb2d945a10028a6b
use std::mem::size_of;
fn u64_to_low_eight(value: u64, buf: &mut [u8; 9]) {
let low64: &mut [u8; size_of::<u64>()] = &mut buf[..size_of::<u64>()].try_into().unwrap();
*low64 = u64::to_le_bytes(value);
dbg!(low64);
}
fn main() {
let mut src: [u8; 9] = [1, 2, 3, 4, 5, 6, 7, 8, 9];
u64_to_low_eight(0x0A_0B_0C_0D_0E_0F_10_11, &mut src);
dbg!(src);
}
and the output, where I want src to change too:
Compiling playground v0.0.1 (/playground)
Finished dev [unoptimized + debuginfo] target(s) in 0.62s
Running `target/debug/playground`
[src/main.rs:6] low64 = [
17,
16,
15,
14,
13,
12,
11,
10,
]
[src/main.rs:12] src = [
1,
2,
3,
4,
5,
6,
7,
8,
9,
]

You need to wrap &mut buf[..size_of::<u64>()] in parentheses. &mut is a low-priority operator, so everything to the right of it is evaluated first, making your code equivalent to:
// takes a mutable reference to those 8 copied bytes
let low64: &mut [u8; size_of::<u64>()] = &mut (
// copies 8 bytes out of `buf`
buf[..size_of::<u64>()].try_into().unwrap()
);
So you need to do the following instead:
let low64: &mut [u8; size_of::<u64>()] = (&mut buf[..size_of::<u64>()]).try_into().unwrap();
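With that change, low64 borrows directly into buf, so the write in the playground example shows up in src as well. For completeness, here is a sketch of the fixed function; the second variant uses the slice method first_chunk_mut, which is only available on sufficiently recent toolchains but avoids try_into entirely:
use std::mem::size_of;

fn u64_to_low_eight(value: u64, buf: &mut [u8; 9]) {
    // Parenthesising the borrow makes `try_into` run on the `&mut [u8]`
    // slice, so `low64` really points into `buf`.
    let low64: &mut [u8; size_of::<u64>()] =
        (&mut buf[..size_of::<u64>()]).try_into().unwrap();
    *low64 = u64::to_le_bytes(value);
}

// Alternative on newer toolchains: `first_chunk_mut` returns
// `Option<&mut [u8; N]>` pointing at the first N bytes of the slice.
fn u64_to_low_eight_alt(value: u64, buf: &mut [u8; 9]) {
    let low64: &mut [u8; 8] = buf.first_chunk_mut().unwrap();
    *low64 = u64::to_le_bytes(value);
}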

Related

Splitting owned array into owned halves

I would like to divide a single owned array into two owned halves—two separate arrays, not slices of the original array. The respective sizes are compile time constants. Is there a way to do that without copying/cloning the elements?
let array: [u8; 4] = [0, 1, 2, 3];
let chunk_0: [u8; 2] = ???;
let chunk_1: [u8; 2] = ???;
assert_eq!(
[0, 1],
chunk_0
);
assert_eq!(
[2, 3],
chunk_1
);
Since it would amount to merely moving ownership of the elements, I have a hunch there should be a zero-cost abstraction for this. I wonder if I could do something like this with some clever use of transmute and forget. But there are a lot of scary warnings in the docs for those functions.
My main motivation is to operate on large arrays in memory without as many mem copies. For example:
let raw = [0u8; 1024 * 1024];
let a = u128::from_be_array(???); // Take the first 16 bytes
let b = u64::from_le_array(???); // Take the next 8 bytes
let c = ...
The only way I know to accomplish patterns like the above is with lots of mem copying which is redundant.
You can use std::mem::transmute (warning: unsafe!):
fn main() {
let array: [u8; 4] = [0, 1, 2, 3];
let [chunk_0, chunk_1]: [[u8; 2]; 2] =
unsafe { std::mem::transmute::<[u8; 4], [[u8; 2]; 2]>(array) };
assert_eq!([0, 1], chunk_0);
assert_eq!([2, 3], chunk_1);
}
Playground
use std::convert::TryInto;
let raw = [0u8; 1024 * 1024];
let a = u128::from_be_bytes(raw[..16].try_into().unwrap()); // Take the first 16 bytes
let b = u64::from_le_bytes(raw[16..24].try_into().unwrap()); // Take the next 8 bytes
In practice, I've found the compiler is pretty smart about optimizing this. With optimizations, it will do the above in a single copy (directly into the register that holds a or b, respectively). As an example, according to godbolt, this:
use std::convert::TryInto;
pub fn cvt(bytes: [u8; 24]) -> (u128, u64) {
let a = u128::from_be_bytes(bytes[..16].try_into().unwrap()); // Take the first 16 bytes
let b = u64::from_le_bytes(bytes[16..24].try_into().unwrap()); // Take the next 8 bytes
(a, b)
}
with -C opt-level=3 compiles into:
example::cvt:
mov rax, qword ptr [rdi + 8]
bswap rax
mov rdx, qword ptr [rdi]
bswap rdx
mov rcx, qword ptr [rdi + 16]
ret
It has optimized out any extra copies, the call to the try_into method, the possible panic, et cetera.
The bytemuck library provides a safe wrapper for re-interpretation of any data type that is “plain old data” (more precisely: all possible byte sequences of the right size are valid values), as long as the input and output are the same size (or the input is a slice whose byte-length is divisible by the output type's size). This is equivalent to a transmute solution but without needing to write any new unsafe code.
let array: [u8; 4] = [0, 1, 2, 3];
let [chunk_0, chunk_1]: [[u8; 2]; 2] = bytemuck::cast(array);
If you'd like to avoid using additional libraries, I recommend the try_into() approach that's already been posted.
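For reference, that approach looks roughly like this for the owned-halves case; the halves are built by copying the elements, which for small Copy types such as u8 the optimiser typically reduces to plain moves:
use std::convert::TryInto;

fn main() {
    let array: [u8; 4] = [0, 1, 2, 3];
    // `TryFrom<&[T]> for [T; N]` requires `T: Copy` and checks the length.
    let chunk_0: [u8; 2] = array[..2].try_into().unwrap();
    let chunk_1: [u8; 2] = array[2..].try_into().unwrap();
    assert_eq!([0, 1], chunk_0);
    assert_eq!([2, 3], chunk_1);
}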

How to cast an [u8] array larger than 8 bytes to an integer? [closed]

Because of the length of the array, I can't use i32::from_ne_bytes(), but of course the following works, especially since the code will only run on a CPU architecture that supports unaligned access (or because, given the small length, the whole array might get stored across several CPU registers).
fn main() {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
println!("1 == {}", unsafe{std::ptr::read(&buf[1])} as i32);
}
But is there a cleaner way to do it while still not copying the array?
Extract a 4-byte &[u8] slice and use try_into() to convert it into a &[u8; 4] array reference. Then you can call i32::from_ne_bytes().
use std::convert::TryInto;
fn main() {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
println!("{}", i32::from_ne_bytes((&buf[1..5]).try_into().unwrap()));
}
Output:
302055424
Playground
TL;DR: Realistically, just use John Kugelman's solution; copying 4 bytes is not measurable.
The biggest "measured" difference is 0.09 ps (239.79 - 239.70). That's 90 femtoseconds, or 0.00009 nanoseconds. Running the benchmark again will yield wildly different results (in the picosecond range).
Measuring something as small as copying 4 bytes is not realistic. We're so far below nanoseconds that this is pure noise.
test          #[bench]   criterion
try_into      0 ns       239.79 ps
reinterpret   0 ns       239.70 ps
bit unpack    0 ns       239.74 ps
b.iter(|| 1)             240.18 ps
b.iter(|| 1)             239.73 ps
b.iter(|| 1)             239.68 ps
For fun, change all the tests to b.iter(|| 1), and you'll receive similar results fluctuating in picoseconds.
The biggest difference among the b.iter(|| 1) tests is 0.5 ps (240.18 - 239.68). That's 500 femtoseconds, or 0.0005 nanoseconds.
That's a bigger difference than when we did "actual" "work". This is pure noise.
You're talking about copying 4 bytes. That isn't going to be measurable, even if "every µs matters"; it won't show up in microseconds, and not in nanoseconds either.
(I'll avoid reiterating what's already been said in the comments.)
If you don't want to use TryInto, then you can use some good old bit unpacking and bit shifting. (Out of bounds access will cause a panic.)
let i = (buf[1] as i32) |
(buf[2] as i32) << 8 |
(buf[3] as i32) << 16 |
(buf[4] as i32) << 24;
println!("{}", i);
// Prints `302055424`
Alternatively, you can also reinterpret buf as a *const i32 pointer and dereference it. However, dereferencing a raw pointer is unsafe, and an out-of-bounds access here is undefined behaviour rather than a panic.
// let i = unsafe { &*((buf.as_ptr().add(1)) as *const i32) };
let i = unsafe { &*((buf.as_ptr().offset(1)) as *const i32) };
println!("{:?}", i);
// Prints `302055424`
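Note that the *const i32 produced above is not 4-byte aligned, so strictly speaking the dereference is undefined behaviour even on hardware that tolerates unaligned loads; std::ptr::read_unaligned expresses the same four-byte load without the alignment requirement (it is still unsafe, and the caller must keep the bytes in bounds). A small sketch:
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
// Reads 4 bytes starting at buf[1] without requiring i32 alignment.
let i = unsafe { std::ptr::read_unaligned(buf.as_ptr().add(1) as *const i32) };
println!("{}", i);
// Prints `302055424` on a little-endian target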
So you want the best performing solution for copying 4 bytes. Alright, let's take John Kugelman's solution and the previous 2 and benchmark them.
// benches/bench.rs
#![feature(test)]
extern crate test;
use test::Bencher;
use std::convert::TryInto;
#[bench]
fn bench_try_into(b: &mut Bencher) {
b.iter(|| {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
i32::from_ne_bytes((&buf[1..5]).try_into().unwrap())
});
}
#[bench]
fn bench_reinterpret(b: &mut Bencher) {
b.iter(|| {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
unsafe { &*((buf.as_ptr().offset(1)) as *const i32) }
});
}
#[bench]
fn bench_bit_unpack(b: &mut Bencher) {
b.iter(|| {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
(buf[1] as i32) | (buf[2] as i32) << 8 | (buf[3] as i32) << 16 | (buf[4] as i32) << 24
});
}
Now let's benchmark by executing cargo +nightly bench.
running 3 tests
test bench_bit_unpack ... bench: 0 ns/iter (+/- 0)
test bench_reinterpret ... bench: 0 ns/iter (+/- 0)
test bench_try_into ... bench: 0 ns/iter (+/- 0)
Like I presumed, copying 4 bytes isn't going to be measurable.
Now, let's try and benchmark with criterion. Maybe the test crate is (being realistic and) limited to nanoseconds, who knows.
// benches/bench.rs
use criterion::{criterion_group, criterion_main, Criterion};
use std::convert::TryInto;
fn criterion_benchmark(c: &mut Criterion) {
c.bench_function("try_into", |b| {
b.iter(|| {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
i32::from_ne_bytes((&buf[1..5]).try_into().unwrap())
})
});
c.bench_function("reinterpret", |b| {
b.iter(|| {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
unsafe { &*((buf.as_ptr().offset(1)) as *const i32) }
})
});
c.bench_function("bit_unpack", |b| {
b.iter(|| {
let buf: [u8; 10] = [0, 0, 0, 1, 0x12, 14, 50, 120, 250, 6];
(buf[1] as i32) | (buf[2] as i32) << 8 | (buf[3] as i32) << 16 | (buf[4] as i32) << 24
})
});
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
# Cargo.toml
[dev-dependencies]
criterion = "0.3.3"
[[bench]]
name = "bench"
harness = false
Now, let's benchmark by executing cargo bench.
try_into time: [239.69 ps 239.79 ps 239.91 ps]
change: [+0.0101% +0.0700% +0.1316%] (p = 0.02 < 0.05)
Change within noise threshold.
Found 14 outliers among 100 measurements (14.00%)
3 (3.00%) low mild
4 (4.00%) high mild
7 (7.00%) high severe
reinterpret time: [239.63 ps 239.70 ps 239.78 ps]
change: [-0.7006% -0.2163% +0.0525%] (p = 0.45 > 0.05)
No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
bit_unpack time: [239.65 ps 239.74 ps 239.84 ps]
change: [-0.0768% +0.0775% +0.2867%] (p = 0.45 > 0.05)
No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
1 (1.00%) low mild
3 (3.00%) high mild
8 (8.00%) high severe
test          #[bench]   criterion
try_into      0 ns       239.79 ps
reinterpret   0 ns       239.70 ps
bit unpack    0 ns       239.74 ps
So the mean measurements are 239.79 ps, 239.70 ps, and 239.74 ps, and the biggest "measured" difference is 0.09 ps. That's 90 femtoseconds, or 0.00009 nanoseconds. Running the benchmark again will yield different results; measuring something as small as copying 4 bytes in isolation is not realistic.
Sure, in that instant "reinterpret" was the "fastest", but we're so far below nanoseconds that this is pure noise.
Use whichever solution you prefer; there isn't any measurable or significant performance difference between them.
For fun, change all the tests to b.iter(|| 1), and you'll receive similar results fluctuating in picoseconds.
c.bench_function("1", |b| b.iter(|| 1_i32));
c.bench_function("2", |b| b.iter(|| 1_i32));
c.bench_function("3", |b| b.iter(|| 1_i32));
Running this benchmark yields similar numbers. I ran it once and got 240.18 ps, 239.73 ps, and 239.68 ps: a "measured" difference of 0.5 ps, which is 500 femtoseconds, or 0.0005 nanoseconds.
That's a bigger difference than when we did "actual" "work". Again, this is pure noise; this isn't enough "work" to be measurable in any significant way.
Again, use whichever solution you prefer; there isn't any measurable or significant performance difference between them.

What is the idiomatic way of looping through the bytes of an integer number in Rust? [duplicate]

This question already has answers here: Converting number primitives (i32, f64, etc) to byte representations (5 answers). Closed 4 years ago.
I tried such a piece of code to loop through the bytes of a u64:
let mut message: u64 = 0x1234123412341234;
let msg = &message as *mut u8;
for b in 0..8 {
// ...some work...
}
Unfortunately, Rust doesn't allow such C-like indexing.
While transmuting is possible (see Tim's answer), it is better to use the byteorder crate to guarantee endianness:
extern crate byteorder;
use byteorder::ByteOrder;
fn main() {
let message = 0x1234123412341234u64;
let mut buf = [0; 8];
byteorder::LittleEndian::write_u64(&mut buf, message);
for b in &buf {
// 34, 12, 34, 12, 34, 12, 34, 12,
print!("{:X}, ", b);
}
println!("");
byteorder::BigEndian::write_u64(&mut buf, message);
for b in &buf {
// 12, 34, 12, 34, 12, 34, 12, 34,
print!("{:X}, ", b);
}
}
(Permalink to the playground)
It's safe to transmute u64 into an array [u8; 8]:
use std::mem;
let message_arr: [u8; 8] = unsafe { mem::transmute(message) };
for b in &message_arr {
println!("{}", b)
}
See this in action on the playground.
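For reference, since Rust 1.32 the integer types have to_le_bytes / to_be_bytes / to_ne_bytes (and the matching from_* constructors), which cover this without an external crate or unsafe; a minimal sketch:
fn main() {
    let message: u64 = 0x1234123412341234;
    let bytes = message.to_le_bytes();
    for b in &bytes {
        // 34, 12, 34, 12, 34, 12, 34, 12,
        print!("{:X}, ", b);
    }
    println!();
}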

Using assert_eq or printing large fixed sized arrays doesn't work

I have written some tests where I need to assert that two arrays are equal. Some arrays are [u8; 48] while others are [u8; 188]:
#[test]
fn mul() {
let mut t1: [u8; 48] = [0; 48];
let t2: [u8; 48] = [0; 48];
// some computation goes here.
assert_eq!(t1, t2, "\nExpected\n{:?}\nfound\n{:?}", t2, t1);
}
I get multiple errors here:
error[E0369]: binary operation `==` cannot be applied to type `[u8; 48]`
--> src/main.rs:8:5
|
8 | assert_eq!(t1, t2, "\nExpected\n{:?}\nfound\n{:?}", t2, t1);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: an implementation of `std::cmp::PartialEq` might be missing for `[u8; 48]`
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
error[E0277]: the trait bound `[u8; 48]: std::fmt::Debug` is not satisfied
--> src/main.rs:8:57
|
8 | assert_eq!(t1, t2, "\nExpected\n{:?}\nfound\n{:?}", t2, t1);
| ^^ `[u8; 48]` cannot be formatted using `:?`; if it is defined in your crate, add `#[derive(Debug)]` or manually implement it
|
= help: the trait `std::fmt::Debug` is not implemented for `[u8; 48]`
= note: required by `std::fmt::Debug::fmt`
Trying to print them as slices like t2[..] or t1[..] doesn't seem to work.
How do I use assert with these arrays and print them?
For the comparison part you can just convert the arrays to iterators and compare elementwise.
assert_eq!(t1.len(), t2.len(), "Arrays don't have the same length");
assert!(t1.iter().zip(t2.iter()).all(|(a,b)| a == b), "Arrays are not equal");
With Iterator::eq, it is possible to compare anything that can be turned into an iterator for equality:
let mut t1: [u8; 48] = [0; 48];
let t2: [u8; 48] = [0; 48];
assert!(t1.iter().eq(t2.iter()));
Using slices
As a workaround, you can just use &t1[..] (instead of t1[..]) to make arrays into slices. You'll have to do this for both comparison and formatting.
assert_eq!(&t1[..], &t2[..], "\nExpected\n{:?}\nfound\n{:?}", &t2[..], &t1[..]);
or
assert_eq!(t1[..], t2[..], "\nExpected\n{:?}\nfound\n{:?}", &t2[..], &t1[..]);
Formatting arrays directly
Ideally, the original code should compile, but it doesn't for now. The reason is that the standard library implements common traits (such as Eq and Debug) for arrays of only up to 32 elements, due to the lack of const generics. (Newer compiler versions implement these traits for arrays of any length, so on current Rust the original code compiles as-is.)
Therefore, you can compare and format shorter arrays like:
let t1: [u8; 32] = [0; 32];
let t2: [u8; 32] = [1; 32];
assert_eq!(t1, t2, "\nExpected\n{:?}\nfound\n{:?}", t2, t1);
You could make Vecs out of them.
fn main() {
let a: [u8; 3] = [0, 1, 2];
let b: [u8; 3] = [2, 3, 4];
let c: [u8; 3] = [0, 1, 2];
let va: Vec<u8> = a.to_vec();
let vb: Vec<u8> = b.to_vec();
let vc: Vec<u8> = c.to_vec();
println!("va==vb {}", va == vb);
println!("va==vc {}", va == vc);
println!("vb==vc {}", vb == vc);
}

Raw pointer turns null passing from Rust to C

I'm attempting to retrieve a raw pointer from one C function in Rust, and use that same raw pointer as an argument in another C function from another library. When I pass the raw pointer, I end up with a NULL pointer on the C side.
I have tried to make a simplified version of my issue, but when I do it works as I would expect it to -
C Code -
#include <stdio.h>
#include <stdlib.h>

struct MyStruct {
int value;
};
struct MyStruct * get_struct() {
struct MyStruct * priv_struct = (struct MyStruct*) malloc( sizeof(struct MyStruct));
priv_struct->value = 0;
return priv_struct;
}
void put_struct(struct MyStruct *priv_struct) {
printf("Value - %d\n", priv_struct->value);
}
Rust Code -
use std::os::raw::c_int;

#[repr(C)]
struct MyStruct {
value: c_int,
}
extern {
fn get_struct() -> *mut MyStruct;
}
extern {
fn put_struct(priv_struct: *mut MyStruct) -> ();
}
fn rust_get_struct() -> *mut MyStruct {
let ret = unsafe { get_struct() };
ret
}
fn rust_put_struct(priv_struct: *mut MyStruct) {
unsafe { put_struct(priv_struct) };
}
fn main() {
let main_struct = rust_get_struct();
rust_put_struct(main_struct);
}
When I run this I get the output of Value - 0
~/Dev/rust_test$ sudo ./target/debug/rust_test
Value - 0
~/Dev/rust_test$
However, when trying to do this against a DPDK library, I retrieve and pass a raw pointer in the same way but get a segfault. If I use gdb to debug, I can see that I'm passing a pointer on the Rust side, but I see it as NULL on the C side -
(gdb) frame 0
#0 rte_eth_rx_queue_setup (port_id=0 '\000', rx_queue_id=<optimized out>, nb_rx_desc=<optimized out>, socket_id=0, rx_conf=0x0, mp=0x0)
at /home/kenton/Dev/dpdk-16.07/lib/librte_ether/rte_ethdev.c:1216
1216 if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
(gdb) frame 1
#1 0x000055555568953b in dpdk::ethdev::dpdk_rte_eth_rx_queue_setup (port_id=0 '\000', rx_queue_id=0, nb_tx_desc=128, socket_id=0, rx_conf=None,
mb=0x7fff3fe47640) at /home/kenton/Dev/dpdk_ffi/src/ethdev/mod.rs:32
32 let retc: c_int = unsafe {ffi::rte_eth_rx_queue_setup(port_id as uint8_t,
In frame 1, mb has an address and is being passed. In frame 0 the receiving function in the library is showing it as 0x0 for mp.
My code to receive the pointer -
let mb = dpdk_rte_pktmbuf_pool_create(CString::new("MBUF_POOL").unwrap().as_ptr(),
(8191 * nb_ports) as u32 , 250, 0, 2176, dpdk_rte_socket_id());
This calls into an ffi library -
pub fn dpdk_rte_pktmbuf_pool_create(name: *const c_char,
n: u32,
cache_size: u32,
priv_size: u16,
data_room_size: u16,
socket_id: i32) -> *mut rte_mempool::ffi::RteMempool {
let ret: *mut rte_mempool::ffi::RteMempool = unsafe {
ffi::shim_rte_pktmbuf_pool_create(name,
n as c_uint,
cache_size as c_uint,
priv_size as uint16_t,
data_room_size as uint16_t,
socket_id as c_int)
};
ret
}
ffi -
extern {
pub fn shim_rte_pktmbuf_pool_create(name: *const c_char,
n: c_uint,
cache_size: c_uint,
priv_size: uint16_t,
data_room_size: uint16_t,
socket_id: c_int) -> *mut rte_mempool::ffi::RteMempool;
}
C function -
struct rte_mempool *
rte_pktmbuf_pool_create(const char *name, unsigned n,
unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
int socket_id);
When I pass the pointer, it looks much the same as my simplified version up above. My variable mb contains a raw pointer that I pass to another function -
ret = dpdk_rte_eth_rx_queue_setup(port,q,128,0,None,mb);
ffi library -
pub fn dpdk_rte_eth_rx_queue_setup(port_id: u8,
rx_queue_id: u16,
nb_tx_desc: u16,
socket_id: u32,
rx_conf: Option<*const ffi::RteEthRxConf>,
mb_pool: *mut rte_mempool::ffi::RteMempool ) -> i32 {
let retc: c_int = unsafe {ffi::rte_eth_rx_queue_setup(port_id as uint8_t,
rx_queue_id as uint16_t,
nb_tx_desc as uint16_t,
socket_id as c_uint,
rx_conf,
mb_pool)};
let ret: i32 = retc as i32;
ret
}
ffi -
extern {
pub fn rte_eth_rx_queue_setup(port_id: uint8_t,
rx_queue_id: uint16_t,
nb_tx_desc: uint16_t,
socket_id: c_uint,
rx_conf: Option<*const RteEthRxConf>,
mb: *mut rte_mempool::ffi::RteMempool ) -> c_int;
}
C function -
int
rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp);
I apologize for the length, but I feel like I'm missing something simple and haven't been able to figure it out. I've checked struct alignment for each field that is being passed, and I even see values for the pointer that is received as I'd expect -
(gdb) frame 1
#1 0x000055555568dcf4 in dpdk::ethdev::dpdk_rte_eth_rx_queue_setup (port_id=0 '\000', rx_queue_id=0, nb_tx_desc=128, socket_id=0, rx_conf=None,
mb=0x7fff3fe47640) at /home/kenton/Dev/dpdk_ffi/src/ethdev/mod.rs:32
32 let retc: c_int = unsafe {ffi::rte_eth_rx_queue_setup(port_id as uint8_t,
(gdb) print *mb
$1 = RteMempool = {name = "MBUF_POOL", '\000' <repeats 22 times>, pool_union = PoolUnionStruct = {data = 140734245862912}, pool_config = 0x0,
mz = 0x7ffff7fa4c68, flags = 16, socket_id = 0, size = 8191, cache_size = 250, elt_size = 2304, header_size = 64, trailer_size = 0,
private_data_size = 64, ops_index = 0, local_cache = 0x7fff3fe47700, populated_size = 8191, elt_list = RteMempoolObjhdrList = {
stqh_first = 0x7fff3ebc7f68, stqh_last = 0x7fff3fe46ce8}, nb_mem_chunks = 1, mem_list = RteMempoolMemhdrList = {stqh_first = 0x7fff3ebb7d80,
stqh_last = 0x7fff3ebb7d80}, __align = 0x7fff3fe47700}
Any ideas on why the pointer is turning to NULL on the C side?
CString::new("…").unwrap().as_ptr() does not work. The CString is temporary, so the as_ptr() call returns the inner pointer of that temporary, which will likely be dangling by the time you use it. This is “safe” per Rust's definition of safety as long as you don't use the pointer, but you eventually do so inside an unsafe block. You should bind the string to a variable and call as_ptr on that variable.
This is such a common problem that there is even a proposal to fix the CStr{,ing} API to avoid it.
Additionally, raw pointers are already nullable by themselves, so the Rust FFI equivalent of const struct rte_eth_rxconf * would be *const ffi::RteEthRxConf, not Option<*const ffi::RteEthRxConf>.
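A minimal sketch of the CString fix described above, reusing the question's own wrapper names (dpdk_rte_pktmbuf_pool_create, dpdk_rte_socket_id and nb_ports come from the surrounding code and are assumed here):
use std::ffi::CString;

let pool_name = CString::new("MBUF_POOL").unwrap();
// `pool_name` owns the buffer, so the pointer returned by `as_ptr()` stays
// valid until `pool_name` goes out of scope instead of dangling immediately.
let mb = dpdk_rte_pktmbuf_pool_create(pool_name.as_ptr(),
    (8191 * nb_ports) as u32, 250, 0, 2176, dpdk_rte_socket_id());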
