I have basically dug through all of the Julia documentation, but I cannot find any answers to this. My question can be split into two parts. The code snippets below omit boilerplate such as basic initialization.
Part 1: How to pass basic complex types without jl_eval_string()
Suppose I have a C/C++ program that calls a Julia script containing a function f that does some String manipulation. In the C source:
char* parameter_string; // Initialized as something.
jl_module_t *m = (jl_module_t *) jl_load("Script.jl");
jl_function_t *f = jl_get_function(m, "f");
jl_value_t *ret = jl_call1(f, /* ??? */);   // <--- Problem
Now, notice that the manual only describes how to box up primitives like int, float, and double; there is nothing about complex types like String. Yes, I can use jl_eval_string(parameter_string), but I would rather not. Moreover, ret will be a String, and I have no idea how to extract it on the C side; it is undocumented.
Part 2:
Suppose I have a C/C++ program which calls some Julia scripts, in which a state machine is stepped. To create a state machine, I create some types:
abstract State
type Idle <: State end
type State1 <: State end
type State2 <: State end
And then a transition function:
function transition(s :: State, input :: String) # input :: String is arbitrary
.. Do Something ..
return newState
end
Now, if I want to create a State, say Idle, in C... I cannot find anything like this, let alone a way to retrieve the result from Julia.
I am approaching this problem more or less as I would in a functional programming language such as Haskell, Scala, or F#. Algebraic data types might not be well supported here, but I think they are still better than hard-coding states as integers.
The real problem is that I cannot find any documentation of Julia's C API short of digging directly into its source code.
You can convert a C string to a Julia String using jl_cstr_to_string(char*).
To get the data from a Julia String, use jl_string_ptr(jl_value_t*).
Constructors are called just like functions, so to call a constructor you can use jl_get_function(m, "Idle") and call it as normal. Or, to allocate an object directly (bypassing any constructors that might be defined, so technically a bit dangerous), you can call jl_new_struct(type, fields...).
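Putting those pieces together, a rough (untested) sketch of the C side might look like this. It assumes Script.jl defines f(s::String), Idle, and transition in Main, and it omits error handling such as checking jl_exception_occurred():

#include <stdio.h>
#include <julia.h>

int main(void)
{
    jl_init();                                   /* older Julia versions take a path argument here */
    jl_eval_string("include(\"Script.jl\")");    /* load the script into Main */

    /* Part 1: pass a C string in, get a Julia String back */
    jl_function_t *f   = jl_get_function(jl_main_module, "f");
    jl_value_t    *arg = jl_cstr_to_string("some input");   /* char* -> Julia String */
    jl_value_t    *ret = jl_call1(f, arg);
    printf("%s\n", jl_string_ptr(ret));                     /* Julia String -> char* */

    /* Part 2: a constructor is just a function, so Idle() is called the same way */
    jl_function_t *idle_ctor  = jl_get_function(jl_main_module, "Idle");
    jl_value_t    *state      = jl_call0(idle_ctor);
    jl_function_t *transition = jl_get_function(jl_main_module, "transition");
    jl_value_t    *next_state = jl_call2(transition, state, jl_cstr_to_string("some input"));
    (void)next_state;

    jl_atexit_hook(0);
    return 0;
}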
I've been messing around with SDL2 in C and was wondering how to abstract code away without using too many function parameters. For example, in a normal gameplay loop there is usually an input, update, render cycle. Ideally, I would like this to be as abstracted as possible, so I could have functions called "input", "update", and "render" in my loop. How could I do this in C without having those functions take a ludicrous number of parameters? I know that C++ kind of solves this issue through classes, but I am curious and want to know how to do this in a procedural programming setting.
So far, I can't really think of any way to fix this. I tried looking it up online but only got results for C++ classes. As mentioned before, I want to stick to C because that is what I am comfortable with right now and would prefer to use.
If you have complex state to transport between calls, put it in a struct and pass a pointer to that struct as the sole argument to your functions, or at least as the first of very few.
That is a very common design pattern in C code.
void inputstep(struct state_t* systemstate);
void updatestep(struct state_t* systemstate);
void renderstep(struct state_t* systemstate, struct opengl_context_t* oglctx);
Note also that this has exactly the same overhead as a C++ class with methods, if not slightly more (because there is less safety around the pointers).
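For instance, a bare-bones sketch of that pattern (the struct fields and function bodies here are made up purely for illustration):

/* All mutable game state lives in one place; the fields are illustrative. */
struct state_t {
    int player_x, player_y;
    int running;                        /* set to 0 to leave the loop */
};

void inputstep(struct state_t *s)  { s->running = 0; /* real code would poll SDL events here */ }
void updatestep(struct state_t *s) { s->player_x += 1; /* game logic */ }
void renderstep(struct state_t *s) { (void)s; /* draw using the values in *s */ }

void game_loop(void)
{
    struct state_t state = { .player_x = 0, .player_y = 0, .running = 1 };
    while (state.running) {
        inputstep(&state);
        updatestep(&state);
        renderstep(&state);
    }
}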
"... this in a functional programming setting."
Well, C is about as far as you get from a purely functional language, so functional programming paradigms only awkwardly translate. Are you sure you didn't mean "procedural"?
In a functional programming mindset, the state you pass into a function would be immutable or discarded after the function, and the function would return a new state; something like
struct mystate_t* mystate;
...
while (1) {
    mystate = inputfunc(mystate);
    mystate = updatefunc(mystate);
    …
}
Only that in a functional setting, you wouldn't re-assign to a variable, and wouldn't have a while loop like that. Essentially, you wouldn't write C.
Does .Net 5 have something like Array.Fill(myArray, 1), but with a function?
I mean Array.Fill(myArray, myRandom.Next(10)) or another function. Actually, that code compiles, but it evaluates the function once and fills the whole array with that single result.
My goal can be achieved with myArray = Array.ConvertAll(Of Integer, Integer)(myArray, Function(i) myRandom.Next(10)), but it looks a little strange and also wastes CPU time on a redundant array allocation.
Of course, it can simply be done with a loop, but something inline fits the style of the current project better.
Samples are provided in VB.Net, but of course the question is about .Net, not any particular programming language.
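For reference, the loop mentioned above can be wrapped once as a small helper so each call site stays a one-liner; this is only a sketch, not a built-in .Net API:

Imports System

Module ArrayHelpers
    ' Hypothetical helper: evaluates the factory once per element,
    ' unlike Array.Fill, which reuses a single precomputed value.
    Sub FillWith(Of T)(arr As T(), factory As Func(Of Integer, T))
        For i As Integer = 0 To arr.Length - 1
            arr(i) = factory(i)
        Next
    End Sub
End Module

' Usage: ArrayHelpers.FillWith(myArray, Function(i) myRandom.Next(10))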
I have a function that uses diesel to get an object from a DB based off the given ID:
fn get_db_site(pool: web::Data<Pool>, site_id: u32) -> Result<Site, diesel::result::Error> {
    let conn = pool.get().unwrap();
    dsl::site.find(site_id).get_result::<Site>(&conn)
}
This function is going to be exactly the same for every table I want to run it on, so I'm hoping to put it in its own utils file so I don't have to type the same thing every time. The only problem is that in order to call that find I need to write
crate::schema::site::dsl::site.find, and I'm not sure how to make that call generic over any table type. I know there are type arguments, but I don't think that would work here.
I normally advise against making diesel things more generic, as this leads to really complex trait bounds quite fast. You normally never want to do this in application code. (It's a different story for libraries that need to be generic.) I compare the situation with plain SQL: if someone complains that users::table.find(pk) feels like duplication, ask yourself whether SELECT … FROM users is "duplicated" in the corresponding query SELECT … FROM users WHERE id = $1. (The diesel DSL statement is basically the same thing.)
So to answer your actual question, the generic functions needs to look something like this:
(Not sure if I got all bounds right without testing)
fn get_db_thing<T, U, PK>(pool: web::Data<Pool>, primary_key: PK) -> Result<U, diesel::result::Error>
where
    T: Table + HasTable<Table = T>,
    T: FindDsl<PK>,
    U: Queryable<SqlTypeOf<Find<T, PK>>, Pg>,
{
    let conn = pool.get().unwrap();
    T::table().find(primary_key).get_result::<U>(&conn)
}
As you can see, the list of trait bounds is already much longer than just having the load inline in the corresponding functions. In addition, all the details added while constructing the query would now have to be provided as generic arguments. At the very least, the type for T cannot be inferred by the compiler, so from a code-size point of view this solution is not "simpler" than just not making it generic.
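If you do go down that road anyway, a hypothetical (untested) call site would have to spell out the generated table type explicitly, along these lines:

use crate::schema::site;

// `site::table` is the unit struct generated by diesel's `table!` macro;
// it must be named explicitly because the compiler cannot infer `T`
// from the function arguments alone.
fn load_site(pool: web::Data<Pool>, id: i32) -> Result<Site, diesel::result::Error> {
    get_db_thing::<site::table, Site, _>(pool, id)
}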
While reading about Julia on http://learnxinyminutes.com/docs/julia/ I came across this:
# You can define functions that take a variable number of
# positional arguments
function varargs(args...)
return args
# use the keyword return to return anywhere in the function
end
# => varargs (generic function with 1 method)
varargs(1,2,3) # => (1,2,3)
# The ... is called a splat.
# We just used it in a function definition.
# It can also be used in a function call,
# where it will splat an Array or Tuple's contents into the argument list.
Set([1,2,3]) # => Set{Array{Int64,1}}([1,2,3]) # produces a Set of Arrays
Set([1,2,3]...) # => Set{Int64}(1,2,3) # this is equivalent to Set(1,2,3)
x = (1,2,3) # => (1,2,3)
Set(x) # => Set{(Int64,Int64,Int64)}((1,2,3)) # a Set of Tuples
Set(x...) # => Set{Int64}(2,3,1)
Which I'm sure is a perfectly good explanation, however I fail to grasp the main idea/benefits.
From what I understand so far:
Using a splat in a function definition allows us to specify that we have no clue how many input arguments the function will be given; it could be 1, it could be 1000. I don't really see the benefit of this, but at least I understand (I hope) the concept.
Using a splat as an input argument to a function does... what, exactly? And why would I use it? If I had to pass an array's contents into the argument list, I would use this syntax instead: some_array(:,:) (for 3D arrays I would use some_array(:,:,:), etc.).
I think part of the reason I don't understand this is that I'm struggling with the definition of tuples and arrays. Are tuples and arrays data types (like Int64 is a data type) in Julia? Or are they data structures, and what is a data structure? When I hear "array" I typically think of a 2D matrix, which is perhaps not the best way to imagine arrays in a programming context.
I realize that you could probably write entire books about what a data structure is, and I could certainly Google it, but I find that people with a profound understanding of a subject are able to explain it in a much more succinct (and perhaps simplified) way than, let's say, the Wikipedia article could, which is why I'm asking you guys (and girls).
You seem like you get the mechanism and how/what they do but are struggling with what you would use it for. I get that.
I find them useful for things where I need to pass an unknown number of arguments and don't want to have to bother constructing an array first before passing it in when working with the function interactively.
for instance:
function geturls(urls::Vector)
    # some code to retrieve the URLs from the network
end
geturls(urls...) = geturls([urls...])
# slightly nicer to type than building up an array first then passing it in.
geturls("http://google.com", "http://facebook.com")
# when we already have a vector we can pass that in as well since julia has method dispatch
geturls(urlvector)
So a few things to note. Splats allow you to turn an iterable into an array and vice versa. See the [urls...] bit above? Julia turns that into a Vector with the urls tuple expanded, which in my experience turns out to be much more useful than the argument splatting itself.
This is just one example of where they've proved useful to me. As you use Julia you'll run across more.
They're mostly there to aid in designing APIs that feel natural to use.
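To make the tuple/array round trip mentioned above concrete, here is a tiny sketch (results shown as comments):

t = (1, 2, 3)            # a Tuple
v = [t...]               # splatting the tuple builds an Array: [1, 2, 3]

f(a, b, c) = a + b + c
f(v...)                  # splatting the array passes three separate arguments => 6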
I want to get an array of bytes (Array[Byte]) from somewhere (read from a file, from a socket, etc.) and then provide an efficient way to pull values out of it (e.g. a function to extract a 32-bit integer from offset N in the array). I would then like to wrap the byte array (hiding it), providing functions to pull values out of the array (probably using a lazy val for each value to pull out).
I would imagine having a wrapping class that takes an immutable byte array type in its constructor, to guarantee the array contents are never modified. IndexedSeq[Byte] seemed relevant, but I could not work out how to go from Array[Byte] to IndexedSeq[Byte].
Part 2 of the question is: if I used IndexedSeq[Byte], would the resulting code be any slower? I need the code to execute as fast as possible, so I would stick with Array[Byte] if the compiler could do a better job with it.
I could write a wrapper class around the array, but that would slow things down - one extra level of indirection for each access to bytes in the array. Performance is critical due to the number of array accesses that will be required. I need fast code, but would like to write the code nicely at the same time. Thanks!
PS: I am a Scala newbie.
Treating Array[T] as an IndexedSeq[T] could hardly be simpler:
Array(1: Byte): IndexedSeq[Byte]   // triggers the implicit view
wrapByteArray(Array(1: Byte))      // or call the conversion explicitly
Unboxing will kill you long before an extra layer of indirection.
C:\>scala -Xprint:erasure -e "{val a = Array(1: Byte); val b1: Byte = a(0); val b2 = (a: IndexedSeq[Byte])(0)}"
[[syntax trees at end of erasure]] // Scala source: scalacmd5680604016099242427.scala
    val a: Array[Byte] = scala.Array.apply((1: Byte), scala.this.Predef.wrapByteArray(Array[Byte]{}));
    val b1: Byte = a.apply(0);
    val b2: Byte = scala.Byte.unbox((scala.this.Predef.wrapByteArray(a): IndexedSeq).apply(0));
To avoid this, the Scala collections library should be specialized on the element type, in the same style as Tuple1 and Tuple2. I'm told this is planned, but it's a bit more involved than simply slapping #specialized everywhere, so I don't know how long it will take.
UPDATE
Yes, WrappedArray is mutable, although collection.IndexedSeq[Byte] doesn't have methods to mutate it, so you could just trust clients not to cast to a mutable interface. The next release of Scalaz will include ImmutableArray, which prevents this.
The boxing comes from retrieving an element from the collection via this generic method:
trait SeqLike[+A, +Repr] extends IterableLike[A, Repr] { self =>
  def apply(idx: Int): A
}
At the JVM level, this signature is type-erased to:
def apply(idx: Int): Object
If your collection contains primitives, that is, subtypes of AnyVal, they must be boxed in the corresponding wrapper to be returned from this method. For some applications, this is a major performance concern. Entire libraries have been written in Java to avoid this, notably fastutil.
Annotation directed specialization was added to Scala 2.8 to instruct the compiler to generate various versions of a class or method tailored to the permutations of primitive types. This has been applied to a few places in the standard library already, e.g. TupleN, ProductN, Function{0, 1, 2}. If this was also applied to the collections hierarchy, this performance cost could be alleviated.
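As a rough illustration of what the annotation does (a toy class, not how the real collections are declared):

object SpecializedSketch {
  // The compiler emits extra variants of Cell for Byte and Int whose
  // `apply` returns the raw primitive instead of a boxed java.lang value.
  class Cell[@specialized(Byte, Int) A](value: A) {
    def apply(): A = value
  }

  val bytes = new Cell[Byte](1: Byte)
  val b: Byte = bytes()   // no Byte.box/Byte.unbox for the specialized variant
}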
If you want to work with sequences in Scala, I recommend you choose one of these:
Immutable seqs:
(linked seqs) List, Stream, Queue
(indexed seqs) Vector
Mutable seqs:
(linked seq) ListBuffer
(indexed seq) ArrayBuffer
The new (2.8) Scala collections have been hard for me to grasp, primarily due to a shortage of (correct) documentation but also because of the source code (complex hierarchies). To clear my mind I made this picture to visualize the basic structure:
[Diagram of the Scala 2.8 collections hierarchy] (source: programmera.net)
Also, note that Array is not part of the tree structure, it is a special case, since it wraps the Java array (which is a special case in Java).