I'm learning functional programming by using fp-ts lib and noticed that some functions end with K, like:
chainTaskEitherK
chainEitherK
fromTaskK
I also read the explanation of what the K suffix means in the fp-ts documentation, but unfortunately I can't say that one example gives me a solid idea of how to use it in practice.
I would like to know exactly what problem they solve and what the code would look like without them (to see the benefit).
Please, consider that I'm a newbie in this topic.
First (going off of the example in the link you shared): I think a Kleisli arrow is just a name for a type that looks like:
<A>(value: A) => F<A>
where F is some Functor value. They call it a constructor in the doc you linked which might be more precise? My understanding is that it's just a function that takes some non-Functor value (a string in the parse example) and puts it into some Functor.
These helpers you've listed are there for when you already have a Kleisli arrow and you want to use it with some other Functor values. In the example they have
declare function parse(s: string): Either<Error, number>;
and they have an IOEither value that would probably come from user input. They want to combine the two, basically run parse on the input if it's a Right and end up with a function using parse with the signature:
declare function parseForIO(s: string): IOEither<Error, number>;
This is so the return type can be compatible with our input type (so we can use chain on the IOEither to compose our larger function).
fromEitherK is therefore wrapping the base parse function in some logic to naturally transform the resulting plain Either into an IOEither. chainEitherK does that plus a chain, to save some of the boilerplate.
Basically, it's solving a compatibility issue when the return value from your Kleisli arrows doesn't match the value you need when chaining things together.
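To make this concrete, here is a minimal plain-JavaScript sketch of the idea. This is a toy model, not fp-ts's actual implementation: Either is modeled as a tagged object and IOEither as a thunk returning an Either.

```javascript
// Toy model: an Either is a tagged object; an IOEither is a thunk
// (a function of no arguments) that returns an Either when run.
function right(value) { return { _tag: "Right", right: value }; }
function left(error) { return { _tag: "Left", left: error }; }

// parse is a Kleisli arrow: it takes a plain string and returns an Either.
function parse(s) {
  var n = parseFloat(s);
  return isNaN(n) ? left(new Error("not a number")) : right(n);
}

// fromEither lifts a single Either *value* into an IOEither.
function fromEither(either) {
  return function () { return either; };
}

// fromEitherK lifts the whole *function*: its result now lands in IOEither.
function fromEitherK(kleisliArrow) {
  return function (input) { return fromEither(kleisliArrow(input)); };
}

var parseForIO = fromEitherK(parse); // string -> IOEither
parseForIO("42")(); // { _tag: "Right", right: 42 }
```

The K helper touches only the return type: parseForIO takes the same plain string as parse, but its result is now compatible with other IOEither values.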
In addition to Souperman's explanation, I want to share my investigation of this topic.
Let's take the already familiar example from the fp-ts documentation.
We have an input variable of type IOEither
const input: IE.IOEither<Error, string> = IE.right('foo')
and a function which takes a plain string and returns an E.Either
function parse(s: string): E.Either<Error, number> {
// implementation
}
If we want to make this code work together in fp-ts style, we need to introduce pipe. pipe is a function which passes our data through the functions listed inside it.
So, instead of doing this (imperative style)
const input: IE.IOEither<Error, string> = IE.right('foo')
const value = input()
let result: E.Either<Error, number>
if (value._tag === 'Right') {
result = parse(value.right) // where value.right is our 'foo'
}
We can do this
pipe(
input,
IE.chain(inputValue => parse(inputValue))
~~~~~~~~~~~~~~~~~ <- Error is shown
)
Error message
Type 'Either<Error, number>' is not assignable to type 'IOEither<Error, unknown>'.
Unfortunately, fp-ts cannot implicitly jump between types (e.g. from IOEither to Either). In our example, we started with an input variable of type IOEither (shortened to IE) and continued with an IE.chain call whose callback returns an Either value.
To make it work, we can introduce a function which helps us convert between these types.
pipe(
input,
IE.chain(inputValue => IE.fromEitherK(parse)(inputValue))
)
Now our chain function explicitly knows that the result of parse was converted from an Either into an IOEither by fromEitherK.
At this point we can see that fromEitherK is a helper function that expects a Kleisli function as its argument and returns a new function with a new return type.
To make it clearer: we wouldn't need a K-suffixed helper if, for example, we had a plain value parsed (instead of the function parse).
The code would look like
pipe(
input,
IE.chain(inputValue => IE.fromEither(parsed)) // I know this is useless code, but it shows its purpose
)
Returning to our example, we can improve the code to make it more readable.
Instead of this
pipe(
input,
IE.chain(inputValue => IE.fromEitherK(parse)(inputValue))
)
We can do this
pipe(
input,
IE.chain(IE.fromEitherK(parse))
)
And even more
pipe(
input,
IE.chainEitherK(parse)
)
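To see why that last refactoring works, note that chainEitherK(f) behaves like chain(fromEitherK(f)). Here is a toy plain-JavaScript model of that equivalence (not fp-ts internals; IOEither is a thunk returning a tagged Either object):

```javascript
// Toy model: an IOEither is a thunk returning a tagged Either object.
function right(value) { return { _tag: "Right", right: value }; }

// fromEitherK: make a Kleisli arrow return a thunk instead of a bare Either.
function fromEitherK(f) {
  return function (a) { return function () { return f(a); }; };
}

// chain runs the IOEither; on a Right it feeds the value to the next step.
function chain(f) {
  return function (ioEither) {
    return function () {
      var e = ioEither();
      return e._tag === "Right" ? f(e.right)() : e;
    };
  };
}

// chainEitherK is nothing more than the composition of the two helpers.
function chainEitherK(f) {
  return chain(fromEitherK(f));
}

var input = function () { return right(3); };      // IOEither holding 3
var parse = function (n) { return right(n * 2); }; // Kleisli arrow
chainEitherK(parse)(input)(); // { _tag: "Right", right: 6 }
```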
Summary
As far as I understand, Kleisli arrows are functions which take an argument and return a result wrapped in a container (like the parse function).
Related
A transaction's input data looks like this:
"0xa9059cbb00000000000000000000000024c38db6c4a85b3e6b58631de2334105f6209da300000000000000000000000000000000000000000000000000000dca4f1516a8".
If I call this function
let encodedFunctionSignature = web3.eth.abi.encodeFunctionSignature('transfer(address,uint256)');
it gives me "0xa9059cbb". Etherscan calls this the methodId.
My question is: how do I get transfer(address,uint256) back from this "0xa9059cbb"?
The function selector is the first four bytes of the keccak256 hash of the canonicalized function signature. In this case, web3.sha3('transfer(address,uint256)').substring(0, 10) === "0xa9059cbb".
Reversing this process is not generally possible unless the contract's code or ABI is provided. That said, as long as someone else has used a given function selector before and provided its original name, you can use that information instead.
One list of commonly used function selectors is here: https://github.com/ethereum-lists/4bytes, and in fact transfer(address,uint256) is the first example given.
I think https://github.com/ethereum-lists/4bytes will be helpful for you.
Get the first 4 bytes of the data and use the list in that GitHub repo to get the function name of the transaction.
Or you can check here: https://www.4byte.directory/
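Whichever lookup service you use, pulling the methodId out of the calldata itself is plain string slicing ("0x" plus the first 4 bytes, i.e. 8 hex characters); no web3 call is needed:

```javascript
// Calldata from the question; the selector is the first 4 bytes.
var data = "0xa9059cbb00000000000000000000000024c38db6c4a85b3e6b58631de2334105f6209da300000000000000000000000000000000000000000000000000000dca4f1516a8";

// "0x" plus 8 hex characters = 10 characters total.
var methodId = data.slice(0, 10); // "0xa9059cbb"
```

You can then look methodId up in the 4bytes list or on 4byte.directory to recover candidate signatures.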
When creating a JavaScript function with multiple arguments, I am always confronted with this choice: pass a list of arguments vs. pass an options object.
For example I am writing a function to map a nodeList to an array:
function map(nodeList, callback, thisObject, fromIndex, toIndex){
...
}
I could instead use this:
function map(options){
...
}
where options is an object:
options={
nodeList:...,
callback:...,
thisObject:...,
fromIndex:...,
toIndex:...
}
Which one is the recommended way? Are there guidelines for when to use one vs. the other?
[Update] There seems to be a consensus in favor of the options object, so I'd like to add a comment: one reason why I was tempted to use the list of arguments in my case was to have a behavior consistent with the JavaScript built in array.map method.
Like many of the others, I often prefer passing an options object to a function instead of passing a long list of parameters, but it really depends on the exact context.
I use code readability as the litmus test.
For instance, if I have this function call:
checkStringLength(inputStr, 10);
I think that code is quite readable the way it is and passing individual parameters is just fine.
On the other hand, there are functions with calls like this:
initiateTransferProtocol("http", false, 150, 90, null, true, 18);
Completely unreadable unless you do some research. On the other hand, this code reads well:
initiateTransferProtocol({
"protocol": "http",
"sync": false,
"delayBetweenRetries": 150,
"randomVarianceBetweenRetries": 90,
"retryCallback": null,
"log": true,
"maxRetries": 18
});
It is more of an art than a science, but if I had to name rules of thumb:
Use an options parameter if:
You have more than four parameters
Any of the parameters are optional
You've ever had to look up the function to figure out what parameters it takes
If someone ever tries to strangle you while screaming "ARRRRRG!"
Multiple arguments are mostly for obligatory parameters. There's nothing wrong with them.
If you have optional parameters, it gets complicated. If one of them relies on the others, so that they have a certain order (e.g. the fourth one needs the third one), you should still use multiple arguments. Nearly all native ECMAScript and DOM methods work like this. A good example is the open method of XMLHttpRequest, where the last 3 arguments are optional - the rule is like "no password without a user" (see also MDN docs).
Option objects come in handy in two cases:
You've got so many parameters that it gets confusing: The "naming" will help you, you don't have to worry about the order of them (especially if they may change)
You've got optional parameters. The objects are very flexible, and without any ordering you just pass the things you need and nothing else (or undefineds).
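For instance, here is a sketch of that second case: optional parameters collected in an options object with defaults. The function and option names are made up for illustration:

```javascript
// A hypothetical connect() where every option is optional and order-free.
function connect(options) {
  options = options || {};
  var host = options.host || "localhost";
  var port = options.port || 8080;
  var secure = options.secure || false;
  return (secure ? "https" : "http") + "://" + host + ":" + port;
}

connect();                                      // "http://localhost:8080"
connect({ port: 3000 });                        // "http://localhost:3000"
connect({ secure: true, host: "example.com" }); // "https://example.com:8080"
```

The caller passes only what it needs, in any order, and everything else falls back to a default.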
In your case, I'd recommend map(nodeList, callback, options). nodelist and callback are required, the other three arguments come in only occasionally and have reasonable defaults.
Another example is JSON.stringify. You might want to use the space parameter without passing a replacer function - then you have to call …, null, 4). An options object might have been better, although it's not really reasonable for only 2 parameters.
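A quick illustration of that JSON.stringify awkwardness:

```javascript
var data = { name: "Ada", languages: ["JS"] };

// The second argument (the replacer) must be filled with null
// just to reach the third one (the indentation):
var pretty = JSON.stringify(data, null, 4);
```

With an options object the call could have named just the one setting it cares about.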
Using the 'options as an object' approach is going to be best. You don't have to worry about the order of the properties and there's more flexibility in what data gets passed (optional parameters for example)
Creating an object also means the options could be easily used on multiple functions:
options={
nodeList:...,
callback:...,
thisObject:...,
fromIndex:...,
toIndex:...
}
function1(options){
alert(options.nodeList);
}
function2(options){
alert(options.fromIndex);
}
It can be good to use both. If your function has one or two required parameters and a bunch of optional ones, make the first two parameters required and the third an optional options hash.
In your example, I'd do map(nodeList, callback, options). Nodelist and callback are required, it's fairly easy to tell what's happening just by reading a call to it, and it's like existing map functions. Any other options can be passed as an optional third parameter.
I may be a little late to the party with this response, but I was searching for other developers' opinions on this very topic and came across this thread.
I very much disagree with most of the responders, and side with the 'multiple arguments' approach. My main argument is that it discourages other anti-patterns like "mutating and returning the param object" or "passing the same param object on to other functions". I've worked in codebases which have extensively abused this anti-pattern, and debugging code which does this quickly becomes impossible. I think this is a very JavaScript-specific rule of thumb, since JavaScript is not strongly typed and allows for such arbitrarily structured objects.
My personal opinion is that developers should be explicit when calling functions, avoid passing around redundant data and avoid modifying by reference. It's not that this pattern precludes writing concise, correct code; I just feel it makes it much easier for your project to fall into bad development practices.
Consider the following terrible code:
function main() {
const x = foo({
param1: "something",
param2: "something else",
param3: "more variables"
});
return x;
}
function foo(params) {
params.param1 = "Something new";
bar(params);
return params;
}
function bar(params) {
params.param2 = "Something else entirely";
const y = baz(params);
return params.param2;
}
function baz(params) {
params.param3 = "Changed my mind";
return params;
}
Not only does this kind of code require more explicit documentation to specify intent, but it also leaves room for vague errors.
What if a developer modifies param1 in bar()? How long do you think it would take, in a codebase of sufficient size, to catch this?
Admittedly, this example is slightly disingenuous because it assumes developers have already committed several anti-patterns by this point. But it shows how passing objects containing parameters allows greater room for error and ambiguity, requiring a greater degree of conscientiousness and observance of const correctness.
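For contrast, here is a sketch of how foo from the example could avoid the mutation entirely by copying the params object instead of writing through the caller's reference (using the standard Object.assign):

```javascript
// foo copies its input instead of mutating it; the caller's object is safe.
function foo(params) {
  return Object.assign({}, params, { param1: "Something new" });
}

var original = { param1: "something", param2: "something else" };
var result = foo(original);

original.param1; // still "something"
result.param1;   // "Something new"
```

Copy-then-modify keeps the data flow explicit: each function returns its changes instead of silently editing shared state.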
Just my two-cents on the issue!
Your comment on the question:
in my example the last three are optional.
So why not do this? (Note: This is fairly raw JavaScript. Normally I'd use a default hash and update it with the options passed in by using Object.extend or jQuery.extend or similar.)
function map(nodeList, callback, options) {
options = options || {};
var thisObject = options.thisObject || {};
var fromIndex = options.fromIndex || 0;
var toIndex = options.toIndex || 0;
}
So, now that it's much more obvious what's optional and what's not, all of these are valid uses of the function:
map(nodeList, callback);
map(nodeList, callback, {});
map(nodeList, callback, null);
map(nodeList, callback, {
thisObject: {some: 'object'},
});
map(nodeList, callback, {
toIndex: 100,
});
map(nodeList, callback, {
thisObject: {some: 'object'},
fromIndex: 0,
toIndex: 100,
});
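The Object.extend / jQuery.extend merging mentioned in the note above can be done with the standard Object.assign; a sketch (the loop body is a made-up illustration of how the merged options might be used):

```javascript
// Defaults merged with the caller's options; later sources win.
var DEFAULTS = { thisObject: null, fromIndex: 0, toIndex: null };

function map(nodeList, callback, options) {
  var opts = Object.assign({}, DEFAULTS, options || {});
  var end = opts.toIndex === null ? nodeList.length : opts.toIndex;
  var result = [];
  for (var i = opts.fromIndex; i < end; i++) {
    result.push(callback.call(opts.thisObject, nodeList[i], i));
  }
  return result;
}

map([1, 2, 3], function (x) { return x * 2; });             // [2, 4, 6]
map([1, 2, 3], function (x) { return x; }, { toIndex: 2 }); // [1, 2]
```

With the merge in one place, the function body reads from a single opts object and every default lives in one declaration.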
It depends.
Based on my observations of how popular libraries are designed, here are the scenarios where we should use an options object:
The parameter list is long (> 4).
Some or all parameters are optional and they don't rely on a certain order.
The parameter list might grow in a future API update.
The API will be called from other code and the API name is not clear enough to tell the parameters' meaning, so it might need strong parameter names for readability.
And the scenarios to use a parameter list:
The parameter list is short (<= 4).
Most or all of the parameters are required.
Optional parameters are in a certain order. (i.e.: $.get)
Easy to tell the parameters' meaning by the API name.
An object is preferable, because if you pass an object it's easy to extend the number of properties in that object, and you don't have to watch the order in which your arguments are passed.
For a function that usually uses some predefined arguments you would be better off with an options object. The opposite example would be something like a function that takes an unlimited number of arguments, like: setCSS({height:100},{width:200},{background:"#000"}).
I would look at large JavaScript projects.
In things like Google Maps you will frequently see that instantiated objects require an options object while functions require parameters. I would think this has to do with optional arguments.
If you need default arguments or optional arguments, an object would probably be better because it is more flexible. But if you don't, normal functional arguments are more explicit.
Javascript has an arguments object too.
https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Functions_and_function_scope/arguments
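A quick demonstration of that arguments object; it lets a function inspect however many arguments it was actually called with:

```javascript
// sum() declares no parameters but reads everything from arguments.
function sum() {
  var total = 0;
  for (var i = 0; i < arguments.length; i++) {
    total += arguments[i];
  }
  return total;
}

sum(1, 2, 3); // 6
sum();        // 0
```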
I have basically dug through all of the Julia documentation, but I cannot find any answers on this. My question can be split into two parts. The code snippets ignore boilerplate like basic initialization.
Part 1: How to pass basic complex types without jl_eval_string()
Suppose I have a C/C++ program which calls some Julia scripts, for a function f which does some String manipulation. In the C source:
char* parameter_string; // Initialized as something.
jl_module_t *m = (jl_module_t *) jl_load("Script.jl");
jl_function_t *f = jl_get_function(m, "f");
jl_value_t * ret = jl_call1(f, /*???*/) <--- Problem
Now, notice that the manual only describes how to box up primitives, like int, float, double. Nothing about complex types, like String. Yes, I can use jl_eval_string(parameter_string), but I don't like this. Moreover, ret will be a String, and I have no idea how to extract it to C. It is undocumented.
Part 2:
Suppose I have a C/C++ program which calls some Julia scripts, in which a state machine is stepped. To create a state machine, I create some types:
abstract State
type Idle <: State end
type State1 <: State end
type State2 <: State end
And then a transition function:
function transition(s :: State, input :: String) # input :: String is arbitrary
.. Do Something ..
return newState
end
Now, if I want to create a State, say Idle, in C... I cannot find anything like this, let alone a way to retrieve it from Julia.
I am approaching this problem more or less like functional programming language, such as Haskell, Scala, or F#. Algebraic Data Type might not be well supported here, but I think it is still better than hard coding it with integers.
The real problem is that I cannot find any C API documents on Julia, without directly digging into its source code.
You can convert a C string to a Julia String using jl_cstr_to_string(char*).
To get the data from a Julia String, use jl_string_ptr(jl_value_t*).
Constructors are called just like functions, so to call a constructor you can use jl_get_function(m, "Idle") and call it as normal. Or, to allocate an object directly (going around any constructors that might be defined, so technically a bit dangerous), you can call jl_new_struct(type, fields...).
Given:
abstract ABSGene
type NuGene <: Genetic.ABSGene
fqnn::ANN
dcqnn::ANN
score::Float32
end
function mutate_copy{T<:ABSGene}(gene::T)
all_fields_except_score = filter(x->x != :score, names(T))
all_fields_except_score = map(x->("mutate_copy(gene.$x)"),all_fields_except_score)
eval(parse("$(T)("*join(all_fields_except_score,",")*")"))
end
ng = NuGene()
mutated_ng = mutate_copy(ng)
results in:
ERROR: gene not defined
in mutate_copy at none:4
If I just look at it as a string (prior to running parse and eval) it looks fine:
"NuGene(mutate_copy(gene.fqnn),mutate_copy(gene.dcqnn))"
However, eval doesn't seem to know about the gene that has been passed into the mutate_copy function.
How do I access the gene argument that's been passed into mutate_copy?
I tried this:
function mutate_copy{T<:ABSGene}(gene::T)
all_fields_except_score = filter(x->x != :score, names(T))
all_fields_except_score = map(x-> ("mutate_copy($gene.$x)"),all_fields_except_score)
eval(parse("$(T)("*join(all_fields_except_score,",")*")"))
end
But that expands the gene in the string which is not what I want.
Don't use eval! In almost all cases, unless you really know what you're doing, you shouldn't be using eval. And in this case, eval simply won't work because it operates in the global (or module) scope and doesn't have access to the variables local to the function (like the argument gene).
While the code you posted isn't quite enough for a minimal working example, I can take a few guesses as to what you want to do here.
Instead of map(x->("mutate_copy(gene.$x)"),all_fields_except_score), you can dynamically look up the field name:
map(x->mutate_copy(gene.(x)), all_fields_except_score)
This is a special syntax that may eventually be replaced by getfield(gene, x). Either one will work right now, though.
And then instead of eval(parse("$(T)("*join(all_fields_except_score,",")*")")), call T directly and "splat" the field values:
T(all_fields_except_score...)
I think the field order should be stable through all those transforms, but it looks pretty fragile (you're depending on score being the last field, and on all constructors having their arguments in the same order as their fields). It looks like you're trying to perform a deepcopy sort of operation, but leaving the score field uninitialized. You could alternatively use Base's deepcopy and then recursively set the scores to zero.
This is a sort of followup to my previous question about nested registered C functions found here:
Trying to call a function in Lua with nested tables
The previous question gave me the answer to adding a nested function like this:
dog.beagle.fetch()
I also would like to have variables at that level like:
dog.beagle.name
dog.beagle.microchipID
I want this string and number to be allocated in C and accessible by Lua. So, in C code, the variables might be defined as:
int microchipIDNumber;
char dogname[500];
The C variables need to be updated by assignments in Lua and its value needs to be retrieved by Lua when it is on the right of the equal sign. I have tried the __index and __newindex metamethod concept but everything I try seems to break down when I have 2 dots in the Lua path to the variable. I know I am probably making it more complicated with the 2 dots, but it makes the organization much easier to read in the Lua code. I also need to get an event for the assignment because I need to spin up some hardware when the microchipIDNumber value changes. I assume I can do this through the __newindex while I am setting the value.
Any ideas on how you would code the metatables and methods to accomplish the nesting? Could it be because my previous function declarations are confusing Lua?
The colon operator (:) in Lua is used only for functions. Consider the following example:
meta = {}
meta["__index"] = function(n,m) print(n) print(m) return m end
object = {}
setmetatable(object,meta)
print(object.foo)
The __index function will simply print the two arguments it is passed and return the second one (which we also print, because just doing object.foo on its own is a syntax error). The output is going to be table: 0x153e6d0, foo, foo on separate lines. So __index gets the object in which we're looking up the variable, and its name. Now, if we replace object.foo with object:foo we get this:
input:5: function arguments expected near ')'
This is because : in object:foo is syntactic sugar for object.foo(object), so Lua expects that you will provide arguments for a function call. If we do provide arguments (object:foo("bar")) we get this:
table: 0x222b3b0
foo
input:5: attempt to call method 'foo' (a string value)
So our __index function still gets called, but it is not passed the argument - Lua simply attempts to call the return value. So don't use : for members.
With that out of the way, let's look at how you can sync variables between Lua and C. This is actually quite involved and there are different ways to do it. One solution would be to use a combination of __index and __newindex. If you have a beagle structure in C, I'd recommend making these C functions and pushing them into the metatable of a Lua table as C-closures with a pointer to your C struct as an upvalue. Look at this for some info on lua_pushcclosure and this on closures in Lua in general.
If you don't have a single structure you can reference, it gets a lot more complicated, since you'll have to somehow store variableName-variableLocation pairs on the C side and know what type each one is. You could maintain such a list in the actual Lua table, so dog.beagle would be a map from variable name to one or two somethings. There are a couple of options for this 'something'. The first is a single light userdata (i.e. a C pointer), but then you'll have the issue of figuring out what it points to, so that you know what Lua type to push in __index and what to pop out in __newindex. The other option is to push two functions/closures. You can make a C function for each type you'll have to handle (number, string, table, etc.) and push the appropriate one for each variable, or make one uber-closure that takes as a parameter the type it's being given and then just vary the upvalues you push it with. In this case the __index and __newindex functions will simply look up the appropriate function for a given variable name and call it, so it would probably be easiest to implement them in Lua.
In the case of two functions your dog.beagle might look something like this (not actual Lua syntax):
dog.beagle = {
__metatable = {
__index = function(table,key)
local getFunc = rawget(table,key).get
return getFunc(table,key)
end
__newindex = function(table,key,value)
local setFunc = rawget(table,key).set
setFunc(table,key,value)
end
}
"color" = {
"set" = *C function for setting color or closure with an upvalue to tell it's given a color*,
"get" = *C function for getting color or closure with an upvalue to tell it to return a color*
}
}
Notes about the above:
1. Don't set an object's __metatable field directly - it's used to hide the real metatable. Use setmetatable(object, metatable).
2. Notice the usage of rawget. We need it because otherwise trying to get a field of the object from within __index would be an infinite recursion.
3. You'll have to do a bit more error checking in the event rawget(table,key) returns nil, or if what it returns does not have get/set members.