Refactoring after deprecation of getFoldableComposition, option, array et al - fp-ts

I spent some time last year trying to learn fp-ts. I've finally come around to using it in a project, and a lot of my sample code has broken due to the recent refactoring. I've fixed a few of the breakages but am struggling with the others. It highlights a massive hole in my FP knowledge, no doubt!
I had this:
import { strict as assert } from 'assert';
import { array } from 'fp-ts/Array';
import { getFoldableComposition } from 'fp-ts/Foldable';
import { Monoid as MonoidString } from 'fp-ts/string';
import { none, some, option } from 'fp-ts/Option';
const F = getFoldableComposition(array, option)
assert.strictEqual(F.reduce([some('a'), none, some('c')], '', MonoidString.concat), 'ac')
getFoldableComposition, option, and array are now deprecated. The comments on getFoldableComposition say to use reduce, foldMap, or reduceRight instead, so, amongst other things, I tried this:
import { strict as assert } from 'assert';
import { reduceRight } from 'fp-ts/Foldable';
import { Monoid as MonoidString } from 'fp-ts/string';
import { none, some } from 'fp-ts/Option';

assert.strictEqual(reduceRight([some('a'), none, some('c')], '', MonoidString.concat), 'ac')
That's not even compiling, so obviously I'm way off base.
Could someone please show me the correct way to replace getFoldableComposition and, while we're at it, explain what is meant by 'Use small, specific instances instead' for option and array as well? Also, is there anything else I'm obviously doing wrong?
Thank you!

Let's start with your question:
what is meant by 'Use small, specific instances instead' as well for option and array?
Prior to fp-ts v2.10.0, type class instances were grouped together as a single record implementing the interfaces of multiple classes, and that record was named after the data type for which the instances were defined. So the Array module exported array, containing all the instances; it had map for Functor, ap for Apply, etc. For Option, the option record was exported with all the instances. And so on.
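To make that concrete, here is a rough sketch contrasting the two styles for Option (the exact member list of the old record doesn't matter here):

import * as O from 'fp-ts/Option'

// Old style (now deprecated): one big record carrying every instance for Option,
// e.g. map (Functor), ap (Apply), chain (Monad), reduce (Foldable), ...
O.option.map(O.some(1), (n) => n + 1)

// New style: small, single-purpose instance records.
O.Functor.map(O.some(1), (n) => n + 1)
O.Foldable.reduce(O.some(1), 0, (acc, n) => acc + n)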
Many functions, like getFoldableComposition and sequenceT, are defined very generically using "higher-kinded types" and require you to pass in the type class instance for the data type you want the function to operate on. So, e.g., sequenceT requires you to pass an Apply instance like
assert.deepEqual(
  sequenceT(O.option)(O.some(1), O.none),
  O.none
)
Requiring these big records of type class instances to be passed around like that ended up making fp-ts not tree-shake well in application and library code, because JS bundlers couldn't statically tell which members of the type class record were being accessed and which weren't, so they ended up including all of them even if only one was used. That increases bundle size, which ultimately makes your app load slower for users and/or increases the bundle size of libraries consuming your library.
The solution to this problem was to break the big type class records apart and give each type class its own record. Each data type module now exports small, individual type class instances, and eventually the mega-instance record will be removed. With the new instances, you would use sequenceT like
assert.deepEqual(
  sequenceT(O.Apply)(O.some(1), O.none),
  O.none
)
Now the bundler knows that only Apply methods are being used, and it can remove unused instances from the bundle.
So the upshot of all this is to just not use the mega instance record anymore and only use the smaller instance records.
Now for your code.
The first thing I'll say is: talk to the compiler. Your code should give you a compile error here, because you passed reduceRight too many arguments. So let's look at the signature:
export declare function reduceRight<F extends URIS, G extends URIS>(
  F: Foldable1<F>,
  G: Foldable1<G>
): <B, A>(b: B, f: (a: A, b: B) => B) => (fga: Kind<F, Kind<G, A>>) => B
First thing you should note: this function is curried into three separate function calls before it fully evaluates. First it takes the type class instances, then the accumulator and reducing function, and finally it takes the data structure we are reducing.
So first it takes a Foldable instance for a type of kind Type -> Type, and another Foldable instance for another (or the same) type of kind Type -> Type. This is where the small vs big instance record comes into play. You'll pass SomeDataType.Foldable instead of SomeDataType.someDataType.
Then it takes polymorphic type B of kind Type as the initial value for the reduce (aka the "accumulator") and a binary function which takes polymorphic type A of kind Type and B and returns B. This is the typical signature of a reduceRight.
Then it takes a scary looking type which is making use of higher-kinded types. I would pronounce it as "F of G of A" or F<G<A>>. And finally it returns B, the reduced value.
Sounds complicated, but hopefully after this it won't seem so bad.
From looking at your code, it appears you want to reduce an Array<Option<string>> into a string. Array<Option<string>> is the higher-kinded type you want to specify. You just replace "F of G of A" with "Array of Option of string". So in the signature of reduceRight, F is the Foldable instance for Array and G is the Foldable instance for Option.
If we pass those instances, we'll get back a reduceRight function specialized for an array of options.
import * as A from 'fp-ts/Array'
import * as O from 'fp-ts/Option'
import { reduceRight } from 'fp-ts/Foldable'

const reduceRightArrayOption: <B, A>(
  b: B,
  f: (a: A, b: B) => B
) => (fga: Array<O.Option<A>>) => B = reduceRight(A.Foldable, O.Foldable)
Then we call this specialized reduceRight with the initial accumulator and a reducing function. The reducing function takes the value inside Array<Option<?>>, which is string, and the accumulator, which is also string. In your initial code you were using concat for string; that will work here, and you'll find it on the Monoid<string> instance in the string module.
import * as A from 'fp-ts/Array'
import * as O from 'fp-ts/Option'
import { reduceRight } from 'fp-ts/Foldable'
import * as string from 'fp-ts/string'

const reduceRightArrayOption: <B, A>(
  b: B,
  f: (a: A, b: B) => B
) => (fga: Array<O.Option<A>>) => B = reduceRight(A.Foldable, O.Foldable)

const reduceRightArrayOptionStringToString: (fga: Array<O.Option<string>>) => string =
  reduceRightArrayOption("", string.Monoid.concat)
Finally, it's ready to take our Array<O.Option<string>>.
import * as assert from 'assert'
import * as A from 'fp-ts/Array'
import * as O from 'fp-ts/Option'
import { reduceRight } from 'fp-ts/Foldable'
import * as string from 'fp-ts/string'

const reduceRightArrayOption: <B, A>(
  b: B,
  f: (a: A, b: B) => B
) => (fga: Array<O.Option<A>>) => B = reduceRight(A.Foldable, O.Foldable)

const reduceRightArrayOptionStringToString: (fga: Array<O.Option<string>>) => string =
  reduceRightArrayOption("", string.Monoid.concat)

const result = reduceRightArrayOptionStringToString([
  O.some('a'),
  O.none,
  O.some('c'),
])

assert.strictEqual(result, "ac")
To simplify all of this, we can use the more idiomatic pipe approach to calling reduceRight:
import * as assert from "assert"
import { reduceRight } from "fp-ts/Foldable"
import * as string from "fp-ts/string"
import * as O from "fp-ts/Option"
import * as A from "fp-ts/Array"
import { pipe } from "fp-ts/lib/function"

assert.strictEqual(
  pipe(
    [O.some("a"), O.none, O.some("c")],
    reduceRight(A.Foldable, O.Foldable)(string.empty, string.Monoid.concat)
  ),
  "ac"
)
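As an aside, if you don't actually need the generic Foldable machinery here, a sketch of a simpler alternative (using plain Array helpers rather than Foldable composition) could look like this: drop the nones with A.compact, then fold with A.foldMap and the string Monoid.

import * as assert from "assert"
import * as A from "fp-ts/Array"
import * as O from "fp-ts/Option"
import * as string from "fp-ts/string"
import { identity, pipe } from "fp-ts/function"

// Remove the `none`s, then combine the remaining strings with the string Monoid.
assert.strictEqual(
  pipe([O.some("a"), O.none, O.some("c")], A.compact, A.foldMap(string.Monoid)(identity)),
  "ac"
)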
I know that was a lot, but hopefully it provides a little clarity about what's going on. reduceRight is very generic, in a way that almost no other TypeScript libraries attempt to be, so it's totally normal if it takes you a while to get your head around it. Higher-kinded types are not a built-in feature of TypeScript, and the way fp-ts does it is admittedly a bit of a hack to work around the limitations of TS. But keep playing around and experimenting. It'll all start to click eventually.

Related

TypeScript Discriminated Union with Optional Discriminant

I've created a discriminated union that's later used to type props coming into a React component. A pared down sample case of what I've created looks like this:
type Client = {
  kind?: 'client',
  fn: (updatedIds: string[]) => void
};

type Server = {
  kind: 'server',
  fn: (selectionListId: string) => void
};

type Thing = Client | Server;
Note that the discriminant, kind, is optional in one code path, but is defaulted when it is destructured in the component definition:
function MyComponent(props: Thing) {
  const {
    kind = 'client',
    fn
  } = props;

  if (kind === 'client') {
    props.fn(['hey']);
    // also an error:
    // fn(['hey'])
  } else {
    props.fn('hi')
    // also an error:
    // fn('hey')
  }
}
What I'm trying to understand is what's going on with this conditional. I understand that the type checker is having trouble properly narrowing the type of Thing, since the default value is separate from the type definition. The oddest part is that in both branches of the conditional it insists that fn is of type (arg0: string[] & string) => void and I don't understand why the type checker is trying to intersect the parameters here.
I would have expected it to be unhappy about non-exhaustiveness of the branches (i.e. not checking the undefined branch) or just an error on the else branch where the 'server' and undefined branches don't line up. But even trying to rearrange the code to be more explicit about each branch doesn't seem to make any impact.
Perhaps because the compiler simply can't narrow the types, it tries an intersection so that it doesn't matter which path was taken: if the signatures are compatible then it's fine to call the function, otherwise you basically end up with a never (since string[] & string is an impossible type)?
I understand that there are a variety of ways I can resolve this via user-defined type predicates or type assertions, but I'm trying to get a better grasp on what's going on here internally and to find something a bit more elegant.
TS Playground link
It's an implementation detail in TS. Types are not narrowed when you store the value in a different variable. The same issue exists for square bracket notation. You can refer to this question; it deals with a similar issue. Apparently, this is done for compiler performance.
You can fix your issue by using both props.fn and props.kind (playground), as sketched below.
Or write a type guard function.
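A minimal sketch of that fix, reusing the Thing type from the question (narrowing on props.kind directly rather than on a destructured copy):

function MyComponent(props: Thing) {
  if (props.kind === 'server') {
    // props is narrowed to Server here
    props.fn('hi');
  } else {
    // props is narrowed to Client here (kind is 'client' or undefined)
    props.fn(['hey']);
  }
}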

Why does typescript complain when I import a component, but not when the component is defined in the same file?

I have an issue where I get two different results from Typescript's type check when I import a component from another file vs define the component in the same file where it's used.
I made a sandbox to describe question in more detail: https://codesandbox.io/s/typescript-error-1l44t?file=/src/App.tsx
If you look at the Example function, I'm passing in an additional parameter z, which shouldn't be valid, so I'm getting an error (as expected).
However, if you enable L15-L22, where ExampleComponent is defined in the same file, then disable or remove the ExampleComponent import from './component' on L2, suddenly Typescript stops complaining.
Any help would be appreciated. Thank you!
If there's any extra information I can give, please let me know.
This is because you are refining a type in one scope, but when you export that value you don't also get those refinements.
In other words, when you create the value in the same file, Typescript can infer a more specific subtype. But when it's in another file it just imports whatever the most broad type that it could possibly be.
Here's a simpler example:
// test.ts
export const test: string | number = 'a string';
test.toUpperCase(); // works
This works because typescript observed that test is a string because of the assignment of a string literal. There is no way that, after executing the code of that file, test could be a number.
However, test is still typed as string | number. It's just that in this scope typescript can apply a refinement to a more narrow type.
Now let's import test into another file:
// other.ts
import { test } from './test'
test.toUpperCase() // Property 'toUpperCase' does not exist on type 'string | number'.
Refinements only get applied in the scope where they were refined. That means that you get the more broad type when you export that value.
Another example:
// test.ts
export const test = Math.random() > 0.5 ? 'abc' : 123 // string | number
if (typeof test === 'string') throw Error('string not allowed!')
const addition = test + 10 // works fine
// other.ts
import { test } from './test'
const addition = test + 10 // Operator '+' cannot be applied to types 'string | number' and 'number'.(2365)
In this case the program should throw an exception if a string is assigned. In the test.ts file, typescript knows that and therefore knows that test must be a number if that third line is executing.
However, the exported type is still string | number because that's what you said it was.
In your code, React.ComponentType<P> is actually an alias for:
React.ComponentClass<P, any> | React.FunctionComponent<P>
Typescript notices that you are assigning a function, and not a class, and refines that type to React.FunctionComponent<P>. But when you import from another file it could be either, so typescript is more paranoid, and you get the type error.
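As a small illustration (my own sketch with a hypothetical Props type, not the sandbox code):

import * as React from 'react'

type Props = { x: number }

// React.ComponentType<Props> is the union
// React.ComponentClass<Props, any> | React.FunctionComponent<Props>.
// Because a function is assigned here, Typescript can refine the value to
// React.FunctionComponent<Props> within this file only.
const ExampleComponent: React.ComponentType<Props> = (props) =>
  React.createElement('div', null, props.x)

export default ExampleComponent
// Importers only see the declared (unrefined) union type.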
And, lastly, for a reason I haven't yet figured out, your code works with a function component, but not with a class component. But this should at least make it clear why there's a difference at all.

How to map a 0-argument JavaScript function in PureScript FFI

I am trying to import the following JavaScript function into PureScript using the FFI:
function getGreeting() {
return "Hi, welcome to the show."
}
but I am not sure what the type should be. The closest I get to is something like:
foreign import getGreeting :: Unit -> String
I do want getGreeting to stay a function, and not convert it to a constant.
Is there a better way to write the type? I tried to see what PureScript does if I define a dummy function in PureScript itself with that type of signature:
var getGreeting = function (v) {
return "Hi, welcome to the show.";
};
Is there a way to get rid of that v parameter that is not being used?
TIA
Unit -> String is a perfectly good type for that, or perhaps forall a. a -> String. The latter type may seem too permissive, but we know for sure that the a is unused, thanks to parametricity, so the function must still be constant.
There is a really useful package, purescript-functions, which can be helpful in such a situation. If you really have to call this function from PureScript as it is (because I think that it IS really just a constant), you can try:
module Main where

import Prelude
import Control.Monad.Eff (Eff)
import Control.Monad.Eff.Console (CONSOLE, log)
import Data.Function.Uncurried (Fn0, runFn0)

foreign import getString ∷ Fn0 String

main :: forall e. Eff (console :: CONSOLE | e) Unit
main = do
  log (runFn0 getString)
I've created this simple javascript module so this example can be tested:
/* global exports */
"use strict";

// module Main

exports.getString = function() {
  return "my constant string ;-)";
};

How do I create a Flow with a different input and output types for use inside of a graph?

I am making a custom sink by building a graph on the inside. Here is a broad simplification of my code to demonstrate my question:
def mySink: Sink[Int, Unit] = Sink() { implicit builder =>
  val entrance = builder.add(Flow[Int].buffer(500, OverflowStrategy.backpressure))
  val toString = builder.add(Flow[Int, String, Unit].map(_.toString))
  val printSink = builder.add(Sink.foreach(elem => println(elem)))

  builder.addEdge(entrance.out, toString.in)
  builder.addEdge(toString.out, printSink.in)

  entrance.in
}
The problem I am having is that while it is valid to create a Flow with the same input and output types using only a single type argument and no value arguments, like Flow[Int] (which is all over the documentation), it is not valid to supply only two type parameters and zero value parameters.
According to the reference documentation for the Flow object the apply method I am looking for is defined as
def apply[I, O]()(block: (Builder[Unit]) ⇒ (Inlet[I], Outlet[O])): Flow[I, O, Unit]
and says
Creates a Flow by passing a FlowGraph.Builder to the given create function.
The create function is expected to return a pair of Inlet and Outlet which correspond to the created Flows input and output ports.
It seems like I need to deal with another level of graph builders when I am trying to make what I think is a very simple flow. Is there an easier and more concise way to create a Flow that changes the type of its input and output that doesn't require messing with its inside ports? If this is the right way to approach this problem, what would a solution look like?
BONUS: Why is it easy to make a Flow that doesn't change the type of its input from its output?
If you want to specify both the input and the output type of a flow, you indeed need to use the apply method you found in the documentation. Using it, though, is done pretty much exactly the same as you already did.
Flow[String, Message]() { implicit b =>
  import FlowGraph.Implicits._

  val reverseString = b.add(Flow[String].map[String] { msg => msg.reverse })
  val mapStringToMsg = b.add(Flow[String].map[Message](x => TextMessage.Strict(x)))

  // connect the graph
  reverseString ~> mapStringToMsg

  // expose ports
  (reverseString.inlet, mapStringToMsg.outlet)
}
Instead of just returning the inlet, you return a tuple with the inlet and the outlet. This flow can now be used (for instance inside another builder, or directly with runWith) with a specific Source or Sink.

Flink Scala API functions on generic parameters

It's a follow-up question to Flink Scala API "not enough arguments".
I'd like to be able to pass Flink's DataSets around and do something with them, but the parameters to the dataset are generic.
Here's the problem I have now:
import org.apache.flink.api.scala.ExecutionEnvironment
import org.apache.flink.api.scala._
import scala.reflect.ClassTag

object TestFlink {

  def main(args: Array[String]) {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements(
      "Who's there?",
      "I think I hear them. Stand, ho! Who's there?")

    val split = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
    id(split).print()

    env.execute()
  }

  def id[K: ClassTag](ds: DataSet[K]): DataSet[K] = ds.map(r => r)
}
I have this error for ds.map(r => r):
Multiple markers at this line
- not enough arguments for method map: (implicit evidence$256: org.apache.flink.api.common.typeinfo.TypeInformation[K],
  implicit evidence$257: scala.reflect.ClassTag[K])org.apache.flink.api.scala.DataSet[K]. Unspecified value parameters evidence$256, evidence$257.
- not enough arguments for method map: (implicit evidence$4: org.apache.flink.api.common.typeinfo.TypeInformation[K],
  implicit evidence$5: scala.reflect.ClassTag[K])org.apache.flink.api.scala.DataSet[K]. Unspecified value parameters evidence$4, evidence$5.
- could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[K]
- could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[K]
Of course, the id function here is just an example, and I'd like to be able to do something more complex with it.
How can it be solved?
You also need to have TypeInformation as a context bound on the K parameter, so:
def id[K: ClassTag: TypeInformation](ds: DataSet[K]): DataSet[K] = ds.map(r => r)
The reason is that Flink analyses the types that you use in your program and creates a TypeInformation instance for each type you use. If you want to create generic operations, then you need to make sure a TypeInformation of that type is available by adding a context bound. This way, the Scala compiler will make sure an instance is available at the call site of the generic function.
