Using POJOs in fp-ts

I want to do what I think is the simplest thing: map over a plain old JavaScript object using fp-ts constructs.
There are a lot of operators I would like to use (map, filterMap, and so on), but they seem to require the Map type (which I assume is the JavaScript Map object), and there does not seem to be any easy way to convert back and forth between Map and a plain JavaScript object, which is TypeScript's standard representation of records via its Record<K, V> type.
This seems like a pretty big hole. Please don’t tell me I have to switch to lodash...

fp-ts/Record lets you work with a Record<K, V> as if it were a Functor in V, so you can write:
import { map } from 'fp-ts/Record'
const mapped = map((v: number) => v + 1)({a: 1}) // { a: 2 }
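
The other operators mentioned in the question are there too. For example, a minimal filterMap sketch (assuming fp-ts v2, where Record.filterMap takes an Option-returning function):

import { filterMap } from 'fp-ts/Record'
import { none, some } from 'fp-ts/Option'

// Keep only even values (doubled); keys mapped to none are dropped.
const evens = filterMap((v: number) => (v % 2 === 0 ? some(v * 2) : none))({ a: 1, b: 2 })
// { b: 4 }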

Related

Check if generic is an array with inline function

How can I check if a Generic is an Array using an inline function?
I tried with the following code:
class Mediator {
    inline fun <reified C> mediate(string: String): Container<C> {
        if (C::class == Int::class) {
            // It works
        } else if (C::class == Array::class) {
            // It doesn't work!
        }
        throw IllegalStateException("Yopta")
    }
}
But it doesn't work. Maybe because it can be Array<Whatever>?
How can I do it?
Contrary to collections, where for example List<String> and List<Int> are internally represented by the same class List, with arrays the type parameter is part of the type itself. That means Array<String> and Array<Int> are internally represented as different types, and as far as I know they don't have a common supertype.
I don't know a pure Kotlin solution to check if a class is an array. It seems to me like an oversight in the design of the reflection API. If you don't mind using Java reflection, you can do it like this:
else if (C::class.java.isArray) {
Update
There is one interesting fact here. In the Kotlin type system we could consider Array<out Any?> to be a supertype of all arrays. For example, we can upcast to it without an explicit cast operator:
val intArray = arrayOf(1, 2, 3)
val arr: Array<out Any?> = intArray
However, for the reflection API these two types are entirely different:
// false
println(Array<Int>::class.isSubclassOf(Array<out Any?>::class))
I assume this is due to how arrays were implemented in Java. I'm not even sure if it would be technically possible to return true in the code above. Still, it is concerning that it provides a different result than the type system does at compile time, and it doesn't even produce a warning.
Actual answer that solves the issue here.
Since broot added an actual answer, I'll just leave this here as a note on how we can see that he is right.
If we make the call like this:
Mediator().mediate<Array<Int>>("")
Adding a simple check inside the function makes it a bit confusing as to why they are not equal:
println(C::class)     // class kotlin.Array
println(Array::class) // class kotlin.Array
But doing the same for the underlying Java class shows that they are not really the same object:
println(C::class.java)     // class [Ljava.lang.Integer;
println(Array::class.java) // class [Ljava.lang.Object;
So changing the statement to:
if (C::class.java == Array<Int>::class.java)
will make the example work ... for Int only. All other "infinite" possibilities will have to be added manually. Not an issue if you just want to check Array<X> only, but definitely not generic.

How to deal with nested Arrays/objects in BehaviorSubjects, Observables?

I generally have problems using RxJS with nested objects or arrays.
My current use-case is this:
{
  a: [
    { b: 0, c: [{ d: 1 }] },
    { b: 1, e: [{ f: 'someString' }] }
  ]
}
Task: Get and set the Observable or value of a,b,c,d,e,f. I also want to be able to subscribe to each property.
I had this problem in a similar use-case with an array of BehaviorSubjects:
Efficiently get Observable of an array BehaviorSubjects
I generally have problems using the basic functionality of nested arrays/objects in RxJS.
The basic functionality I mean includes:
Array:
getting Element by Index
using for of/in on Arrays
setting an Element by Index
push, pop, shift, slice, splice, ...
Object:
getting Value by Property name
going into the nested tree: object.key1.key2.key3[3].key4 ...
setting Value by Property name
assign
for of/in loops
Generally:
Destructuring: e.g.: let [variable1, variable2] = someObject;
Maybe other stuff I forgot.
I don't know which of these functions are possible for which RxJS objects, and which ones make sense (for example, whether you should be able to set values in an Observable directly). But coming from a background without RxJS, I have trouble managing my RxJS objects properly.
I think the reason for this, besides my lack of knowledge and understanding, is that:
a. The RxJS objects don't provide the functionality I'm used to from normal arrays and objects, e.g.:
let variable1 = array[1].property;
// becomes this (see the related Stack question I mentioned earlier)
let variable2 = array.pipe(mergeMap(d => d[index].pipe(map(d1 => d1[property]))));
// -> What happens here? You first need to know what mergeMap and
// map are doing, and you have 5 levels of nested inline functions.
b. To implement the mentioned functionality I need to go through the .pipe() function and use functions like mergeMap, map, pluck, ... functions whose names don't directly indicate that you can get the Observable of, let's say, 'e' in my example, making something like object.a[1].e weird to implement (at least I don't know how to do that yet).
EDIT:
I also want to note that I still love the idea of RxJS, which works well in Angular. I just have problems using it to its full extent, as I'm a bit new to Angular and consequently RxJS.
I think RxJS is mainly focused on dealing with async operations. For mutating arrays and objects, we can perfectly well use the methods that come natively with JavaScript when there is no existing operator, or you can create your own operator for mutation/iteration etc.
I'll try to answer some of your questions on array/object mutation; they are actually very straightforward.
Array:
getting Element by Index:
map(arr => arr[index])
using for of/in on Arrays:
map(arr => arr.map(item => ...))
setting an Element by Index:
tap(arr => arr[index] = someValue)
Object:
getting Value by Property name:
pluck('name')
going into the nested tree: object.key1.key2.key3[3].key4 ...:
pluck('key1', 'key2')
setting Value by Property name:
map(obj => ({ ...obj, a: value }))
assign
Let's say you really want some pluck-array-index method as an RxJS operator; you can create something like the following (and the same goes for for..in operations):
import { of } from 'rxjs'
import { map } from 'rxjs/operators'

const pluckIndex = (index) => (source) => source.pipe(map(arr => arr[index]))
const source = of([2, 3])
source.pipe(pluckIndex(1)).subscribe(x => console.log(x)) // 3
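
To connect this back to the nested shape in the question, here is a hedged sketch (hypothetical data$ source; assumes RxJS 6+ and TypeScript 3.7+ for optional chaining) of subscribing to the nested 'f' value under a[1].e:

import { of } from 'rxjs'
import { map } from 'rxjs/operators'

// The shape from the question; c and e are optional per element.
type Item = { b: number; c?: { d: number }[]; e?: { f: string }[] }

// Hypothetical source; in a real app this might be a BehaviorSubject.
const data$ = of<{ a: Item[] }>({
  a: [
    { b: 0, c: [{ d: 1 }] },
    { b: 1, e: [{ f: 'someString' }] }
  ]
})

// Plain property access inside map() walks the nested path a[1].e[0].f,
// and subscribing to f$ is effectively subscribing to that property.
const f$ = data$.pipe(map(data => data.a[1].e?.[0]?.f))
f$.subscribe(f => console.log(f)) // 'someString'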

Can TypeScript's `readonly` fully replace Immutable.js?

I have worked on a couple of projects using React.js. Some of them have used Flux, some Redux and some were just plain React apps utilizing Context.
I really like the way how Redux is using functional patterns. However, there is a strong chance that developers unintentionally mutate the state. When searching for a solution, there is basically just one answer - Immutable.js. To be honest, I hate this library. It totally changes the way you use JavaScript. Moreover, it has to be implemented throughout the whole application, otherwise you end up having weird errors when some objects are plain JS and some are Immutable structures. Or you start using .toJS(), which is - again - very very bad.
Recently, a colleague of mine has suggested using TypeScript. Aside from the type safety, it has one interesting feature - you can define your own data structures, which have all their fields labeled as readonly. Such a structure would be essentially immutable.
I am not an expert on either Immutable.js or TypeScript. However, the promise of having immutable data structures inside Redux store and without using Immutable.js seems too good to be true. Is TypeScript's readonly a suitable replacement for Immutable.js? Or are there any hidden issues?
While it is true that the readonly modifier of TypeScript only exists at design time and does not affect runtime code, this is true of the entire type system. That is, nothing stops you at runtime from assigning a number to a variable of type string. So that answer is kind of a red herring... if you get warned at design time that you're trying to mutate something marked as const or readonly, then that would possibly eliminate the need for extensive runtime checking.
But there is a major reason why readonly is insufficient. There is an outstanding issue with readonly, which is that currently (as of TS 3.4), types that differ only in their readonly attributes are mutually assignable, which lets you easily bust through the protective readonly shell of any property and mess with the innards:
type Person = { name: string, age: number }
type ReadonlyPerson = Readonly<Person>;
const readonlyPerson: ReadonlyPerson = { name: "Peter Pan", age: 12 };
readonlyPerson.age = 40; // error, "I won't grow up!"
const writablePerson: Person = readonlyPerson; // no error?!?!
writablePerson.age = 40; // no error! Get a job, Peter.
console.log(readonlyPerson.age); // 40
This is pretty bad for readonly. Until that gets resolved, you might find yourself agreeing with a previous issue filer who had originally named the issue "readonly modifiers are a joke" 🤡.
Even if this does get resolved, readonly might not cover all use cases. You'd also need to walk through all interfaces and types in your libraries (or even the standard libraries) and remove methods that mutate state. So all uses of Array would need to be changed to ReadonlyArray and all uses of Map would need to be changed to ReadonlyMap, etc. Once you did this you'd have a fairly typesafe way to represent immutability. But it's a lot of work.
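For instance, a minimal sketch of that kind of conversion, using only the standard TypeScript lib types:

// ReadonlyArray exposes only the non-mutating part of the Array API.
const xs: ReadonlyArray<number> = [1, 2, 3]
const doubled = xs.map(x => x * 2) // fine: map returns a new array
// xs.push(4)   // compile error: 'push' does not exist on ReadonlyArray<number>
// xs[0] = 99   // compile error: index signature in ReadonlyArray only permits reading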
Anyway, hope that helps; good luck!
The purpose of Immutable.js is not to prevent a developer from doing an illegal mutation at compile time. It provides a convenient API to create copies of an object with some of its properties changed. The fact that you get type safety on objects that you manage with Immutable.js is basically just a side effect of using it.
TypeScript is "just" a type system. It does not implement any of the features Immutable.js does to make copies of immutable objects. All it does, when you declare a variable as readonly, is check at compile time that you do not mutate it. How you design your code to handle immutability is outside the scope of a type system, and you would still need a way of dealing with it.
React ensures immutability by providing the method setState instead of letting you mutate the state object directly; it takes care of merging the changed properties for you. But if you use e.g. Redux, you may want a convenient solution for handling immutability too. That is what Immutable.js provides and TypeScript never will, and it is independent of whether you like the API or not.
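For illustration, a small sketch of the convenience API that answer refers to (assuming Immutable.js 4; Map, setIn, and getIn are real Immutable.js methods, the state shape here is made up):

import { Map } from 'immutable'

// A nested immutable map; setIn returns a new map and leaves the original untouched.
const state = Map({ user: Map({ name: 'Peter', age: 12 }) })
const next = state.setIn(['user', 'age'], 13)

console.log(state.getIn(['user', 'age'])) // 12: the original is unchanged
console.log(next.getIn(['user', 'age']))  // 13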
There are two issues with this:
1) You have to use readonly and/or things like ReadonlyArray all the way down, which is error-prone.
2) readonly exists solely at compile time, not runtime, unless backed by immutable data stores. Once your code is transpiled to JS your runtime code can do whatever it wants.
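A tiny sketch of point 2 in plain TypeScript: the readonly annotation leaves no trace in the emitted JavaScript, so anything holding an untyped reference can mutate the object.

type Frozen = { readonly a: number };
const x: Frozen = { a: 1 };
// The compiler rejects `x.a = 2`, but an `any`-typed alias sails right through:
(x as any).a = 2;
console.log(x.a); // 2 at runtime: readonly compiled away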
Immutable.js's distinguishing feature compared to readonly is structural sharing.
Here is the general benefit:
Imagine a nested JS object that has 16 properties across multiple levels of nesting.
With readonly, the way to update a value is to copy the old object, modify whatever data we want, and then we have a new value!
With Immutable.js, the way to update a value is to keep all the properties that did not change and only copy those that did (and their parents, until we reach the root).
Thus Immutable.js saves time on updates (less copying), saves memory (less copying), and saves time when deciding whether we need to redo some related work (e.g. we know that some leaves didn't change, so their DOM does not have to be changed by React!).
As you can see, readonly is not even in the same league as Immutable.js. One is a mutation property; the other is an efficient immutable data structure library.
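To make the contrast concrete, here is a hedged sketch of a hand-rolled immutable update in plain TypeScript (the State shape is made up): only the objects along the changed path are copied, everything else is reused by reference, which is the same structural-sharing idea Immutable.js automates inside its data structures.

type State = {
  user: { name: string; age: number }
  settings: { theme: string }
}

const state: State = {
  user: { name: 'Peter Pan', age: 12 },
  settings: { theme: 'dark' },
}

// Copy only the root and the 'user' branch; 'settings' is shared by reference.
const next: State = { ...state, user: { ...state.user, age: 13 } }

console.log(next.settings === state.settings) // true: unchanged branch is shared
console.log(next.user === state.user)         // false: changed path was copied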
TypeScript is still rough around the edges with immutability - they still (as of TypeScript 3.7) haven't fixed the issue where you can mutate readonly objects by first assigning them to non-readonly types.
But the usability is still pretty good, because it covers almost all other use cases.
This definition, which I found in this comment, works pretty well for me:
type ImmutablePrimitive = undefined | null | boolean | string | number | Function;

export type Immutable<T> =
    T extends ImmutablePrimitive ? T :
    T extends Array<infer U> ? ImmutableArray<U> :
    T extends Map<infer K, infer V> ? ImmutableMap<K, V> :
    T extends Set<infer M> ? ImmutableSet<M> :
    ImmutableObject<T>;

export type ImmutableArray<T> = ReadonlyArray<Immutable<T>>;
export type ImmutableMap<K, V> = ReadonlyMap<Immutable<K>, Immutable<V>>;
export type ImmutableSet<T> = ReadonlySet<Immutable<T>>;
export type ImmutableObject<T> = { readonly [K in keyof T]: Immutable<T[K]> };
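
A quick usage sketch (the State shape is made up for illustration) showing how the recursion makes nested fields readonly:

type State = { user: { name: string }; tags: string[] };
const s: Immutable<State> = { user: { name: 'Peter' }, tags: ['a'] };
// s.user.name = 'Wendy'; // compile error: 'name' is a read-only property
// s.tags.push('b');      // compile error: 'push' does not exist on ReadonlyArray<string>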

Array of arrays in Go or Rust?

I'm porting some old PHP script to Go in order to achieve better performance. However, the old PHP is full of multidimensional arrays. Some excerpt from the codebase:
while (($row = $stmt->fetch(PDO::FETCH_ASSOC)) !== false) {
    $someData[$row['column_a']][$row['column_b']] = $row;
}
// ... more queries and stuff
if (isset($moreData['id']) && isset($anotherData['id']) && $someData[$anotherData['id']][$moreData['id']]) {
    echo $someData[$anotherData['id']][$moreData['id']];
}
Awful, I know, but I can't change the logic. I made the whole script perform much better by compiling with phc, but moving to a procedural, statically-typed language seems like a better move. How can I replicate those data structures efficiently in Go or Rust? It needs to be fault-tolerant when it comes to checking indexes; there are a lot of issets all around the script to check whether the identifiers exist in the data structures.
In Go, this would be represented as a map. The syntax is map[key]value. So for example to store a multidimensional map of [string, string] -> int it would be map[string]map[string]int. If you know your indices are integers and densely packed, then you'd want to use slices. Those are simpler and look like [][]type.
As for the checking for a key existing, use this syntax, where m is a map:
if val, ok := m[key1][key2]; ok {
    // do something with val
}
Remember that to add a key to a multidimensional map you'd have to make sure the inner map is allocated before adding to it.
if _, ok := m[key1]; !ok {
    m[key1] = make(map[string]int)
}
m[key1][key2] = value
Obviously you'd want to wrap this up in a type with methods or a few simple functions.
Rust
PHP arrays are actually associative arrays, also known as maps or dictionaries. Rust uses the name Map in its standard library. The Map trait provides an interface for a variety of implementations. In particular, this trait defines a contains_key method, which you can use to check if the map contains a particular key (instead of writing isset($array[$key]), you write map.contains_key(key)).
Map has two type parameters: K is the type of the map's keys (i.e. the values you use as an index) and V is the type of the map's values.
If you need your map to contain keys and/or values of various types, you'll need to use the Any trait. For example, if the keys are strings and the values are of various types, you could use HashMap<String, Box<Any>> (Box is necessary because trait objects are unsized; see this answer for more information). Check out the documentation for AnyRefExt and AnyMutRefExt to see how to work with an Any value.
However, if the possible types are relatively limited, it might be easier for you to define your own trait and use that trait instead of Any so that you can implement operations on those types without having to cast explicitly everywhere you need to use the values (plus, you can add types by adding an impl without having to change all the places that use values).

What is the most efficient way to do immutable byte arrays in Scala?

I want to get an array of bytes (Array[Byte]) from somewhere (read from a file, from a socket, etc.) and then provide an efficient way to pull bits out of it (e.g. a function to extract a 32-bit integer from offset N in the array). I would then like to wrap the byte array (hiding it), providing functions to pull bits out of the array (probably using a lazy val for each bit to pull out).
I would imagine having a wrapping class that takes an immutable byte array type in the constructor, to prove the array contents are never modified. IndexedSeq[Byte] seemed relevant, but I could not work out how to go from Array[Byte] to IndexedSeq[Byte].
Part 2 of the question is: if I used IndexedSeq[Byte], would the resulting code be any slower? I need the code to execute as fast as possible, so I would stick with Array[Byte] if the compiler could do a better job with it.
I could write a wrapper class around the array, but that would slow things down - one extra level of indirection for each access to the bytes in the array. Performance is critical due to the number of array accesses that will be required. I need fast code, but would like to write the code nicely at the same time. Thanks!
PS: I am a Scala newbie.
Treating Array[T] as an IndexedSeq[T] could hardly be simpler:
Array(1: Byte): IndexedSeq[Byte] // triggers the implicit view
wrapByteArray(Array(1: Byte))    // calls it explicitly
Unboxing will kill you long before an extra layer of indirection.
C:\> scala -Xprint:erasure -e "{val a = Array(1: Byte); val b1: Byte = a(0); val b2 = (a: IndexedSeq[Byte])(0)}"
[[syntax trees at end of erasure]] // Scala source: scalacmd5680604016099242427.scala
val a: Array[Byte] = scala.Array.apply((1: Byte), scala.this.Predef.wrapByteArray(Array[Byte]{}));
val b1: Byte = a.apply(0);
val b2: Byte = scala.Byte.unbox((scala.this.Predef.wrapByteArray(a): IndexedSeq).apply(0));
To avoid this, the Scala collections library should be specialized on the element type, in the same style as Tuple1 and Tuple2. I'm told this is planned, but it's a bit more involved than simply slapping @specialized everywhere, so I don't know how long it will take.
UPDATE
Yes, WrappedArray is mutable, although collection.IndexedSeq[Byte] doesn't have methods to mutate, so you could just trust clients not to cast to a mutable interface. The next release of Scalaz will include ImmutableArray which prevents this.
The boxing comes from retrieving an element from the collection via this generic method:
trait SeqLike[+A, +Repr] extends IterableLike[A, Repr] { self =>
    def apply(idx: Int): A
}
At the JVM level, this signature is type-erased to:
def apply(idx: Int): Object
If your collection contains primitives, that is, subtypes of AnyVal, they must be boxed in the corresponding wrapper to be returned from this method. For some applications, this is a major performance concern. Entire libraries have been written in Java to avoid this, notably fastutil.
Annotation-directed specialization was added to Scala 2.8 to instruct the compiler to generate versions of a class or method tailored to the permutations of primitive types. This has been applied in a few places in the standard library already, e.g. TupleN, ProductN, Function{0, 1, 2}. If it were also applied to the collections hierarchy, this performance cost could be alleviated.
If you want to work with sequences in Scala, I recommend you choose one of these:
Immutable seqs:
(linked seqs) List, Stream, Queue
(indexed seqs) Vector
Mutable seqs:
(linked seq) ListBuffer
(indexed seq) ArrayBuffer
The new (2.8) Scala collections have been hard for me to grasp, primarily due to a shortage of (correct) documentation but also because of the source code (complex hierarchies). To clear my mind I made this picture to visualize the basic structure:
(diagram of the Scala 2.8 collection hierarchy; source: programmera.net)
Also, note that Array is not part of the tree structure; it is a special case, since it wraps the Java array (which is itself a special case in Java).
