How do I create a Flow with different input and output types for use inside a graph? - akka-stream

I am making a custom sink by building a graph on the inside. Here is a broad simplification of my code to demonstrate my question:
def mySink: Sink[Int, Unit] = Sink() { implicit builder =>
  val entrance = builder.add(Flow[Int].buffer(500, OverflowStrategy.backpressure))
  val toString = builder.add(Flow[Int, String, Unit].map(_.toString))
  val printSink = builder.add(Sink.foreach(elem => println(elem)))
  builder.addEdge(entrance.out, toString.in)
  builder.addEdge(toString.out, printSink.in)
  entrance.in
}
The problem I am having is that while it is valid to create a Flow with the same input and output type using a single type argument and no value argument, like Flow[Int] (which appears all over the documentation), it is not valid to supply only two type parameters and zero value parameters.
According to the reference documentation for the Flow object, the apply method I am looking for is defined as
def apply[I, O]()(block: (Builder[Unit]) ⇒ (Inlet[I], Outlet[O])): Flow[I, O, Unit]
and says
Creates a Flow by passing a FlowGraph.Builder to the given create function.
The create function is expected to return a pair of Inlet and Outlet which correspond to the created Flow's input and output ports.
It seems like I need to deal with another level of graph builders when I am trying to make what I think is a very simple flow. Is there an easier and more concise way to create a Flow that changes the type of its input and output, one that doesn't require messing with its inside ports? If this is the right way to approach this problem, what would a solution look like?
BONUS: Why is it easy to make a Flow that doesn't change the type of its input from its output?

If you want to specify both the input and the output type of a flow, you indeed need to use the apply method you found in the documentation. Using it, though, works pretty much the same way as what you already did.
Flow[String, Message]() { implicit b =>
  import FlowGraph.Implicits._

  val reverseString = b.add(Flow[String].map[String] { msg => msg.reverse })
  val mapStringToMsg = b.add(Flow[String].map[Message](x => TextMessage.Strict(x)))

  // connect the graph
  reverseString ~> mapStringToMsg

  // expose ports
  (reverseString.inlet, mapStringToMsg.outlet)
}
Instead of just returning the inlet, you return a tuple with both the inlet and the outlet. This flow can now be used (for instance inside another builder, or directly with runWith) with a specific Source or Sink.
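To see the result in context, here is a minimal usage sketch that attaches a concrete Source and Sink to such a partial flow and runs it. It assumes the same akka-stream 1.0-era experimental API as the answer above (FlowGraph, ActorFlowMaterializer; both were renamed in later releases) and borrows Message and TextMessage from akka-http's WebSocket model:

// A minimal sketch, assuming akka-stream 1.0-era APIs (FlowGraph,
// ActorFlowMaterializer) and akka-http's WebSocket Message model.
import akka.actor.ActorSystem
import akka.stream.ActorFlowMaterializer
import akka.stream.scaladsl._
import akka.http.scaladsl.model.ws.{ Message, TextMessage }

implicit val system = ActorSystem("demo")
implicit val materializer = ActorFlowMaterializer()

val stringToMessage: Flow[String, Message, Unit] =
  Flow[String, Message]() { implicit b =>
    import FlowGraph.Implicits._
    val reverse = b.add(Flow[String].map(_.reverse))
    val toMsg   = b.add(Flow[String].map[Message](TextMessage.Strict(_)))
    reverse ~> toMsg
    (reverse.inlet, toMsg.outlet)
  }

// Attach a concrete Source and Sink, then run the stream.
Source(List("hello", "world"))
  .via(stringToMessage)
  .runWith(Sink.foreach(println))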

Related

Rails update remove number from an array attribute?

Is there a way to remove a number from an attribute array in an update? For example, if I want to update all of an alchy's booze stashes when he runs out of a particular type of booze:
Alchy has_many :stashes
Stash.available_booze_types = [] (filled with booze.ids)
Booze is also a class

@booze.id = 7
if @booze.is_all_gone
  @alchy.stashes.update(available_booze_types: "remove @booze.id")
end
Update: @booze.id may or may not be present in the available_booze_types array, so if @booze.id was in any of the Alchy.stash instances (in the available_booze_types attribute array), it would be removed.
I think you can do what you want in the following way:
if @booze.is_all_gone
  @alchy.stashes.each do |stash|
    stash.available_booze_types.delete(@booze.id)
  end
end
However, it looks to me like there are better ways to do what you are trying to do. Rails gives you something like that array through relations. Also, the data in the array will be lost if you reset the app (if, as I understand it, available_booze_types is an attribute that is not stored in the database). If your application is set up correctly (a stash has many boozes), a scope like the following in the Stash class seems like the correct approach:
scope :available_boozes, -> { joins(:boozes).where("number > ?", 0) }
You can use it in the following way:
@alchy.stashes.available_boozes
which would only return the ones that are available.

Is there any way to use Flow to restrict specific string patterns?

I'm using Flow on a React webapp, and I'm currently facing a use case where I ask the user to input time values in an "HH:mm" format. Is there any way to describe the pattern these strings must follow?
I've been looking around for a solution, but the general consensus, which I agree with up to a point, seems to be that you don't need to handle this kind of thing with Flow, favouring validation functions and relying on the UI code to supply strings in the correct pattern. Still, I was wondering if there is any way to achieve this in order to make the code as descriptive as possible.
You want to create a validator function, but enhanced using Opaque Type Aliases: https://flow.org/en/docs/types/opaque-types/
Or, more specifically, Opaque Type Aliases with Subtyping Constraints: https://flow.org/en/docs/types/opaque-types/#toc-subtyping-constraints
You should write a validator function in the same file where you define the opaque type. It accepts the primitive type as an argument and returns a value typed as the opaque type with the subtyping constraint.
Now, in a different file, you can type some variables as the opaque type, for example in function arguments. Flow will enforce that you only pass values that went through your validator function, but those values can still be used as if they were the primitive type.
Example:
exports.js:
export opaque type ID: string = string;

function validateID(x: string): ID | void {
  if ( /* some validity check passes */ ) {
    return x;
  }
  return undefined;
}
import.js:
import type {ID} from './exports';

function formatID(x: ID): string {
  return "ID: " + x; // Ok! IDs are strings.
}

function toID(x: string): ID {
  return x; // Error: strings are not IDs.
}

Flink Scala API functions on generic parameters

This is a follow-up question to Flink Scala API "not enough arguments".
I'd like to be able to pass Flink's DataSets around and do something with them, but the parameters of the DataSet are generic.
Here's the problem I have now:
import org.apache.flink.api.scala.ExecutionEnvironment
import org.apache.flink.api.scala._

import scala.reflect.ClassTag

object TestFlink {
  def main(args: Array[String]) {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements(
      "Who's there?",
      "I think I hear them. Stand, ho! Who's there?")

    val split = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }

    id(split).print()

    env.execute()
  }

  def id[K: ClassTag](ds: DataSet[K]): DataSet[K] = ds.map(r => r)
}
I have this error for ds.map(r => r):
Multiple markers at this line
  - not enough arguments for method map: (implicit evidence$256: org.apache.flink.api.common.typeinfo.TypeInformation[K], implicit evidence$257: scala.reflect.ClassTag[K])org.apache.flink.api.scala.DataSet[K]. Unspecified value parameters evidence$256, evidence$257.
  - not enough arguments for method map: (implicit evidence$4: org.apache.flink.api.common.typeinfo.TypeInformation[K], implicit evidence$5: scala.reflect.ClassTag[K])org.apache.flink.api.scala.DataSet[K]. Unspecified value parameters evidence$4, evidence$5.
  - could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[K]
Of course, the id function here is just an example, and I'd like to be able to do something more complex with it.
How can this be solved?
You also need to have TypeInformation as a context bound on the K parameter, so:
def id[K: ClassTag: TypeInformation](ds: DataSet[K]): DataSet[K] = ds.map(r => r)
The reason is that Flink analyzes the types you use in your program and creates a TypeInformation instance for each one. If you want to create generic operations, you need to make sure a TypeInformation of the type parameter is available by adding a context bound. That way, the Scala compiler makes sure an instance is available at the call site of the generic function.
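Applied to the program above, a sketch of the corrected version looks like this; the extra TypeInformation import and the second context bound on id are the only changes:

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.scala._

import scala.reflect.ClassTag

object TestFlink {
  def main(args: Array[String]) {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements(
      "Who's there?",
      "I think I hear them. Stand, ho! Who's there?")

    val split = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }

    // ClassTag and TypeInformation evidence for K = String is resolved here,
    // at the call site, and passed through to ds.map inside id.
    id(split).print()

    env.execute()
  }

  // TypeInformation joins ClassTag as a context bound on K.
  def id[K: ClassTag: TypeInformation](ds: DataSet[K]): DataSet[K] = ds.map(r => r)
}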

LDAPMAP - Mapping SAP data to LDAP via RSLDAPSYNC_USER function

We are looking at syncing some of our LDAP (Active Directory) data with what is stored in SAP. SAP provides several function modules that allow you to write a custom program to handle mapping the data, but we are looking to use the provided solution that makes use of RSLDAPSYNC_USER.
The issue I'm having is understanding how the mapping of fields is performed in LDAPMAP. In particular, when performing the Mapping Overview, where are the structures shown below defined?
Also, we have a function module that currently grabs all of the fields we would like to send to LDAP, but can the screen shown below be used to call a custom function module to fetch the data I require? If so, please give an example.
Thanks,
Mike
I am not sure if this is what you are asking, but as an answer to your second question:
You can specify the attributes that you want to fetch, and the LDAP_READ function module will return the results in its entries parameter.
CALL FUNCTION 'LDAP_READ'
  EXPORTING
    base         = base
*   scope        = 2
    filter       = filter
*   attributes   = attributes_ldap
    timeout      = s_timeout
    attributes   = t_attributes_ldap
  IMPORTING
    entries      = t_entries_ldap  "<< entries will come
  EXCEPTIONS
    no_authoriz  = 1
    conn_outdate = 2
    ldap_failure = 3
    not_alive    = 4
    other_error  = 5
    OTHERS       = 6.
Entries parameter layout: (screenshot)
Attributes parameter layout: (screenshot)

Configure Interception with Unity AutoConfig in VB.NET

I'm trying to get interception working in VB.NET since my work only allows that. The way I would use it is to configure, say, a logger so that every business logic function that runs is intercepted and logged to the database (a bad idea, but it's just an example). This is an example that I found:
container
    .ConfigureAutoRegistration()
    .Include(If.Implements<IBusinessService>, (x, y) =>
    {
        if (x.IsClass)
            y.Configure<Interception>()
                .SetDefaultInterceptorFor(x, new VirtualMethodInterceptor());
    })
This is what I tried to get working in VB.NET, but it keeps throwing an error.
container.
    ConfigureAutoRegistration().
    Include([If].ImplementsITypeName, Sub(x, y)
                If x.IsClass Then
                    y.Configure(Of Interception)().
                        SetDefaultInterceptorFor(x, New VirtualMethodInterceptor())
                End If
            End Sub)
The error is:
Argument not specified for parameter 'type' of 'Public Shared Function ImplementsITypeName(type As System.Type) As Boolean'.
Now obviously I need to specify some type, but the point is that I need to autoregister, so why do I need to provide a type? Also, the C# code doesn't require it, and neither does the code sample (see below).
var container = new UnityContainer();
container
    .ConfigureAutoRegistration()
    .ExcludeAssemblies(a => a.GetName().FullName.Contains("Test"))
    .Include(If.Implements<ILogger>, Then.Register().UsingPerCallMode())
    .Include(If.ImplementsITypeName, Then.Register().WithTypeName())
    .Include(If.Implements<ICustomerRepository>, Then.Register().WithName("Sample"))
    .Include(If.Implements<IOrderRepository>,
             Then.Register().AsSingleInterfaceOfType().UsingPerCallMode())
    .Include(If.DecoratedWith<LoggerAttribute>,
             Then.Register()
                 .As<IDisposable>()
                 .WithTypeName()
                 .UsingLifetime<MyLifetimeManager>())
    .Exclude(t => t.Name.Contains("Trace"))
    .ApplyAutoRegistration();
http://autoregistration.codeplex.com/
I ended up using StructureMap instead.
