I have an Akka Streams source from a message queue, for example RabbitMQ. For each message I want to execute an HTTP request, map the response to an object, and proceed downstream.
Is this possible with a flow from Akka HTTP (Http().outgoingConnection), or should the request be executed inside a map operation?
This is exactly what Http().outgoingConnection is used for (as mentioned in the question):
type MQMessage = ???
val messageToRequest : (MQMessage) => HttpRequest = ???

type ObjectType = ???
val responseToObjectType : (HttpResponse) => ObjectType = ???

val httpHost : String = ???

val messageFlow : Flow[MQMessage, ObjectType, _] =
  Flow[MQMessage].map(messageToRequest)
                 .via(Http().outgoingConnection(httpHost))
                 .map(responseToObjectType)
I'm using OkHttpClient in a Kotlin app to post a file to an API for processing. While the job is running, the API sends back messages to keep the connection alive until the result is complete. So I'm receiving the following (this is what is printed to the console using println()):
{"status":"IN_PROGRESS","transcript":null,"error":null}
{"status":"IN_PROGRESS","transcript":null,"error":null}
{"status":"IN_PROGRESS","transcript":null,"error":null}
{"status":"DONE","transcript":"Hello, world.","error":null}
Which I believe is being separated by a new line character, not a comma.
I figured out how to extract the data by doing the following, but it seems error-prone to me. Is there a more technically correct way to transform this?
data class Status(val status: String?, val transcript: String?, val error: String?)

val myClient = OkHttpClient().newBuilder().build()
val myBody = MultipartBody.Builder().build() // plus some stuff
val myRequest = Request.Builder().url("localhost:8090").method("POST", myBody).build()
val myResponse = myClient.newCall(myRequest).execute()
val myString = myResponse.body?.string()

val myJsonString = "[${myString!!.replace("}", "},")}]".replace(",]", "]")
// Forces the response from "{key:value}{key:value}"
// into a readable json format "[{key:value},{key:value},{key:value}]"
// but hoping there is a more technically sound way of doing this

val myTranscriptions = gson.fromJson(myJsonString, Array<Status>::class.java)
An alternative to your solution would be to use a JsonReader in lenient mode. This allows parsing JSON which does not strictly comply with the specification, such as, in your case, multiple top-level values. It also makes other aspects of parsing lenient, but maybe that is acceptable for your use case.
You could then use a single JsonReader wrapping the response stream, repeatedly call Gson.fromJson and collect the deserialized objects in a list yourself. For example:
val gson = GsonBuilder().setLenient().create()
val myTranscriptions = myResponse.body!!.use {
    val jsonReader = JsonReader(it.charStream())
    jsonReader.isLenient = true
    val transcriptions = mutableListOf<Status>()
    while (jsonReader.peek() != JsonToken.END_DOCUMENT) {
        transcriptions.add(gson.fromJson(jsonReader, Status::class.java))
    }
    transcriptions
}
Though, if the server continuously provides status updates until processing is done, then maybe it would make more sense to directly process each parsed status instead of collecting them all in a list before processing them.
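To illustrate that incremental idea without pulling in any dependencies: since the server separates the objects with newlines, each status can be handled as soon as its line arrives. This is only a sketch in Java (rather than Kotlin) with naive string extraction standing in for a real JSON parser, and the class and method names are made up for the example:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class StatusStream {

    // Naively pulls the transcript value out of a DONE line.
    // A real implementation would use a JSON parser instead of
    // string searching; this only keeps the sketch dependency-free.
    static String transcriptOf(String line) {
        String key = "\"transcript\":\"";
        int start = line.indexOf(key);
        if (start < 0) return null;
        start += key.length();
        int end = line.indexOf('"', start);
        return end < 0 ? null : line.substring(start, end);
    }

    // Reads newline-delimited status objects and reacts to each one as it
    // arrives, returning the transcript once a DONE status shows up.
    static String readUntilDone(BufferedReader reader) {
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains("\"status\":\"DONE\"")) {
                    return transcriptOf(line);
                }
                // IN_PROGRESS updates could be logged or reported here.
            }
            return null;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String body =
            "{\"status\":\"IN_PROGRESS\",\"transcript\":null,\"error\":null}\n"
          + "{\"status\":\"DONE\",\"transcript\":\"Hello, world.\",\"error\":null}\n";
        System.out.println(readUntilDone(new BufferedReader(new StringReader(body))));
        // prints: Hello, world.
    }
}
```

In real code the lenient JsonReader loop does the same job more robustly: act on each deserialized Status inside the while loop instead of adding it to the list.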
I am using the Akka HTTP fileUpload directive, which produces a Source[akka.util.ByteString, Any].
I would like to handle this source in two different threads, like this:
----> Future(check first rows if ok) -> insert object in db -> HTTP response 201 / 400
|
source ---|
|
----> Future(upload file to S3) -> set object to ready / delete if error...
So far, I managed to do something like this:
val f = for {
  uploadResult <- Future(sendFileToS3(filePath, source)) // uploads the file
  (extractedLines, fileSize) <- Future(readFileFromS3(filePath)) // reads the uploaded file
} yield (uploadResult, extractedLines, fileSize)

onComplete(f) {
  case Success((uploadResult, extractedLines, fileSize)) if ... => HTTP KO
  case Success((uploadResult, extractedLines, fileSize)) => HTTP OK with id of the object created
  case Failure(ex) => HTTP KO
}
The problem here is that on large files, the HTTP response is returned when the upload is finished. But what I would like to have is to handle the uploadResult separately from checking the first lines.
Something like
val f = for {
  (extractedLines, fileSize) <- Future(readSource(source))
} yield (extractedLines, fileSize)

onComplete(f) {
  case Success((extractedLines, fileSize)) if ... => HTTP KO
  case Success((extractedLines, fileSize)) =>
    Future(sendFileToS3AndHandle(filePath, source)) // send in another thread
    HTTP OK with id of the object created
  case Failure(ex) => HTTP KO
}
Has anyone had a similar issue and managed to handle it like this?
I have read about using the source twice, but it seems overcomplicated for my use case (and I did not manage to do what I want). I also tried akka-stream's alsoTo, but this does not solve the issue of returning the response as soon as the first-line check is completed.
Thank you for your help or suggestion.
I have a web API exposing one GET endpoint using Akka HTTP. The logic is: take the parameter from the requester, call an external web service using Akka Streams, and, based on the response, query another endpoint, also using Akka Streams.
The first external endpoint call looks like this:
def poolFlow(uri: String): Flow[(HttpRequest, T), (Try[HttpResponse], T), HostConnectionPool] =
  Http().cachedHostConnectionPool[T](host = uri, 80)

def parseResponse(parallelism: Int): Flow[(Try[HttpResponse], T), (ByteString, T), NotUsed] =
  Flow[(Try[HttpResponse], T)].mapAsync(parallelism) {
    case (Success(HttpResponse(_, _, entity, _)), t) =>
      entity.dataBytes.alsoTo(Sink.ignore)
        .runFold(ByteString.empty)(_ ++ _)
        .map(e => e -> t)
    case (Failure(ex), _) => throw ex
  }

def parse(result: String, data: RequestShape): (Coord, Coord, String) =
  (data.src, data.dst, result)

val parseEntity: Flow[(ByteString, RequestShape), (Coord, Coord, String), NotUsed] =
  Flow[(ByteString, RequestShape)] map {
    case (entity, request) => parse(entity.utf8String, request)
  }
And the stream consumer:
val routerResponse = httpRequests
  .map(buildHttpRequest)
  .via(RouterRequestProcessor.poolFlow(uri)).async
  .via(RouterRequestProcessor.parseResponse(2))
  .via(RouterRequestProcessor.parseEntity)
  .alsoTo(Sink.ignore)
  .runFold(Vector[(Coord, Coord, String)]()) {
    (acc, res) => acc :+ res
  }
routerResponse
Then I do some calculations on routerResponse and create a POST to the other external web service.
The second external stream consumer:
def poolFlow(uri: String): Flow[(HttpRequest, Unit), (Try[HttpResponse], Unit), Http.HostConnectionPool] =
  Http().cachedHostConnectionPoolHttps[Unit](host = uri)

val parseEntity: Flow[(ByteString, Unit), (Unit.type, String), NotUsed] =
  Flow[(ByteString, Unit)] map {
    case (entity, _) => parse(entity.utf8String)
  }

def parse(result: String): (Unit.type, String) = (Unit, result)

val res = Source.single(httpRequest)
  .via(DataRobotRequestProcessor.poolFlow(uri))
  .via(DataRobotRequestProcessor.parseResponse(1))
  .via(DataRobotRequestProcessor.parseEntity)
  .alsoTo(Sink.ignore)
  .runFold(List[String]()) {
    (acc, res) => acc :+ res._2
  }
The GET endpoint consumes the first stream and then builds the second request based on the first response.
Notes:
the first external service is fast (1-2 seconds) and the second is slow (3-4 seconds)
the first endpoint is queried with parallelism = 2 and the second with parallelism = 1
the service is running on an AWS ECS cluster and, for test purposes, on a single node
The problem: the web service works for some time, but CPU utilization grows as it handles more requests. I would assume some kind of backpressure is being triggered, and, strangely, the CPU stays highly utilized even after no more requests are being sent.
Does anybody have a clue what's going on?
Hi, the following code works as expected.
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher
val request = HttpRequest(uri = "http://www.google.com")
Http.get(system).singleRequest(request).map(_.entity.dataBytes.runWith(Sink.ignore))
  .onComplete { _ =>
    println("shutting down actor system...")
    system.terminate()
  }
However, if I change http://www.google.com to https://www.google.com like the following:
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher
val request = HttpRequest(uri = "https://www.google.com")
Http.get(system).singleRequest(request).map(_.entity.dataBytes.runWith(Sink.ignore))
  .onComplete { _ =>
    println("shutting down actor system...")
    system.terminate()
  }
I get the following error message:
shutting down actor system...
[ERROR] [02/11/2017 13:13:08.929] [default-akka.actor.default-dispatcher-4] [akka.actor.ActorSystemImpl(default)] Outgoing request stream error (akka.stream.AbruptTerminationException)
Does anyone know why https produces the above error, and how can I fix it?
It is apparently a known issue, see the following tickets:
akka-http#497
akka#18747
The error seems harmless.
I created a client-server system including:
a node.js server (with module ws);
a WebClient;
a QtClient (using Qt5.4 and QWebSocket).
The QtClient sends and receives strings via QWebSocket::sendTextMessage(const QString &message). How can I send an array of strings?
OTHER INFO:
The WebClient sends an array using JSON:
# index.html (WebClient)
socket.onopen = function() {
    var array = {
        value1: "WebClient value1 = v1",
        value2: "WebClient value2 = v2"
    };
    socket.send(JSON.stringify(array), {binary: true, mask: false});
};
# server.js
socket.on('connection', function(ws) {
    ws.on('message', function(message) {
        var array = JSON.parse(message);
        console.log(array["value1"]);
        console.log(array["value2"]);
    });
});
# console node
C:\Users\PietroP\Desktop\cs\v0.3>node server.js
Server connect on http://192.168.1.60:3000/
a user connected
WebClient value1 : v1
WebClient value2 : v2
The QWebSocket class does not directly support sending arrays; you can send binary or text messages. For details, please refer to:
http://doc.qt.io/qt-5/qwebsocket.html
Here is an alternative approach:
You can convert your array into one long string using something like
str = array.toString() // this is pseudocode
in a loop and send it from the sender side. Then on the receiver side you can parse it using a method such as
str.split(...);
Hope that helps!
Edit: You've probably already noticed that in your sample code, JSON.stringify(array) and JSON.parse(message) do exactly this: they convert the object to a string and then parse the string back into an object.
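For completeness, the join/split idea can be sketched as follows. This is an illustrative snippet in Java rather than Qt C++ (in the QtClient you would use QStringList::join and QString::split before calling sendTextMessage), the class name is made up, and the delimiter choice is an assumption: it only works if the separator character never occurs inside a value, which is exactly why JSON is the more robust option:

```java
import java.util.Arrays;
import java.util.List;

public class ArrayOverText {

    // Delimiter assumption: the ASCII unit separator (U+001F) must never
    // appear inside a value, otherwise the split below breaks the payload.
    private static final String SEP = "\u001F";

    // Pack an array of strings into a single text message.
    static String pack(List<String> values) {
        return String.join(SEP, values);
    }

    // Split the received text message back into the original strings.
    static List<String> unpack(String message) {
        return Arrays.asList(message.split(SEP, -1));
    }

    public static void main(String[] args) {
        List<String> values = Arrays.asList("value1 = v1", "value2 = v2");
        String wire = pack(values);       // what sendTextMessage would carry
        System.out.println(unpack(wire)); // prints: [value1 = v1, value2 = v2]
    }
}
```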