Read request body with Akka-Http and send each line to the message queue on an Actor - akka-stream

I'm googling for an example that could fit my use case, but I haven't found any so far.
I'm writing an Akka WebService that should process a potentially huge plain text request body sending each line to an Actor's incoming message queue.
Could any of you write some code here or just point me to an example page?
I actually have no idea where to start: the big problem for me is dealing with streams in general (in my case I want to use the Akka Streams library).

To get the request body you can use the extractRequestEntity directive to create your route. Once you have the entity stream you can simply dispatch each line of text to the Actor:
import akka.actor.ActorRef
import akka.http.scaladsl.server.Directives.{complete, extractRequestEntity, onComplete}
import akka.http.scaladsl.server.Route
import akka.stream.scaladsl.Framing.delimiter
import akka.util.ByteString

// Lines longer than this will fail the stream.
val maxLineLength = 256
val streamSplitter = delimiter(ByteString("\n"), maxLineLength)

val actorRef: ActorRef = ??? // not specified in question

val route: Route =
  extractRequestEntity { entity =>
    onComplete {
      entity
        .dataBytes
        .via(streamSplitter)                  // split the byte stream into lines
        .map(_.utf8String)
        .runForeach(line => actorRef ! line)  // send each line to the Actor
    } { _ =>
      complete("all lines sent to actor")
    }
  }
The question doesn't specify whether the response depends on the results of the Actor's processing, so the above example simply sends the lines to the Actor and then completes the request with a response containing a simple message.
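If the response did need to reflect the Actor's processing, one option (a sketch only; the per-line reply protocol, the timeout, and the parallelism are assumptions, not part of the question) is to ask the Actor for each line and complete once all replies have arrived:
import akka.http.scaladsl.model.StatusCodes
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.util.{Failure, Success}

implicit val timeout: Timeout = 5.seconds

val askRoute: Route =
  extractRequestEntity { entity =>
    // hypothetical protocol: the Actor is assumed to reply to each line it receives
    val acknowledged: Future[Int] =
      entity
        .dataBytes
        .via(streamSplitter)
        .map(_.utf8String)
        .mapAsync(parallelism = 4)(line => (actorRef ? line).mapTo[String])
        .runFold(0)((count, _) => count + 1)

    onComplete(acknowledged) {
      case Success(count) => complete(s"actor acknowledged $count lines")
      case Failure(ex)    => complete(StatusCodes.InternalServerError -> ex.toString)
    }
  }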
The route can now form the basis of a server.
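For completeness, a minimal sketch of such a server (assuming Akka HTTP 10.1.x with a classic ActorSystem; the system name, host, and port are arbitrary):
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer

implicit val system: ActorSystem = ActorSystem("line-ingest")
implicit val materializer: ActorMaterializer = ActorMaterializer() // needed to run the entity stream

Http().bindAndHandle(route, "localhost", 8080)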

Related

How to run the request multiple times and save the response to a single file in SOAPUI?

I'm developing an app in C# to communicate with SOAPUI.
I have a specific Test Case called TERMEKADAT; I need to call this TestCase more than once and save the data to a file (all response data to ONE file).
In the C# app I can call the test case from the command line and I can store the response, but only once (if I do this multiple times, the dump file gets overwritten, so the previous data is lost).
Can I somehow store all response data in ONE file?
Can SOAPUI append the responses to one file?
I need to store multiple responses in one file.
You could do it like this in a Groovy Script Test Step in your test case:
import com.eviware.soapui.support.XmlHolder
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def resp = new XmlHolder(context.response) //get the response object
def respXml = resp.getXml() //get the XML string from the response
def respFile = new File("/your/dir/yourFile.txt") //target file for the collected responses
respFile.append(respXml) //append() adds to the end of the file instead of overwriting it

scala - Gatling - I can't seem to use Session Variables stored from a request in a subsequent Request

The code:
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class StarWarsBasicExample extends Simulation {

  // 1 Http Conf
  val httpConf = http.baseUrl("https://swapi.dev/api/films/")

  // 2 Scenario Definition
  val scn = scenario("Star Wars API")
    .exec(http("Get Number")
      .get("4")
      .check(jsonPath("$.episode_id")
        .saveAs("episodeId"))
    )
    .exec(session => {
      val movie = session("episodeId").as[String]
      session.set("episode", movie)
    }).pause(4)
    .exec(http("$episode")
      .get("$episode"))

  // 3 Load Scenario
  setUp(
    scn.inject(atOnceUsers(1)))
    .protocols(httpConf)
}
I'm trying to grab a variable from the first GET request and inject that variable into a second request, but I am unable to do so despite using the documentation. There might be something I'm not understanding.
When I use breakpoints and step through the process, it appears the session execution happens AFTER both of the other requests have been completed (by which time it is too late). I can't seem to make that session execution happen between the two requests.
Already answered on Gatling's community mailing list.
"$episode" is not correct Gatling Expression Language syntax. "${episode}" is correct.

Any way to reuse a Source[ByteString, Any] (without keeping it all in memory)

Is there any way to make a Source reusable?
I have an akka-http server that receives a large file upload and then streams the (chunked) data to subscriber websockets and other HTTP servers via HTTP POST. In both cases, there is an API that accepts a Source[ByteString, Any]:
HttpEntity(..., source) in the case of the HTTP POST
BinaryMessage(source) for the websocket
Using these APIs has some advantages over the versions that take a single ByteString (Only need to do a single HTTP post, can recreate the same chunked message, etc.).
So is there a way to make something like this work (without buffering everything in memory)?
// `DataSource` and `source` were not defined in the question; presumably:
type DataSource = Source[ByteString, Any]
val source: DataSource = ??? // the incoming upload

val allSinks: Seq[Sink[Source[ByteString, Any], Future[Done]]] = ???

val g = RunnableGraph.fromGraph(GraphDSL.create(allSinks) { implicit builder => sinks =>
  import GraphDSL.Implicits._
  // Broadcast with an output for each subscriber
  val broadcast = builder.add(Broadcast[DataSource](sinks.size))
  Source.single(source) ~> broadcast
  sinks.foreach(broadcast ~> _)
  ClosedShape
})
Sources are Not Reusable
Unfortunately a Source cannot be reused after it has been exhausted. The underlying "source" of the data can be re-used to create separate Source values but each value can be run on at most one stream.
Persistence
If replay capabilities are a requirement then the data being streamed will need to be stored in a persistence mechanism to facilitate replay later. This mechanism could be a filesystem, database, Kafka,...
Below is a mockup using the filesystem.
The incoming POST message body can be streamed to a file in write-mode:
import java.nio.file.Paths
import java.nio.file.StandardOpenOption.{CREATE_NEW, WRITE}
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.FileIO
import scala.util.{Failure, Success}

post {
  path(Segment) { fileName =>
    extractRequestEntity { entity =>
      // Stream the incoming bytes straight to disk; nothing is buffered in memory.
      onComplete(
        entity
          .dataBytes
          .runWith(FileIO.toPath(Paths.get(fileName), Set(CREATE_NEW, WRITE)))
      ) {
        case Success(ioResult) =>
          complete(StatusCodes.OK -> s"wrote ${ioResult.count} bytes")
        case Failure(ex) =>
          complete(StatusCodes.InternalServerError -> ex.toString)
      }
    }
  }
}
There is then no need to create a Broadcast hub; you simply respond to GET requests with the contents of the file:
path(Segment) { fileName =>
  getFromFile(fileName)
}
This takes advantage of the fact that most OSes will allow you to write to a file as a stream of bytes while at the same time reading from a file as a stream of bytes...
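As a sketch of the consuming side (the helper, file name, and content type below are illustrative, not from the question), each subscriber can be handed its own fresh Source backed by the same file, so nothing needs to be buffered in memory:
import java.nio.file.Paths
import akka.http.scaladsl.model.ws.BinaryMessage
import akka.http.scaladsl.model.{ContentTypes, HttpEntity}
import akka.stream.scaladsl.{FileIO, Source}
import akka.util.ByteString

// Each call produces a brand-new Source that independently re-reads the file.
def fileSource(fileName: String): Source[ByteString, Any] =
  FileIO.fromPath(Paths.get(fileName))

// Chunked HTTP POST body for a downstream server
val postEntity = HttpEntity(ContentTypes.`application/octet-stream`, fileSource("upload.bin"))

// Streamed websocket message for a subscriber
val wsMessage = BinaryMessage.Streamed(fileSource("upload.bin"))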

Returning multiple items with Servlet

Good day, I'm working on a Servlet that must return a PDF file and the message log for the processing done with that file.
So far I'm passing a boolean which I evaluate and return either the log or the file, depending on the user selection, as follows:
//If user Checked the Download PDF
if (isDownload) {
    byte[] oContent = lel;
    response.setContentType("application/pdf");
    response.addHeader("Content-disposition", "attachment;filename=test.pdf");
    out = response.getOutputStream();
    out.write(oContent);
} //If user Unchecked Download PDF and only wants to see logs
else {
    System.out.println("idCompany: " + company);
    System.out.println("code: " + code);
    System.out.println("date: " + dateValid);
    System.out.println("account: " + acct);
    System.out.println("documentType: " + type);

    String result = readFile("/home/gianksp/Desktop/Documentos/Logs/log.txt");
    System.setOut(System.out);

    // Get the printwriter object from response to write the required json object to the output stream
    PrintWriter outl = response.getWriter();
    // Assuming your json object is **jsonObject**, perform the following, it will return your json object
    outl.print(result);
    outl.flush();
}
Is there an efficient way to return both items at the same time?
Thank you very much
The HTTP protocol doesn't allow you to send more than one HTTP response per HTTP request. With this restriction in mind, you can consider the following alternatives:
Let the client fire two HTTP requests, for example by specifying an onclick event handler, or, if you returned an HTML page in the first response, you could fire another request on window.load or page.ready;
Provide your user with an opportunity to choose what they'd like to download and act in the servlet accordingly: if they chose PDF, return the PDF; if they chose text, return the text; and if they chose both, pack them in an archive and return it.
Note that the first variant is both clumsy and not user-friendly and, as far as I'm concerned, should be avoided at all costs. A page where the user controls what they get is a much better alternative.
You could wrap them in a DTO object or place them in the session to reference from a JSP.

httpClient, problem to do a POST of a Multipart in Chunked mode...

Well, I am wondering how I can manage to post a multipart in chunked mode. I have 3 parts, and the files can be big, so they must be sent in chunks.
Here is what I do:
MultipartEntity multipartEntity = new MultipartEntity() {
    @Override
    public boolean isChunked() {
        return true;
    }
};
multipartEntity.addPart("theText", new StringBody("some text", Charset.forName("UTF-8")));
FileBody fileBody1 = new FileBody(file1);
multipartEntity.addPart("theFile1", fileBody1);
FileBody fileBody2 = new FileBody(file2);
multipartEntity.addPart("theFile2", fileBody2);
httppost.setEntity(multipartEntity);

HttpParams params = new BasicHttpParams();
HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
HttpClient httpClient = new DefaultHttpClient(params);
HttpResponse httpResponse = httpClient.execute(httppost);
On the server side, I do receive the 3 parts, but the files, for example, are not chunked; they are received in one piece... basically, in total I see only 4 boundaries: 3 --xxx, and 1 at the end, --xxx--.
I thought the override of isChunked would do the trick, but no... ;(
Is what I am trying to do feasible? How could I make that work?
Thanks a lot.
Fab
To generate a chunked multipart body, one of the parts must have its size unavailable, like a part that is streaming.
For example, let's assume your file2 is a really big video. You could replace this part of your code:
FileBody fileBody2 = new FileBody(file2);
multipartEntity.addPart("theFile2", fileBody2);
with this code:
final InputStreamBody binVideo = new InputStreamBody(new FileInputStream(file2), "video/mp4", file2.getName());
multipartEntity.addPart("video", binVideo);
Since the third part is now an InputStream instead of a File, your multipart HTTP request will have the header Transfer-Encoding: chunked.
Usually any decent server-side HTTP framework (such as the Java EE Servlet API) will hide transport details such as transfer coding from the application code. Just because you are not seeing chunk delimiters when reading from the content stream does not mean that chunked coding was not used by the underlying HTTP transport.
You can see exactly what kind of HTTP packets HttpClient generates by activating the wire logging as described here:
http://hc.apache.org/httpcomponents-client-ga/logging.html
