Is there a way to test only part of your code with JMeter?
My scenario is as follows: a user sends an HTTP request. The body data gets inserted into a table, which is read by another service and put on a Kafka topic. I would like to do the performance testing only from the point when the data gets inserted into the db until it is put on the Kafka topic.
A normal JMeter HTTP Request wouldn't work, since the HTTP response doesn't correspond to the point when the data has been processed and put on the Kafka topic.
Also, I believe I can't just use a JDBC Request, since the data from the request that gets inserted into the db produces a cascade of other inserts, and all of this data is needed by that other service.
Any help would be much appreciated.
You can do the following:
Use an HTTP Request sampler to kick off the transaction
Use a While Controller and a JSR223 Sampler to wait until the message appears in Kafka (see How to Do Kafka Testing With JMeter; a sketch follows below)
Put the While Controller under a Transaction Controller to measure end-to-end processing time
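As a rough illustration, the JSR223 Sampler body could poll the topic once per loop iteration and expose the result as a JMeter variable for the While Controller condition. This is only a sketch: the bootstrap servers, topic, group id and variable name are placeholders, and it assumes the kafka-clients jar is on JMeter's classpath (a JSR223 Groovy sampler accepts the Java syntax below).

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

// Placeholder connection settings; offset handling is deliberately simplified.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "jmeter-check");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
consumer.subscribe(Collections.singletonList("your-topic"));
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
consumer.close();

// Expose the result so the While Controller condition can read it,
// e.g. ${__jexl3("${messageFound}" != "true")}
vars.put("messageFound", String.valueOf(!records.isEmpty()));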
I have a Flink application that consumes incoming messages on a Kafka topic with multiple partitions, does some processing, then sends them to a sink that sends them over HTTP to an external service. Sometimes the downstream service is down, and the stream processing needs to stop until it is back in action.
There are two approaches I am considering.
1. Throw an exception when the HTTP sink fails to send the output message. This will cause the task and job to restart according to the configured restart strategy. Eventually the downstream service will be back and the system will continue where it left off.
2. Have the sink sleep and retry on failure; it can do this continually until the downstream service is back.
From what I understand, and from my PoC, with option 1 I will lose exactly-once guarantees, since the sink itself is external state. As far as I can see, you cannot make a simple HTTP endpoint transactional, as it would need to be to implement TwoPhaseCommitSinkFunction.
With option 2 this is less of an issue, since the pipeline will not proceed until the sink makes a successful write, and I can rely on back pressure throughout the system to pause the retrieval of messages from the Kafka source.
The main questions I have are:
Is it a correct assumption that you can't make a TwoPhaseCommitSinkFunction for a simple HTTP endpoint?
Which of the two strategies, or neither, makes the most sense?
Am I missing simpler obvious solutions?
I think you can try AsyncIO in Flink - https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/.
Try to make the HTTP endpoint send its response only once all work for the request has been done, e.g. once the HTTP server has finished processing the request and committed the result to the DB. Then use an async HTTP client in the AsyncIO operator. The AsyncIO operator will wait until the response is received. If any error happens, the Flink streaming pipeline will fail and restart based on the recovery strategy.
All requests to the HTTP endpoint that have not yet received a response sit in the internal buffer of the AsyncIO operator, and if the streaming pipeline fails, the requests pending in the buffer are saved in the checkpoint state. The operator also triggers back pressure when the internal buffer is full.
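A minimal sketch of that approach, assuming a String stream and a placeholder endpoint URL (not your actual pipeline, just an illustration of the AsyncIO pattern; on HTTP failure it completes exceptionally, which fails the job and triggers the configured restart strategy):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Collections;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class HttpAsyncFunction extends RichAsyncFunction<String, String> {
    private transient HttpClient client;

    @Override
    public void open(Configuration parameters) {
        client = HttpClient.newHttpClient();
    }

    @Override
    public void asyncInvoke(String message, ResultFuture<String> resultFuture) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://downstream.example.com/ingest")) // placeholder URL
                .POST(HttpRequest.BodyPublishers.ofString(message))
                .build();
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .whenComplete((response, error) -> {
                    if (error != null || response.statusCode() >= 400) {
                        // Failing the future fails the job; pending requests are
                        // restored from checkpoint state on restart.
                        resultFuture.completeExceptionally(
                                new RuntimeException("downstream unavailable", error));
                    } else {
                        resultFuture.complete(Collections.singleton(response.body()));
                    }
                });
    }
}

Wiring it up would then look something like AsyncDataStream.unorderedWait(input, new HttpAsyncFunction(), 30, TimeUnit.SECONDS, 100), where the capacity of 100 bounds in-flight requests and provides the back pressure described above.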
I'm using Spring Boot 2.1.4, Kafka and React as a frontend UI. I have a user registration process started from the UI which requires a backend process and its data before the registration is complete.
The flow is like this:
The frontend UI makes a request to an API which returns a token and puts a message onto a request Kafka queue
The message is processed by a backend process (which takes approximately 1 minute)
When the process is finished, a message with the token and data is written to a reply Kafka queue which indicates the process is complete
What I want is the frontend UI to make the initial API request which returns immediately, show a loading screen and display a ready message when the registration process is complete.
I have thought of a couple of options:
Attach a KafkaListener to the reply queue. Once the reply message appears, store the response and token in a datastore (e.g. Redis). Provide an API to the UI which checks the datastore for the token. The UI will poll this API every 10 seconds. If the response is not available after 2 mins, the user will be asked to check back later.
Use WebSockets with React. I've not used WebSockets before, but the only thing I'm unsure of is whether, if I have multiple instances of the registration microservice, this will cause any issues with client/API communication.
Any recommendations or any other options on the best way to handle this?
Attach a KafkaListener to the reply queue. Once the reply message appears, store the response and token in a datastore (e.g. Redis). Provide an API to the UI which checks the datastore for the token. The UI will poll this API every 10 seconds. If the response is not available after 2 mins, the user will be asked to check back later.
This will work. I would use the built-in RocksDB for storage though, just for simplicity. Below is the documentation for exposing a state store to be queryable outside of Kafka Streams.
https://kafka.apache.org/20/documentation/streams/developer-guide/interactive-queries.html
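As a rough sketch of what that lookup could look like with the interactive-queries API from the linked docs (the store name "registration-store" is a placeholder and must match a store materialized by your topology; newer Kafka versions replace this two-argument store() call with StoreQueryParameters):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class RegistrationStatusService {
    private final KafkaStreams streams;

    public RegistrationStatusService(KafkaStreams streams) {
        this.streams = streams;
    }

    // Called by the polling API: returns the reply payload for a token,
    // or null if the backend process has not finished yet.
    public String lookup(String token) {
        ReadOnlyKeyValueStore<String, String> store =
                streams.store("registration-store", QueryableStoreTypes.keyValueStore());
        return store.get(token);
    }
}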
Use WebSockets with React. I've not used WebSockets before, but the only thing I'm unsure of is whether, if I have multiple instances of the registration microservice, this will cause any issues with client/API communication.
It can potentially cause issues. It depends on the implementation of the registration service. You won't know which instance of the registration service a client will establish a connection with, so session state needs to be managed in an external datasource like Redis, or you would have to use a load balancer that supports sticky sessions (a bit of an archaic solution).
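One hedged sketch of the external-datasource option: each instance subscribes to a Redis pub/sub channel (registered with a RedisMessageListenerContainer, not shown) and relays completion events to its own connected WebSocket clients via Spring's SimpMessagingTemplate. The destination name below is made up for illustration:

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;
import org.springframework.messaging.simp.SimpMessagingTemplate;

public class RegistrationCompleteRelay implements MessageListener {
    private final SimpMessagingTemplate template;

    public RegistrationCompleteRelay(SimpMessagingTemplate template) {
        this.template = template;
    }

    @Override
    public void onMessage(Message message, byte[] pattern) {
        String token = new String(message.getBody());
        // Every instance receives the event; only the instance holding the
        // client's WebSocket session actually delivers it to the browser.
        template.convertAndSend("/topic/registration/" + token, "complete");
    }
}

With this pattern no sticky sessions are needed: whichever instance finishes the Kafka processing publishes to Redis, and the instance holding the socket pushes to the client.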
I have a page with multiple widgets, each receiving data from a different query in the backend. Doing a request for each will exhaust the limit the browser puts on the number of parallel connections and will serialize some of them. On the other hand, doing one request that returns one response means it will be as slow as the slowest query (I have no a priori knowledge about which query will be slowest).
So I want to create one request such that the backend runs the queries in parallel and writes each result as it is ready, and for the frontend to handle each result as it arrives. At the HTTP level I believe it can be just one body with several JSON documents, or maybe a multipart response.
Is there an AngularJS extension that handles the frontend side of things? Optimally something that works well with whatever can be done in the Java backend (I haven't started investigating my options there).
I have another suggestion to solve your problem, but I am not sure you would be able to implement such a thing, as from your question it is not very clear what you can or cannot do.
You could implement WebSockets: the server would either notify the front-end once the data has been fetched, or send the data via WebSockets right away.
In the first approach, you would send a request to the server to fetch all the data for your dashboard. Once a piece of data is available, you could make a request for that particular piece, and given that the data was fetched a couple of seconds ago, it could be cached on the server and the response would be fast.
The second approach seems a more reasonable one. You would make an HTTP/WebSocket request to the server and wait for the data to arrive over WebSocket.
I believe this would be the most robust and efficient way to implement what you are asking for.
https://github.com/dfltr/jQuery-MXHR
This plugin allows you to parse a response that contains several parts (multipart) by providing a callback to parse each part. This can be used in all our frontends to support responses for multiple pieces of data (widgets) in one request. The server side will receive one request and use servlet 3 async support (or whatever exists in other languages) to 'park' it, sending multiple queries and writing each response to the request as each query returns (with the right multipart boundary).
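For the server side, a minimal servlet 3 async sketch could look like the following (the boundary string and the two "queries" are placeholders; a real implementation would run actual backend queries):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.CompletableFuture;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/widgets", asyncSupported = true)
public class WidgetServlet extends HttpServlet {
    private static final String BOUNDARY = "widget-boundary"; // placeholder

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        resp.setContentType("multipart/mixed; boundary=" + BOUNDARY);
        AsyncContext ctx = req.startAsync(); // 'park' the request

        // Run both queries in parallel; write each part as its query finishes.
        CompletableFuture<Void> w1 = CompletableFuture.supplyAsync(this::slowQuery1)
                .thenAccept(json -> writePart(ctx, json));
        CompletableFuture<Void> w2 = CompletableFuture.supplyAsync(this::slowQuery2)
                .thenAccept(json -> writePart(ctx, json));

        // Close the multipart body only after both parts have been written.
        CompletableFuture.allOf(w1, w2).thenRun(() -> {
            writeRaw(ctx, "--" + BOUNDARY + "--\r\n");
            ctx.complete();
        });
    }

    private synchronized void writePart(AsyncContext ctx, String json) {
        writeRaw(ctx, "--" + BOUNDARY + "\r\nContent-Type: application/json\r\n\r\n" + json + "\r\n");
    }

    private synchronized void writeRaw(AsyncContext ctx, String chunk) {
        try {
            PrintWriter out = ctx.getResponse().getWriter();
            out.write(chunk);
            out.flush(); // push each part to the client as it is ready
        } catch (IOException e) {
            ctx.complete();
        }
    }

    private String slowQuery1() { return "{\"widget\":1}"; } // placeholder
    private String slowQuery2() { return "{\"widget\":2}"; } // placeholder
}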
Another example can be found here: https://github.com/anentropic/stream.
While both of these may not be directly compatible with AngularJS, the code does not seem complex to port.
In my Apache Camel application, I have a very simple route:
from("aws-sqs://...")
.aggregate(constant(true), new AggregationStrategy())
.completionSize(100)
.to("SEND_AGGREGATE_VIA_HTTP");
That is, it takes messages from AWS SQS, groups them in batches of 100, and sends them via HTTP somewhere.
Exchanges with messages from SQS are completed successfully upon entering the aggregation stage, and the SqsConsumer deletes them from the queue at that point.
The problem is that something might happen to an aggregated exchange (its delivery might fail), and messages will be lost. I would really like the original exchanges to be completed successfully (messages deleted from the queue) only when the aggregated exchange they're part of is also completed successfully (the batch of messages is delivered). Is there a way to do this?
Thank you.
You could set deleteAfterRead to false and manually delete the messages after you've sent them to your HTTP endpoint; you could use a bean or a processor and send the proper SQS delete requests through the AWS SDK library. It's a workaround, granted, but I don't see a better way of doing it.
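A hedged sketch of that workaround. It assumes a hypothetical CollectingAggregationStrategy that copies each message's receipt handle into a receiptHandles header on the aggregated exchange (the aws-sqs component exposes the handle as the CamelAwsSqsReceiptHandle header), and that an AmazonSQS client and queue URL are available to the route:

from("aws-sqs://my-queue?deleteAfterRead=false")
    .aggregate(constant(true), new CollectingAggregationStrategy())
    .completionSize(100)
    .to("SEND_AGGREGATE_VIA_HTTP") // if this send fails, the processor below never runs
    .process(exchange -> {
        // Delete the originals only after the aggregated HTTP send succeeded.
        // Assumed: sqsClient (AWS SDK v1 AmazonSQS) and queueUrl in scope.
        List<String> handles = exchange.getIn().getHeader("receiptHandles", List.class);
        for (String handle : handles) {
            sqsClient.deleteMessage(new DeleteMessageRequest(queueUrl, handle));
        }
    });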
I'm posting data on my ElasticSearch database.
I've noticed that data is not immediately available; it takes some milliseconds to show up in a GET request. I can live with that (after all, the calls are asynchronous, so this behavior is expected), but in my test code I need to POST some data and retrieve it immediately after. At the moment I'm using a sleep(5) just to be sure the data is available, but how can I synchronize with the db?
To ensure data is available, you can make a refresh request to the corresponding index before the GET/SEARCH:
http://localhost:9200/your_index/_refresh
Or refresh all indexes:
http://localhost:9200/_refresh
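For example, in test code you could replace the sleep(5) with an explicit refresh call; a minimal sketch using the JDK 11 HttpClient (the index name is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RefreshBeforeRead {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Force Elasticsearch to make recently indexed documents searchable.
        HttpRequest refresh = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/your_index/_refresh"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        client.send(refresh, HttpResponse.BodyHandlers.ofString());
        // Documents POSTed before the refresh are now visible to GET/search.
    }
}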