The App Engine FAQ says that compression is used automatically when the browser supports it and that I don't need to modify my application in any way.
My question is, does that apply to Channel API messages too?
I have an application that needs to send relatively large JSON (text) data through a persistent connection and I'm hoping I can get things through faster if they are compressed.
If not, I can think of a workaround: have the server send just a ping through the channel when a big payload is ready, and have the browser then make a GET request to fetch it (which would be compressed "automatically"), but that adds the latency of another round trip.
Data sent over the connection the Channel API uses is gzip compressed.
However, Channel API messages are limited to 32K uncompressed, so for anything bigger than that you'll need to use the ping/GET method anyway.
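If you do end up needing the ping/GET fallback for oversized payloads, the client side can stay fairly small. Here's a minimal sketch, assuming a hypothetical /payload endpoint and a made-up message shape; the channel setup itself is the standard Channel API JS client:

```js
// Standard Channel API client setup; 'token' is created server-side.
var channel = new goog.appengine.Channel(token);
var socket = channel.open();

socket.onmessage = function (message) {
  var ping = JSON.parse(message.data);
  if (ping.type === 'big-payload-ready') { // assumed message shape
    var xhr = new XMLHttpRequest();
    // Hypothetical endpoint; the GET response is gzipped automatically
    // because the browser sends Accept-Encoding: gzip.
    xhr.open('GET', '/payload?id=' + encodeURIComponent(ping.payloadId));
    xhr.onload = function () {
      handleBigPayload(JSON.parse(xhr.responseText)); // your own handler
    };
    xhr.send();
  }
};
```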
I have two applications, a Spring Boot backend and a React frontend. I need to load a lot of data (let's say 100,000 objects, each with 3 Integer fields) and present it on a Leaflet map. However, I don't know which protocol I should use. I thought about two approaches:
Do it with REST, fetching 1,000 (or more/less) objects per request, with a progress bar on the frontend so the user doesn't keep refreshing the page thinking something is wrong.
Do it with a WebSocket, so it is faster? Same idea with the progress bar, but I am worried that if the user refreshes the page, the backend will keep streaming data even though the frontend's connection is gone and a new one is established; for the new connection the process begins again too, and so on.
If it is worth mentioning, I am using spring-boot 2.3.1 together with Spring Cloud (Eureka, spring-cloud-gateway). The WebSocket layer I chose is SockJS, and data is streamed via org.springframework.messaging.simp.SimpMessagingTemplate.
If you have that amount of data and a lot of read/write operations, I would recommend not returning it in either a WebSocket or a REST call (Reactor or MVC); sending a big amount of data over TCP has its issues. What I would recommend is quite simple: save the data to storage (AWS S3, for example), return the S3 object URL, and have the client side read the data from S3 directly.
Alternatively, you can have a message queue the client subscribes to (pub/sub): publish the data on the server side and subscribe to it on the client side, but this may be overkill.
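For illustration, a minimal sketch of the S3 flow in Node (the question's backend is Spring Boot, but the idea is identical; the bucket name and key scheme here are made up, and the presigned URL keeps the bucket private):

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function exportObjects(objects) {
  const key = `exports/${Date.now()}.json`; // hypothetical key scheme
  await s3.putObject({
    Bucket: 'my-export-bucket', // hypothetical bucket
    Key: key,
    Body: JSON.stringify(objects),
    ContentType: 'application/json',
  }).promise();
  // Return a short-lived presigned URL; the React client fetches it directly.
  return s3.getSignedUrlPromise('getObject', {
    Bucket: 'my-export-bucket',
    Key: key,
    Expires: 300, // seconds
  });
}
```

The client then just does a plain fetch of the returned URL, so the 100,000 objects never travel through your backend's HTTP layer.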
If you are set on REST, you can use multipart data; see the Stack Overflow question here:
Multipart example
We have a few Node.js servers where the details and payload of each request need to be logged to SQL Server for reporting and other business analytics.
The volume of requests and the similarity of needs between servers has me wanting to approach this with a centralized logging service. My first instinct is to use something like Amazon SQS and let it act as a buffer, with either SQL Server directly or a small logging server that makes database calls as directed by SQS.
Does this sound like a good use for SQS or am I missing a widely used tool for this task?
The solution will really depend on how much data you're working with, as each service has limitations. To name a few:
SQS
First off, since you're dealing with logs, you don't want duplication. With this in mind, you'll need a FIFO (first-in, first-out) queue.
SQS by itself doesn't really invoke anything. What you'll want to do here is set up the queue, then submit messages via the AWS JS SDK. Then, when you get the message back in your callback, get the message ID and pass the data to an invoked Lambda function (you can write those in Node.js as well) which stores the info you need in your database.
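A minimal sketch of the submit side with the AWS JS SDK (the queue URL and message shape are assumptions; note that FIFO queues require a MessageGroupId, and the deduplication ID is what prevents duplicate log rows):

```js
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' }); // assumed region

async function logRequest(entry) {
  return sqs.sendMessage({
    // Hypothetical queue URL; FIFO queue names must end in .fifo.
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/request-logs.fifo',
    MessageBody: JSON.stringify(entry),
    MessageGroupId: 'request-logs',          // required for FIFO queues
    MessageDeduplicationId: entry.requestId, // assumed unique per request
  }).promise();
}
```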
That said, it's important to know that messages in an SQS queue have a size limit:
The minimum message size is 1 byte (1 character). The maximum is 262,144 bytes (256 KB).

To send messages larger than 256 KB, you can use the Amazon SQS Extended Client Library for Java. This library allows you to send an Amazon SQS message that contains a reference to a message payload in Amazon S3. The maximum payload size is 2 GB.
CloudWatch Logs
(not to be confused with the higher-level CloudWatch service itself, which is more about metrics)
The idea here is that you submit event data to CloudWatch Logs.
It also has a size limit:
Event size: 256 KB (maximum). This limit cannot be changed
Unlike SQS, CloudWatch logs can be automated to pass log data to Lambda, which then can be written to your SQL server. The AWS docs explain how to set that up.
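For reference, a subscription delivers log data to Lambda gzipped and base64-encoded, so the handler has to unpack it first. A sketch, with the SQL write left as a comment since the table shape is yours:

```js
const zlib = require('zlib');

exports.handler = async (event) => {
  // CloudWatch Logs subscriptions deliver a base64-encoded, gzipped payload.
  const payload = JSON.parse(
    zlib.gunzipSync(Buffer.from(event.awslogs.data, 'base64')).toString('utf8')
  );
  for (const logEvent of payload.logEvents) {
    // Insert logEvent.timestamp and logEvent.message into SQL Server here,
    // e.g. with the 'mssql' package.
  }
};
```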
S3
Simply set up a bucket and have your servers write data out to it. The nice thing here is that since S3 is meant for storing large files, you really don't have to worry about the previously mentioned size limitations. S3 buckets also have events which can trigger Lambda functions. Then you can happily go on your way sending out log data.
If your log data gets big enough, you can scale out to something like AWS Batch, which gets you a cluster of containers that can be used to process log data. Finally, you also get a data backup: if your DB goes down, you've got the log data stored in S3 and can throw together a script to load everything back up. You can also use Lifecycle Policies to migrate old data to lower-cost storage, or remove it altogether.
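A sketch of the S3-triggered side, assuming the bucket's event notification invokes Lambda on object creation and that the servers write JSON log files (the parsing and SQL insert are placeholders):

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  for (const record of event.Records) {
    const obj = await s3.getObject({
      Bucket: record.s3.bucket.name,
      // S3 URL-encodes object keys in event records (spaces arrive as '+').
      Key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
    }).promise();
    const entries = JSON.parse(obj.Body.toString('utf8')); // assumed JSON log files
    // Bulk-insert 'entries' into SQL Server here.
  }
};
```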
I have a page with multiple widgets, each receiving data from a different query in the backend. Doing a request for each will exhaust the limit the browser puts on the number of parallel connections and will serialize some of them. On the other hand, doing one request that returns one response means it will be as slow as the slowest query (I have no a priori knowledge of which query will be slowest).
So I want to create one request such that the backend runs the queries in parallel and writes each result as it is ready, and the frontend handles each result as it arrives. At the HTTP level I believe it can be just one body with several JSON documents, or maybe a multipart response.
Is there an AngularJS extension that handles the frontend side of things? Optimally, something that works well with whatever can be done in the Java backend (I haven't started investigating my options there).
I have another suggestion to solve your problem, but I am not sure you would be able to implement such a thing, as from your question it is not very clear what you can or cannot do.
You could implement WebSockets and the server would be able to notify the front-end about the data being fetched or it could send the data via WebSockets right away.
In the first approach, you would send a request to the server to fetch all the data for your dashboard. Once a piece of data is available, you could make a request for that particular piece; given that the data was fetched a couple of seconds earlier, it could be cached on the server and the response would be fast.
The second approach seems the more reasonable one. You would make an HTTP/WebSocket request to the server and wait for the data to arrive over the WebSocket.
I believe this would be the most robust and efficient way to implement what you are asking for.
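As a sketch of that second approach (the endpoint, the message shape, and renderWidget are all assumptions, not an existing API):

```js
var ws = new WebSocket('wss://example.com/dashboard'); // hypothetical endpoint

ws.onopen = function () {
  // One request kicks off all the backend queries in parallel.
  ws.send(JSON.stringify({ action: 'load-dashboard' })); // assumed protocol
};

ws.onmessage = function (event) {
  // Each result arrives as soon as its query finishes, one message per widget.
  var result = JSON.parse(event.data);
  renderWidget(result.widgetId, result.data); // hypothetical render hook
};
```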
https://github.com/dfltr/jQuery-MXHR
This plugin allows you to parse a response that contains several parts (multipart) by providing a callback to parse each part. This can be used in all our frontends to support responses for multiple pieces of data (widgets) in one request. The server side will receive one request and use Servlet 3 async support (or whatever exists in other languages) to 'park' it, sending multiple queries and writing each response to the request as each query returns (with the right multipart boundary).
Another example can be found here: https://github.com/anentropic/stream.
While both of these may not be compatible with AngularJS, the code does not seem complex to port.
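If multipart parsing turns out to be awkward, the same one-request/many-results effect can be had with newline-delimited JSON over a single XHR. A sketch, where the /widgets endpoint and the one-object-per-line framing are assumptions, not part of the plugins above:

```js
var xhr = new XMLHttpRequest();
xhr.open('GET', '/widgets'); // hypothetical endpoint streaming one JSON object per line
var seen = 0;

xhr.onprogress = function () {
  // responseText grows each time the server flushes a query result.
  var lines = xhr.responseText.split('\n');
  // The last element may be a partial line, so leave it for the next event.
  for (; seen < lines.length - 1; seen++) {
    renderWidget(JSON.parse(lines[seen])); // hypothetical render hook
  }
};
xhr.send();
```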
Hi, I am currently using the Channel API for my project. My client is a signage player which receives data from the App Engine server only when a user changes media content. App Engine sends data to the client only once or twice a day. Do you think the Channel API is overkill for this? What are some other alternatives?
Overall, I'd think not. How many clients will be connected?
Per https://cloud.google.com/appengine/docs/quotas?hl=en#Channel the free quota is 200 channel-hours/day, so if you have no more than 8 clients connected you'll be within the free quota -- no "overkill".
Even beyond that, per https://cloud.google.com/appengine/pricing , there's "no additional charge" beyond the computational resources that keeping the channel open entails -- I don't have exact numbers, but I don't think those resources would be "overkill" compared with alternatives such as reasonably frequent polling by the clients.
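For comparison, the polling alternative is only a few lines on the client (the endpoint, the version field, and the reload hook are all made up):

```js
var currentVersion = null;

// Hypothetical polling fallback: check for new content every 5 minutes.
setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/api/current-playlist'); // hypothetical endpoint
  xhr.onload = function () {
    var playlist = JSON.parse(xhr.responseText);
    if (playlist.version !== currentVersion) { // assumed version field
      currentVersion = playlist.version;
      reloadMedia(playlist); // hypothetical player hook
    }
  };
  xhr.send();
}, 5 * 60 * 1000);
```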
According to the Channel API documentation (https://cloud.google.com/appengine/features/#channel), "The Channel API creates a persistent connection between an application and its users, allowing the application to send real time messages without the use of polling." IMHO, yours might not be the best use case for it.
You may want to take a look at the Task Queue API (https://cloud.google.com/appengine/features/#taskqueue) as an alternative for sending data from App Engine to the client.
I'm writing a p2p chess game that sends 2 byte messages back and forth (e.g. e4 or c4). I'm considering the use of GAE Channel API. I noticed that this API causes the browser to send a heartbeat message to the server with POST URL https://849.talkgadget.google.com/talkgadget/dch/bind?VER=8&clid=...
That fires about every second. I won't be charged for the response data and response headers for those heartbeat requests, correct?
Also, when I send data from the server to a browser over a channel, am I charged for only the json string itself or all http header/payload packets?
Google has a newer (and totally free!) API you should look at instead of the Channel API (unless its restrictions can't be worked around).
GCM (Google Cloud Messaging) is free, with a few restrictions like payload size (2 KB in some cases), but it handles everything for you: queuing, broadcast to all, broadcast to topics, one-to-one messaging, battery-efficient mobile libraries (Android and iOS), native Chrome support, etc.
https://developers.google.com/cloud-messaging/
Make sure to also see this Stack Overflow answer for GCM implementation tips: https://stackoverflow.com/a/31848496/2213940