I want to be able to monitor issues from a mobile application the same way I do from backend microservices.
I'm not aware of any real-time monitoring solutions for mobile applications out there.
I think it would really help to monitor the mobile application itself and report errors from the app, not only from the backend services. The application is often connected to multiple services and has its own logic, so it seems like the one place to catch all errors and wrong behaviour.
Are there any tools out there?
If, for example, I use mParticle/Segment as a hub to report events, can I somehow connect it to Graphite, which is push-based monitoring? Maybe through SQS / AWS Lambda?
https://www.mparticle.com/integrations
In theory, yes, it's possible to send data to Graphite using a combination of SQS + Lambda. I've tested this by writing some metric data to SQS and using a Node.js Lambda function to read and forward that data to our Carbon endpoint at https://hostedgraphite.com via UDP, per our language guide here.
Having said that, there are some further considerations to take into account to make sure this works, the main one being data format. Graphite/Carbon requires data in a specific format, something that mParticle might not support directly. As such, you will need an AWS Lambda that formats the messages and then forwards them to Graphite (or, optionally, to another SQS queue where another Lambda reads and forwards that data to Graphite).
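To make that concrete, here is a minimal sketch of such a formatting Lambda, assuming an SQS event source mapping and that each message body is a small JSON object with a metric name and value. That message shape, and the CARBON_HOST / CARBON_PORT settings, are assumptions on my part, not anything mParticle guarantees:

```
const dgram = require('dgram');

// Sketch of a formatting/forwarding Lambda (assumptions: SQS trigger, JSON
// message bodies like {"name": "mobile.app.errors", "value": 1}, and
// CARBON_HOST / CARBON_PORT pointing at your Graphite/Carbon endpoint).
exports.handler = async (event) => {
  const socket = dgram.createSocket('udp4');
  const host = process.env.CARBON_HOST;              // e.g. your Hosted Graphite endpoint
  const port = Number(process.env.CARBON_PORT || 2003);

  for (const record of event.Records) {
    const metric = JSON.parse(record.body);
    // Graphite plaintext protocol: "<metric.path> <value> <unix timestamp>\n"
    const line = `${metric.name} ${metric.value} ${Math.floor(Date.now() / 1000)}\n`;
    const buf = Buffer.from(line);
    await new Promise((resolve, reject) =>
      socket.send(buf, 0, buf.length, port, host, (err) => (err ? reject(err) : resolve()))
    );
  }

  socket.close();
};
```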
Related
We have a few Node.js servers where the details and payload of each request need to be logged to SQL Server for reporting and other business analytics.
The volume of requests and the similarity of needs between servers has me wanting to approach this with a centralized logging service. My first instinct is to use something like Amazon SQS and let it act as a buffer, either writing to SQL Server directly or feeding a small logging server that would make the database calls directed by SQS.
Does this sound like a good use for SQS or am I missing a widely used tool for this task?
The solution will really depend on how much data you're working with, as each service has limitations. To name a few:
SQS
First off, since you're dealing with logs, you don't want duplication. With this in mind, you'll need a FIFO (first-in, first-out) queue.
SQS by itself doesn't really invoke anything. What you'll want to do here is set up the queue, then make a call to submit a message via the AWS JS SDK. Then, when you get the message back in your callback, take the message ID and pass that data to an invoked Lambda function (you can write those in Node.js as well) which stores the info you need in your database.
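A rough sketch of that producer side, assuming the AWS JS SDK v2; the queue URL, the writeLogToSqlServer function name and the payload shape are placeholders, not anything prescribed:

```
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();
const lambda = new AWS.Lambda();

// Hypothetical producer: buffer the request log in a FIFO queue, then hand the
// message ID plus payload to a Lambda that does the SQL Server insert.
async function logRequest(details) {
  // FIFO queues need a MessageGroupId; with content-based deduplication enabled
  // on the queue, identical entries are not queued twice.
  const sent = await sqs.sendMessage({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/request-logs.fifo', // placeholder
    MessageGroupId: 'request-logs',
    MessageBody: JSON.stringify(details),
  }).promise();

  await lambda.invoke({
    FunctionName: 'writeLogToSqlServer', // hypothetical function that runs the INSERT
    InvocationType: 'Event',             // asynchronous, fire-and-forget
    Payload: JSON.stringify({ messageId: sent.MessageId, details }),
  }).promise();
}
```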
That said, it's important to know that messages in an SQS queue have a size limit:
The minimum message size is 1 byte (1 character). The maximum is 262,144 bytes (256 KB).
To send messages larger than 256 KB, you can use the Amazon SQS Extended Client Library for Java. This library allows you to send an Amazon SQS message that contains a reference to a message payload in Amazon S3. The maximum payload size is 2 GB.
CloudWatch Logs
(Not to be confused with the higher-level CloudWatch service itself, which is more about sending metrics.)
The idea here is that you submit event data to CloudWatch Logs.
It also has a size limit:
Event size: 256 KB (maximum). This limit cannot be changed.
Unlike SQS, CloudWatch Logs can be set up to automatically pass log data to Lambda, which can then write it to your SQL Server. The AWS docs explain how to set that up.
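As a sketch of the receiving end (the wiring itself is covered in the AWS docs): a Lambda subscribed to a log group receives a gzipped, base64-encoded payload in event.awslogs.data, which it has to decode before writing anything to SQL Server. The actual database insert is left as a comment here:

```
const zlib = require('zlib');

// Hypothetical Lambda subscribed to a CloudWatch Logs log group. The payload
// arrives gzipped and base64-encoded in event.awslogs.data.
exports.handler = async (event) => {
  const payload = JSON.parse(
    zlib.gunzipSync(Buffer.from(event.awslogs.data, 'base64')).toString('utf8')
  );

  for (const logEvent of payload.logEvents) {
    // logEvent.timestamp (ms since epoch) and logEvent.message hold the original line.
    // This is where you would run the INSERT against SQL Server (e.g. via the 'mssql' package).
    console.log(logEvent.timestamp, logEvent.message);
  }
};
```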
S3
Simply set up a bucket and have your servers write data out to it. The nice thing here is that since S3 is meant for storing large files, you really don't have to worry about the previously mentioned size limitations. S3 buckets also have events which can trigger Lambda functions. Then you can happily go on your way sending out log data.
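A hedged sketch of that flow, with a placeholder bucket name and key layout:

```
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// On each server: write a batch of log lines as one object (placeholder bucket/key).
async function flushLogs(lines) {
  await s3.putObject({
    Bucket: 'my-request-logs',
    Key: `logs/${Date.now()}.ndjson`,
    Body: lines.map((l) => JSON.stringify(l)).join('\n'),
  }).promise();
}

// In the Lambda triggered by the bucket's ObjectCreated event: fetch each new
// object and load its contents into the database.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const obj = await s3.getObject({ Bucket: record.s3.bucket.name, Key: key }).promise();
    console.log(obj.Body.toString('utf8').split('\n').length, 'log lines received');
  }
};
```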
If your log data gets big enough, you can scale out to something like AWS Batch, which gets you a cluster of containers that can be used to process log data. Finally, you also get a data backup: if your DB goes down, you've got the log data stored in S3 and can throw together a script to load everything back up. You can also use Lifecycle Policies to migrate old data to lower-cost storage, or remove it altogether.
Hi, I am currently using the Channel API for my project. My client is a signage player which receives data from an App Engine server only when a user changes the media content. App Engine sends data to the client only once or twice a day. Do you think the Channel API is overkill for this? What are some other alternatives?
Overall, I'd think not. How many clients will be connected?
Per https://cloud.google.com/appengine/docs/quotas?hl=en#Channel the free quota is 200 channel-hours/day, so if you have no more than 8 clients connected around the clock (8 × 24 = 192 channel-hours) you'll be within the free quota -- no "overkill".
Even beyond that, per https://cloud.google.com/appengine/pricing , there's "no additional charge" beyond the computational resources keeping the channel open entails -- I don't have exact numbers but I don't think those resources would be "overkill" compared with alternatives such as reasonably frequent polling by the clients.
According to the Channel API documentation (https://cloud.google.com/appengine/features/#channel), "The Channel API creates a persistent connection between an application and its users, allowing the application to send real time messages without the use of polling." IMHO, yours might not be the best use case for it.
You may want to take a look at the Task Queue API (https://cloud.google.com/appengine/features/#taskqueue) as an alternative for sending data from App Engine to the client.
I'm writing a simple XMPP chat application. The interface has been made minimal to accommodate mobile devices. The client uses strophe.js which utilizes a bi-directional persistent connection (BOSH) between the javascript application and XMPP server.
Would this persistent connection consume a lot of bandwidth? I know most mobile phone users have some sort of monthly data quota - I don't want to hog it.
Yes, if you do the math, you need to account for:
HTTP headers sent & received
Possible cookies to/from the server
BOSH typically sends a keep-alive packet (the empty body) in both directions roughly every minute, and this adds up to considerable bandwidth.
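As a rough, back-of-envelope estimate (the real numbers depend on your server's header sizes, cookies, and the BOSH wait interval): if each empty-body exchange costs on the order of 1 KB once HTTP request and response headers are counted, one exchange per minute works out to roughly 60 KB per hour, or around 1.4 MB per day of idle connection, before any actual chat traffic.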
You might want to consider using websockets instead.
http://blog.superfeedr.com/xmpp-over-websockets/
Is there an open source WebSockets (JavaScript) XMPP library?
The XEP (draft): https://datatracker.ietf.org/doc/html/draft-moffitt-xmpp-over-websocket-00
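If you do go that route, recent Strophe.js versions pick the transport from the service URL, so the client-side change is small. This is only a sketch: the endpoints and JID are placeholders, and your XMPP server has to expose a WebSocket (or BOSH) endpoint at the matching URL:

```
// BOSH: long-lived HTTP requests, each carrying full HTTP headers.
const boshConnection = new Strophe.Connection('https://chat.example.com/http-bind');

// WebSocket: one persistent TCP connection, no per-message HTTP header overhead.
const wsConnection = new Strophe.Connection('wss://chat.example.com/xmpp-websocket');

wsConnection.connect('user@example.com', 'password', (status) => {
  if (status === Strophe.Status.CONNECTED) {
    console.log('connected over WebSocket');
  }
});
```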
Can anyone think of a good way to allow the server to notify the client based upon server processing? For example, consider the following events:
A user requests a deletion of data; however, due to its long running time, we kick it off to a queue.
The client receives a "Yes we completed your transaction successfully".
The server deletes the item and now wants to update any local structures any clients may be using (I'd also like to notify the user).
I know this can be done by client-side polling. Is there an event-bus-type way to do this? Any suggestions are welcome, but please keep in mind I am using GWT with App Engine.
The standard AJAX interaction is that the client sends requests to the server and expects some sort of response back fairly quickly.
In order for the server to initiate a request to the client, you will need to use WebSockets, an experimental HTML5 feature currently only supported by Chrome.
Or, to simulate this kind of interaction, you can use Comet (long-polling), made available in GWT by the rocket-gwt project.
You want server events for GWT? Have a look at GwtEventService (they couldn't have chosen a better name): http://code.google.com/p/gwteventservice/wiki/StartPage
Of course, it uses a Comet implementation, but you can't do it any other way over HTTP: the client always initiates the communication. Request, response.
In ASP.NET, I usually log exceptions at server-side, In windows forms I can either log exceptions server-side or write to a log file on the client. Silverlight seems to fit somewhere in between.
I wanted to know what everyone else is doing to handle their Silverlight exceptions and I was curious if any best practices have emerged for this yet.
For real logging that you could store & track, you will need to do it on the server, since you can't be guaranteed anything on the client will be persisted.
I would suggest exposing a "LogEvent(..)" method on a server-side web service (maybe you already have one), which would then do the same kind of logging you do in ASP.NET.
Here's a video about basic web service calls in Silverlight if you haven't done that yet
http://silverlight.net/learn/learnvideo.aspx?video=66723
I'm not sure about any logging best practices, though; my first guess would be to follow the best practices for logging in a web service on the server and expose that to the client.
Hope this helps!
I would say that Silverlight fits much better on the ASP.NET side of the model. You have a server which serves a web page. An object (the Silverlight app) on the page pings a data service to fetch data and display it.
All data access happens on the server side, and it does not matter whether the data is used to create ASP.NET pages on the server or sent raw to the RIA for display. I log any failures in the data service on the server side (the event log works fine) and do not allow any exception to pass to WCF. When the client does not receive the expected data (it gets a null collection or something similar), it displays a generic data access error to the user. We may need to extend that soon to pass a bit more information (distinguishing between access denied / missing database / infrastructure failure / internal error / etc.), but we do not plan to pass exception error messages to the client.
As for the client side, sometimes we may get into a situation where an async call times out -- it is just another message. For general exceptions from client code (typically, bugs in our code), I just pass the exception to the browser to display in the same manner as any script exception.
Also take a look at the new Silverlight Integration Pack for Enterprise Library from Microsoft patterns & practices. It provides support for logging exceptions to isolated storage or remote services and is configurable via policies in external config or programmatically. Batch logging and automatic retry (in case of occasionally connected scenarios) are also supported.
Use the isolated storage available to Silverlight applications. You should store your log there.
Then you can develop a mechanism to send the user's log to a web service, like the Windows bug report service.
It very much depends on the type of application that you're developing.
If it's an MVC/MVP-based architecture, then your model, or most of it at least, will be on the server, and this is where most of your exceptions will be thrown, I would imagine, so you can log them there and choose whether or not to display a message to the user.
For exceptions from the client, you may want to know the details, so just send them back to the server.