I'm developing a web application that acts as a mobile email client. In this application a user can log in and provide multiple email ids for monitoring. There are two main classes in the web application: 1. MailGetter 2. MailFormatter
Behaviour of the MailGetter class:
A TimerTask is initiated that executes every 10 minutes
Obtains the email ids from the database that were provided for monitoring
Establishes a connection with the mail server for the first email id and obtains the most recently arrived email message object
Passes the message object to the MailFormatter class
Behavior of the MailFormatter class:
Parses the email message object
Makes various recursive calls if the message has many multiparts, in order to parse the parts one by one
Also downloads the attachments along with the message
Returns an XML string to the MailGetter class, which is stored as a simple text file with the following content:
Example:
<mail>
  <from>FromEmailID</from>
  <to>ToEmailID</to>
  <subject>Subject</subject>
  <body>Email Body</body>
  <attachments>
    attachment
  </attachments>
</mail>
MobileResponderServlet: a separate servlet is also coded in the web application; it reads the simple XML text file and sends its content to the mobile client.
The main drawback of this design is that the MailGetter class waits until all the methods (including the recursive calls) of the MailFormatter class finish executing. Only once control returns from MailFormatter to MailGetter does it obtain the next message object from the mail server and pass it to MailFormatter. So notifying the mobile user about new emails takes time. Even if MailFormatter is implemented as a separate thread, consider the case where there are 1000 new emails in a single inbox (for a single email id): that would spawn 1000 MailFormatter threads, which makes the process very resource intensive.
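A middle ground between fully synchronous parsing and one thread per message is a bounded worker pool. Below is a minimal stdlib-only sketch (the class name is mine, and a counter stands in for the real MailFormatter work) showing how 1000 messages queue behind a fixed number of threads instead of spawning 1000 of them:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FormatterPool {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool caps concurrency: 1000 pending messages queue up
        // behind 4 worker threads instead of becoming 1000 live threads.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger formatted = new AtomicInteger();

        for (int i = 0; i < 1000; i++) {
            // stand-in for MailFormatter.format(message)
            pool.submit(formatted::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println(formatted.get());
    }
}
```

The queue absorbs bursts (a huge inbox), while the pool size bounds memory and CPU use.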
So I'm planning to decouple MailFormatter from MailGetter. MailGetter will run as a separate web application on one server, while MailFormatter will run as a separate web application on another server. After obtaining a recent email message object, the MailGetter web application persists the message object (via message.writeTo(FileOutputStream)) in a location that is also accessible to MailFormatter. The MailFormatter class then reads (via the MimeMessage(Session, InputStream) constructor) and parses the message objects one by one, and stores the XML content in another location, which is read by MobileResponderServlet and sent to the mobile client.
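The shared-location handoff can be sketched with plain files. This is an illustrative stdlib-only example (in a real setup the bytes would come from message.writeTo(...) and be re-read with the MimeMessage(Session, InputStream) constructor); the temp-file-then-atomic-rename step matters so MailFormatter never picks up a half-written message:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class SpoolHandoff {
    public static void main(String[] args) throws IOException {
        Path spool = Files.createTempDirectory("mail-spool");

        // -- MailGetter side: persist the raw message.
        // Write under a temp name first, then rename atomically, so the
        // formatter never sees a partially written file.
        byte[] rawMessage = "Subject: hello\r\n\r\nbody".getBytes(StandardCharsets.UTF_8);
        Path tmp = spool.resolve("msg-1.eml.tmp");
        Files.write(tmp, rawMessage);
        Files.move(tmp, spool.resolve("msg-1.eml"), StandardCopyOption.ATOMIC_MOVE);

        // -- MailFormatter side: pick up completed files only.
        try (DirectoryStream<Path> ready = Files.newDirectoryStream(spool, "*.eml")) {
            for (Path p : ready) {
                String content = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
                System.out.println(p.getFileName() + " -> " + content.split("\r\n\r\n")[0]);
            }
        }
    }
}
```

Matching only `*.eml` (never `*.tmp`) is what makes the two processes safe to run independently.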
Will this process be efficient in real time? Will it cause problems, especially while sharing message objects between the MailGetter and MailFormatter web applications? Please let me know if there are any other approaches. This web application will handle at least 5000 users, each of whom may provide multiple email ids for monitoring.
I think the only practical thing you can do is put together a test scenario. The above has too many variables for a definitive answer about performance.
Put together a source mailbox that has a set of test emails, then knock together some simple mechanism to query it, dump your messages out, and use your second process to consume them. This mechanism would most likely not be your complete solution, but it should be representative.
It would be a good idea to make this repeatable and consistent, so as you implement your real solution, you can study whether it's getting slower, and/or benchmark and measure consistently.
Problem
I have an Android and iOS app that looks like a classic social network, and I need to update the UI in real time. Currently I use a classic polling system: each client polls a PHP script over HTTP every second. The PHP script hits the database every second for every client and responds, most of the time, that there is no new update. If there is a new update, the PHP script processes it and sends it back to the client app.
There are 3 problems with this approach: (1) slow user experience (a one-second delay each time) plus high battery and data usage, (2) the Apache machines are hit every second by incoming HTTP requests, (3) the database machine is hit every second by the Apache machines (asking whether there are new stored updates in the main database).
I feel this system could be substantially improved. For problem (1), I know a TCP connection can be kept open to the app, but that still leaves problem (3), because the thread behind the socket still polls the database each second to see whether there are new stored updates for its member ID.
Solution?
I thought of a system that avoids any activity (client, Apache, and database) when there are no new updates. There would be N Apache servers on N machines behind a load balancer exposed to the Internet. Behind these Apache servers, connected only to the local network, sit one "central" database and one "update" database dedicated to the update system. The "update" database would store 2 tables:
1 table for the mapping between user tokens (and their member IDs) and the thread ID and name of the Apache machine currently holding the thread. One user ID may have several connection tokens, but one connection token is associated with exactly one (PID - machine name) pair. Each time a user connects to the app, a TCP connection held by one thread (on one Apache machine) is created, and the [thread ID - machine name] pair is stored in that table.
1 table to store the updates themselves. They contain all the information needed to get up-to-date data (either in raw primitive form, like a string or an int, or in "reference" form, telling the recipient TCP threads they need to compute some parameters "at sending time" for more complex data structures).
The system would be the following :
(1) A user wants to send a message to another user. The app client of the sender sends an HTTP request to the app API endpoint; the load balancer forwards the request to one of the apache machines.
(2) The apache server requests the main database to insert the "user message" row.
(3) The apache server requests the "update" database to know if the recipient has any currently connected device.
(4) If there is at least one connected device, insert an "update" row in the "update" database with all the information needed, and wake up all threads associated with the recipient user ID (maybe using C signals?).
(5) All the threads associated with the recipient user ID wake up, look in the "update" database for new updates associated with their user ID, process the parameters (especially any reference params to be computed), and send them back to the recipient devices over TCP.
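Within a single process, the "wake up instead of poll" idea behind steps (4)-(5) can be illustrated with a blocking queue; this is a stand-in for the "update" table plus the wake-up signal (across machines you would need a real network notification instead):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WakeOnUpdate {
    public static void main(String[] args) throws InterruptedException {
        // The holder thread blocks on take() and uses zero CPU until an
        // update lands -- no once-per-second database polling.
        BlockingQueue<String> updates = new LinkedBlockingQueue<>();

        Thread holder = new Thread(() -> {
            try {
                String update = updates.take(); // sleeps until woken
                System.out.println("pushed to device: " + update);
            } catch (InterruptedException ignored) { }
        });
        holder.start();

        updates.put("user message #42"); // step (4): insert + wake
        holder.join();
    }
}
```

The important property is that an idle connection costs nothing beyond a parked thread.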
So my final question is: is such a system feasible and reliable, and if so, do you think it can be optimal in terms of database and Apache machine performance?
I'm more of a front-end programmer and I'm not used to implementing complex server architectures, so I wanted some opinions before diving into the code, especially in case I missed something in my approach (is storing PIDs reliable? Is it possible for one machine to wake up a thread on another machine over the local network? ...)
PS: I already tried Firebase Cloud Messaging, but the problem is that it only allows a one-dimensional array of update params to be sent. When dealing with a complex data structure (like a "user message"), when I receive a signal from FCM in my client app I still need to make an extra HTTP call to my server to retrieve the new "user message" JSON payload. So it's good for my Apache and database machines (they aren't hit when there are no new updates), but bad for the client app, which has to send additional HTTP requests. Once again, tell me if I missed something here :)
Thanks for reading
Let's say I run a recorded script for the 'New User Registration' function of a web site to evaluate the response time of the entire scenario. When I run the recorded script from JMeter, is a new user record created in the application database for each registration?
Yes, if you record the registration and correlate it (meaning you create a valid unique name for every request), you will create a real user in your environment.
JMeter simulates a real scenario, which affects your environment.
That is part of the reason JMeter is usually executed in an environment other than production (such as staging).
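For the correlation step, the key property is that each registration request carries a name that cannot collide with earlier runs. A sketch of the idea (JMeter itself would typically do this with the __UUID() function or a JSR223 element; the class name here is mine):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class UniqueUsers {
    public static void main(String[] args) {
        // Each virtual user / iteration gets a name that cannot collide,
        // so repeated test runs never fail on duplicate-username checks.
        Set<String> seen = new HashSet<>();
        for (int i = 0; i < 5; i++) {
            seen.add("user_" + UUID.randomUUID());
        }
        System.out.println(seen.size()); // all 5 names are distinct
    }
}
```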
A well-behaved JMeter script must represent a real user using a real browser as closely as possible.
Browsers execute HTTP requests and render the response
JMeter executes the same HTTP requests but doesn't render the response; instead it records performance metrics like response time, connect time, latency, throughput, etc.
HTTP is a stateless protocol, therefore given you execute the same request you will get the same response. So if there are no mistakes in your script, it should either create a new user or fail with a non-unique-username error.
Yes, if your script accurately represents the full set of data flows associated with the business process, "New User Registration," then the end state of that process should be identical to that of the user behavior so modeled.
A record will be created in the database. If not, then your script is not accurately modeling user behavior.
The current project is in Node.js with the Express framework. We have an application with client/prospect information; authenticated users are allowed to modify the database and initiate long-running processes on the server. As an example, printing a 30-page document could be one such process.
The application user needs two main things:
A response when the process begins.
A response (notification) when the process ends.
We currently handle need #1 in standard Express fashion by ensuring the process starts, followed by res.json({msg: 'Process Started'}); back to the Angular front end. Need #2 is currently handled with an email to the user who initiated the process, containing a process status report.
I would like to improve how we handle need #2. Specifically, I would like to send a JSON string to the user to display in the interface.
Questions:
Is it possible to do this over HTTP?
Does this functionality exist within Express or a well-known middleware?
Assuming 1 & 2 are false, my solution is to run a TCP socket server to maintain a socket with the required users. When a process ends, a message is sent to the user with a status update. Can anyone comment on the issues my solution presents?
Yes to both 1 and 2. Essentially, what you seek to achieve here is a push from the server to the client. The need for this is pretty ubiquitous in web applications, and there have been various solutions for it over the years under various fancy names. You might like to read up on Ajax, Comet, long polling, and WebSockets.
For your node application, take a look at socket.io. In a nutshell, this framework abstracts the complexities of Ajax, WebSockets, etc. into a single API. Put another way, socket.io gives you bi-directional communication between your node application and your front end.
Is Apex only permitted on “native” applications that are hosted on force.com?
Or is Apex also available for external applications to hit the “Open APIs” such as REST API and Bulk API?
I think part of my confusion lies in how the term "REST API" is used in various documents. In the rest of the software world, REST usually means an HTTP-based protocol to exchange data across different domains (with certain formats, etc.). However, I think "REST API" in Salesforce might SOMETIMES refer to an optional means for native apps to retrieve Salesforce data from within force.com. Is that correct?
Not sure I understand your question...
Apex can be used "internally" in:
database triggers,
classes
Visualforce controllers that follow the MVC pattern,
logic that parses incoming emails and, for example, makes Case or Lead records out of them,
asynchronous jobs that can be scheduled to recalculate some important stuff every night
and you can have utility classes for code reuse across all of these.
A "kind of internal" use would be the "Execute Anonymous" mechanism that lets you fire one-off code snippets against the environment. Useful for prototyping new classes, data fixes, etc. You can do it, for example, in the Eclipse IDE or the Developer Console (upper right corner, next to your name).
And last but not least - "external" usage.
Apex code can be exposed as a web service and called by PHP, .NET, Java, and even JavaScript applications. It's a good choice when:
you want to reuse the same piece of logic, for example on your own Visualforce page as well as in some mobile application that passes a couple of strings around or a simple JSON object
it beats having to reimplement the logic in every new app and maintain it afterwards
imagine inserting an Account and a Contact in one go - your mobile device would have to implement some transaction control and delete the Account if the Contact failed to load. Messy. And it would waste more API calls (insert acc, insert con, oops, delete acc). With a method exposed as a web service, you can accept both parameters in your Apex code, do your magic, and if it fails - it's all in one transaction, so SF will roll it back for you.
There are 2 main methods:
The SOAP API primarily uses global methods marked with the webservice keyword. The easiest way for other applications to start calling these is to extract the so-called "enterprise WSDL" file from SF and "consume" it. It's a giant XML file that can be parsed in, say, your .NET app to generate proxy classes you can code against in a familiar way. These generated classes construct the XML message for you, send it, and process the response (throwing your own exceptions if SF has sent an error message), and so on.
Very simple example:
global class MyWebService {
    webService static Id makeContact(String lastName, Account a) {
        // Use the lastName argument rather than a hard-coded value
        Contact c = new Contact(LastName = lastName, AccountId = a.Id);
        insert c;
        return c.Id;
    }
}
The REST API allows you to do similar things, but you need to use the correct HTTP verbs ("POST" for inserts, "PUT" for upserts, "PATCH" for updates, "DELETE" for deletes, and so on).
You can read more about them in the REST API guide: http://www.salesforce.com/us/developer/docs/apexcode/index_Left.htm#CSHID=apex_rest_methods.htm|StartTopic=Content%2Fapex_rest_methods.htm|SkinName=webhelp
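For comparison with the SOAP example above, a hedged sketch of an Apex REST endpoint (the urlMapping, class, and method names here are made up for illustration; @RestResource and @HttpPost are the REST counterparts of the webservice keyword):

```apex
// Hypothetical endpoint; illustrates the @RestResource pattern only.
@RestResource(urlMapping='/contacts/*')
global with sharing class ContactRestService {
    @HttpPost
    global static Id makeContact(String lastName, String accountId) {
        Contact c = new Contact(LastName = lastName, AccountId = accountId);
        insert c; // same one-transaction rollback behavior as the SOAP version
        return c.Id;
    }
}
```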
I have one very weird question.
There are 2 Silverlight clients:
1. Admin
2. User
Now, I want a scenario wherein the Admin Silverlight app can initiate a function call in the User Silverlight app.
I'm pretty much a newbie with SL, so I wonder whether that would be possible.
I'd appreciate any help.
Thanks
I suppose the applications are not in the same browser/machine, and since you describe the usage pattern as admin and user, I take it there are probably more users than admins.
You might want to take a look at duplex bindings for WCF services - this is a web service binding that allows the server to push notifications to clients. When all clients establish such a channel, you can implement hub-and-spoke communication between clients.
This blog post gives a good recipe for getting started:
http://silverlightforbusiness.net/2009/06/23/pushing-data-from-the-server-to-silverlight-3-using-a-duplex-wcf-service/
If they are both in the same frame/browser, you could call JavaScript from the first using the HtmlPage API, which could interact with the second.
So:
Silverlight control -> injects JS into the HtmlPage -> JS interacts with Silverlight control 2 (assuming this is possible; please correct me if I'm wrong) -> Silverlight control responds.
If they are in separate windows or running "out of browser", I would expect it wouldn't work.
If the 2 instances are separated (i.e., the admin is on one machine and the user is on another), there's no direct way to do it. However, you can rig it up with a publisher/subscriber-style system.
Assumption: You have some sort of shared data store between the two, maybe a database or something.
Idea: the admin client writes a request to this shared data store: an entry in a table, a new file on a network share, or something similar. The user client app regularly scans this table/share for new entries, say every 0.5 seconds or so. When it sees the entry, it executes the requested operation, storing any return values back in the shared store. When the admin sees the return value, it knows the operation has been executed successfully.
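The flow can be sketched in miniature with two in-memory queues standing in for the shared data store (in a real deployment these would be database tables or files on a network share; class names and the message strings are mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class SharedStorePubSub {
    public static void main(String[] args) throws Exception {
        // Stand-ins for the shared store: admin writes requests, user polls.
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> results = new LinkedBlockingQueue<>();

        // User client: scan for new entries every 500 ms.
        ExecutorService user = Executors.newSingleThreadExecutor();
        user.submit(() -> {
            while (true) {
                String req = requests.poll(500, TimeUnit.MILLISECONDS);
                if (req != null) {
                    // execute the requested operation, store the return value
                    results.put("done:" + req);
                    return null;
                }
            }
        });

        // Admin client: write a request, wait for the return value.
        requests.put("refreshCache");
        System.out.println(results.poll(5, TimeUnit.SECONDS));
        user.shutdownNow();
    }
}
```

The polling interval trades latency against load on the shared store, which is the same trade-off the answer describes.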
There are a couple of options that I can think of.
You could implement some sort of remote procedure call via web services, whereby one Silverlight app posts a request to call the method and the other Silverlight app regularly checks for method call requests.
If hosted on the same HTML page in a browser, you could use javascript to allow the two controls to interact.
However, direct communication between two Silverlight instances isn't supported, and while the suggestions may help to achieve something close to what you want, they don't provide a complete solution that will work in all scenarios.