I have Exchange 2010 running and I need to track all emails. I have written a program that gets emails from Exchange and does what I need, but the problem is that if a user deletes an email or moves it to the archive, I can't access it anymore.
Is there a way to make Exchange write its message tracking log to a database? Or at least a way to launch a program when an email is received?
It's good that you were able to write an application to do this, but you shouldn't have had to. Exchange already has Mailbox Audit Logging features, and that is what you should use; see Understanding Mailbox Audit Logging. You can also configure Journaling, which automatically records all incoming and outgoing mail. This is a complex topic, and rather than reinvent the wheel, I recommend reading about what is already available.
Related
We want to build an application (C#/.NET) for the following scenario:
An internal "alert system": users should be informed about IT system outages, planned downtime for services, and so on.
Only one-way: a central service pushes messages to users.
We also need the ability to enable/disable a message. For example:
The message "there are problems with the mail system" should be removed from every computer once the problem is solved.
We want to schedule messages for planned maintenance.
There are about 1000 Windows clients; we also want to "group" these clients so we can control which clients receive a given message.
Our first thought was to write a small application that queries a central database every X seconds for new and existing messages.
Maybe somebody has already worked on a similar project?
Is a client that polls the database the way to go? Or would it be better to use another technology, like a WCF service?
Thanks for your help
Marc
Sounds like you need an enhanced version of push notifications.
I'd suggest using push for all the messaging; it's delivered faster and I find it more reliable. Simply make the client connect to a message server and keep the connection open. Whenever a message is supposed to be displayed on the client, have the server push it through the connection (that's where the name comes from).
To group and manage the clients you could use a database; it's probably the best way to go. But the server needs to handle all the open connections, and a database can only store data, not live objects representing a connection, so the server software needs to manage those separately.
My suggestion: whenever the server receives an incoming client connection, it accepts it and queries the client for an ID number, which is also used to find that client's information in the database.
It then adds an entry to a dictionary, using that ID as the key and the connection as the value.
This way, when it's time to send a message to a particular group, you can do it in one of two ways:
1) Load the IDs that belong to that group from the database, then send the message to each of them. You will have to check whether each ID exists among the dictionary's keys, because a given client may not be connected yet.
2) Iterate over the dictionary's keys, check which group each ID belongs to, and if it's the desired group, send the message.
If you're dealing with a large number of clients, I suggest method 1; a sketch follows below.
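To illustrate method 1, here's a minimal C# sketch (not production code): ClientRegistry, Register, and SendToGroup are names I'm assuming for illustration, and error handling for dropped connections is omitted.

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Text;

    // Illustrative registry: one shared dictionary mapping client ID -> open connection.
    public static class ClientRegistry
    {
        private static readonly ConcurrentDictionary<int, TcpClient> Connected =
            new ConcurrentDictionary<int, TcpClient>();

        // Called after the server accepts a connection and reads the client's ID.
        public static void Register(int clientId, TcpClient connection) =>
            Connected[clientId] = connection;

        // Method 1: the caller loads the group's IDs from the database, and we
        // send only to those clients that are currently connected.
        public static void SendToGroup(IEnumerable<int> groupMemberIds, string message)
        {
            byte[] payload = Encoding.UTF8.GetBytes(message);
            foreach (int id in groupMemberIds)
            {
                if (Connected.TryGetValue(id, out TcpClient client) && client.Connected)
                {
                    client.GetStream().Write(payload, 0, payload.Length);
                }
            }
        }
    }

The ConcurrentDictionary keeps lookups safe when connections are added and removed from multiple threads; you would also want to remove an entry when its client disconnects.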
To disable/remove a message from the client's computer, simply have the server send a special command message that the client software interprets as "remove that message". To make this possible, every non-command message must have a unique ID, so that a command message can tell the client software which message it applies to.
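For example, the wire format could be as simple as this (a rough sketch; AlertMessage and MessageKind are hypothetical names, not an existing API):

    using System;

    // Hypothetical wire format: every alert carries a unique ID so that a
    // later command can refer back to it.
    public enum MessageKind { Alert, RemoveAlert }

    public class AlertMessage
    {
        public Guid Id { get; set; }        // unique per message
        public MessageKind Kind { get; set; }
        public string Text { get; set; }    // for Alert: the text to display
        public Guid TargetId { get; set; }  // for RemoveAlert: the alert to remove
    }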
Your project sounds very interesting.
I would be glad to help you by writing a library you could use, or just help you figure it out on your own if you prefer. (Free of charge, just for the experience).
Are there any "design patterns" for processing important financial operations so that a local database cannot become out of sync with the remote service because of errors?
Example:
A financial transaction record is created in a local db, then a request is sent to a remote payment API endpoint to charge a customer. Pseudocode:
record = TransactionRecord.create(timestamp=DateTime.now, amount=billed_amount, status=Processing)
response = Request.post(url=remote_url, data=record.post_data)
if response.ok:
    record.mark_as_ok()
else:
    record.mark_failed()
Now, even if I handle the errors that can be returned by the remote payment service, a lot of other bad things can still happen: the DB server can go down, the network connection can drop, etc., at arbitrary points in time.
In the above code the DB server can become inaccessible right after the transaction record is created, so it might not be possible to mark that record as ok, even if the financial transaction itself has been performed successfully by the remote service.
In other words: the customer is charged but we have no record of it.
This can be worked around in a number of ways: by periodically syncing with the remote service, or by investigating TransactionRecords that are still in the Processing state but are older than, say, 10 minutes or an hour.
But my question is whether there are well-established patterns for handling such situations (where money is involved, so everything should work properly "all the time")?
PS. I'm not sure what tags I should use for this question; feel free to re-tag it.
I don't think there is any 'design pattern' that addresses cases such as the database connection or the network connection going down, as happens in your scenario. Either of those is a major fault event and would most likely require manual intervention.
There is not much coding you can do to address them other than being defensive: do proper error checking, send proper notifications to support, and automatically disable functionality that cannot work (if the application detects that the payment service is down, the 'Submit payment' button should be disabled).
You will be able to cut down significantly on support if you do proper error handling and state management. In your case, the transaction record would have to change its state along the lines of Pending -> Submitted -> Processed or Rejected.
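A sketch of what that state handling might look like (the type and member names here are my own illustration, not an established library):

    using System;

    // Illustrative state model for a payment submission.
    public enum TransactionState { Pending, Submitted, Processed, Rejected }

    public class TransactionRecord
    {
        public TransactionState State { get; private set; } = TransactionState.Pending;

        // Persist the Submitted state *before* calling the remote service, so a
        // record stuck in Submitted can later be reconciled against the provider.
        public void MarkSubmitted()
        {
            if (State != TransactionState.Pending)
                throw new InvalidOperationException($"Cannot submit from {State}.");
            State = TransactionState.Submitted;
            // SaveToDatabase(); // assumed persistence helper
        }

        public void MarkOutcome(bool ok)
        {
            if (State != TransactionState.Submitted)
                throw new InvalidOperationException($"Unexpected state {State}.");
            State = ok ? TransactionState.Processed : TransactionState.Rejected;
            // SaveToDatabase();
        }
    }

The key point is writing the Submitted state to the database before the remote call, so anything left in Submitted is a known candidate for reconciliation.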
Also, not every service provides functionality for syncing up.
One of our problems is that our outbound email server sucks sometimes. Users will trigger an email in our application, and the application can take on the order of 30 seconds to actually send it. Let's make it even worse and admit that we're not even doing this on a background thread, so the user is completely blocked during this time.

SQL Server Database Mail has been proposed as a solution to this problem, since it basically implements a message queue and is physically closer and far more responsive than our third-party email host. It's also admittedly really easy to implement for us, since it's just replacing one call to SmtpClient.Send with the execution of a stored procedure. Most of our application email contains PDFs, XLSs, and so forth, and I've seen the size of these attachments reach as high as 20MB.
Using Database Mail to handle all of our application email smells bad to me, but I'm having a hard time talking anyone out of it given the extremely low cost of implementation. Our production database server is also way overpowered, so I can't argue that it couldn't handle the load, either. Any ideas or safer alternatives?
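For reference, the proposed swap would look roughly like this (a sketch; Database Mail's stored procedure is msdb.dbo.sp_send_dbmail, while the profile name, connection string, and attachment path below are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    // Sketch: enqueue an email via SQL Server Database Mail instead of SmtpClient.Send.
    // sp_send_dbmail queues the message and returns immediately.
    public static void QueueMail(string connectionString, string to, string subject, string body)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("msdb.dbo.sp_send_dbmail", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@profile_name", "AppMailProfile");
            cmd.Parameters.AddWithValue("@recipients", to);
            cmd.Parameters.AddWithValue("@subject", subject);
            cmd.Parameters.AddWithValue("@body", body);
            // Attachments must be paths visible to the SQL Server service account:
            // cmd.Parameters.AddWithValue("@file_attachments", @"\\share\report.pdf");
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }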
All you have to do is run it through an SMTP server. If you're planning on sending large amounts of mail, then you'll have to not only load balance the servers (and the DNS servers, if you're planning on sending out 100K+ mails at a time) but also make sure your outbound email servers have the proper A (and reverse PTR) records registered in DNS to prevent bounce-backs.
It's a cheap solution (minus the load balancer costs).
Yes, dual-home the server between your internal LAN and the internet, and make sure it's an outbound-only server. Start out with one SMTP server, and if you hit bottlenecks right off the bat, check whether they're memory, disk, network, or load related. If it's load related, it may be time to look at load balancing. If it's memory related, throw more memory at it. If it's disk related, throw a RAID 0+1 array at it. If it's network related, use a bigger pipe.
What's the best strategy to keep all the clients of a database server synchronized?
The scenario involves a database server and a dynamic number of clients that connect to it, viewing and modifying the data.
I need real-time synchronization of the data across all the clients - if data is added, deleted, or updated, I want all the clients to see the changes in real-time without putting too much strain on the database engine by continuous polling for changes in tables with a couple of million rows.
Now I am using a Firebird database server, but I'm willing to adopt the best technology for the job, so I want to know whether there is any existing framework for this kind of scenario, what database engine it uses, and what it involves.
Firebird has a feature called EVENT that you may be able to use to notify clients of changes to the database. The idea is that when data in a table is changed, a trigger posts an event. Firebird takes care of notifying all clients who have registered an interest in the event by name. Once notified, each client is responsible for refreshing its own data by querying the database.
The client can't get info from the event about the new or old values. This is by design, because there's no way to resolve this with transaction isolation. Nor can your client register for events using wildcards. So you have to design your server-to-client notification pretty broadly, and let the client update to see what exactly changed.
See http://www.firebirdsql.org/doc/whitepapers/events_paper.pdf
You don't mention what client platform or language you're using, so I can't advise on the specific API you would use. I suggest you google for instance "firebird event java" or "firebird event php" or similar, based on the language you're using.
Since you say in a comment that you're using WPF, here's a link to a code sample of some .NET application code registering for notification of an event:
http://www.firebirdsql.org/index.php?op=devel&sub=netprovider&id=examples#3
Re your comment: yes, the Firebird event mechanism is limited in the information it can carry. This is necessary because any information it might carry could be canceled or rolled back; for instance, a trigger may post an event, but then the operation that fired the trigger violates a constraint, canceling the operation but not the event. So events can only be a kind of "hint" that something of interest may have happened. The other clients need to refresh their data at that point, but they aren't told what to look for. This is at least better than polling.
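To make the flow concrete, here is a rough sketch: the server-side trigger posts the event with POST_EVENT (shown as a comment), and a .NET client registers for it through the FbRemoteEvent class in the FirebirdSql.Data.FirebirdClient provider. The event name, connection string, and refresh helper are placeholders, and member names can differ between provider versions, so treat this as a sketch rather than exact API.

    using System;
    using FirebirdSql.Data.FirebirdClient;

    // Server side (PSQL), shown here as a comment:
    //   CREATE TRIGGER trg_orders_changed FOR orders
    //   AFTER INSERT OR UPDATE OR DELETE
    //   AS
    //   BEGIN
    //     POST_EVENT 'orders_changed';
    //   END
    //
    // Client side: register for the event and re-query when it fires.
    public static class OrdersWatcher
    {
        public static void Watch(string connectionString)
        {
            var remoteEvent = new FbRemoteEvent(connectionString);
            remoteEvent.RemoteEventCounts += (sender, e) =>
            {
                // The event carries no row data by design; just refresh.
                Console.WriteLine($"Event '{e.Name}' fired {e.Counts} time(s); re-querying...");
                // RefreshOrdersFromDatabase(); // assumed application helper
            };
            remoteEvent.QueueEvents("orders_changed");
        }
    }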
So you're basically describing a publish/subscribe mechanism -- a message queue. I'm not sure I'd use an RDBMS to implement a message queue. It can be done, but you're basically reinventing the wheel.
Here are a few message queue products that are well-regarded:
Microsoft MSMQ (seems to be part of Windows Professional and Server editions)
RabbitMQ (free open-source)
Apache ActiveMQ (free open-source)
IBM WebSphere MQ (probably overkill in your case)
This means that when one client modifies data in a way that others may need to know about, that client also has to post a message to the message queue. When consumer clients see the message they're interested in, they know to refresh their copy of some data.
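As an illustration (my own sketch, not taken from any of those products' documentation), here is roughly what that looks like with the RabbitMQ .NET client (6.x-style API), using a fanout exchange so every connected client sees each change notification; the exchange name and payload format are placeholders.

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    public static class ChangeBus
    {
        // Publisher: a client that modified data announces what changed.
        public static void PublishChange(IModel channel, string whatChanged)
        {
            channel.ExchangeDeclare("data-changes", ExchangeType.Fanout);
            byte[] body = Encoding.UTF8.GetBytes(whatChanged);
            channel.BasicPublish(exchange: "data-changes", routingKey: "",
                                 basicProperties: null, body: body);
        }

        // Consumer: each client gets its own queue bound to the exchange and
        // refreshes its local copy when a notification arrives.
        public static void Subscribe(IModel channel)
        {
            channel.ExchangeDeclare("data-changes", ExchangeType.Fanout);
            string queueName = channel.QueueDeclare().QueueName; // exclusive, auto-delete
            channel.QueueBind(queue: queueName, exchange: "data-changes", routingKey: "");

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                string whatChanged = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine($"Change notification: {whatChanged}; refreshing...");
            };
            channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);
        }
    }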
SQL Server 2005 and higher also support notification-based cache expiry for data sources (Query Notifications).
I'm building a mobile application in VB.NET (Compact Framework), and I'm wondering about the best way to approach potential offline interactions on the device. Basically, the devices have cellular and 802.11, but may still be offline (where there's poor reception, etc.). A driver will scan boxes as they leave his truck, and I want to update each box's new location: immediately if there's a network signal, or queued and handled later if the device is offline. It made me think, though, about how to handle offline-ness in general.
Do I cache as much data to the device as I can, so that I can use it when offline? Essentially, each device would have a copy of the (relevant) production data on it. Or is it better to disable certain functionality when offline, so as to avoid the headache of synchronization later? I know this is a pretty specific question that depends on my app, but I'm curious to see whether others have taken this route.
Do I build the application itself to act as though it's always offline, submitting everything to a local queue of sorts that's owned by a local class (essentially abstracting away the online/offline distinction), and then have that class submit things to the server as it can? What about data lookups: how can those be handled in a "semi-live" fashion?
Or should I have the application attempt to submit requests to the server directly, in real time, and handle it if the request itself fails? I can see the potential problem of making the user wait for a timeout, but is this the most reliable way to do it?
I'm not looking for a specific solution, but really just stories of how developers accomplish this with the smoothest user experience possible, with a link to a how-to or a here's-what-to-consider or something like that. Thanks for your pointers on this!
We can't give you a definitive answer because there is no "right" answer that fits all usage scenarios. For example if you're using SQL Server on the back end and SQL CE locally, you could always set up merge replication and have the data engine handle all of this for you. That's pretty clean. Using the offline application block might solve it. Using store and forward might be an option.
You could store locally and then roll your own synchronization, with a direct connection, web service, or WCF service used when a network is detected. You could use MSMQ for delivery.
What you have to think about is not what the "right" way is, but how your implementation will affect application usability. If you disable features due to lack of connectivity, is the app still usable? If you have stale data, is that a problem? Maybe some critical data needs to be transferred when you have GSM/GPRS (which typically isn't free) and more would be done when you have 802.11. Maybe you can run all day with lookup tables pulled down in the morning and upload only transactions, with the device tracking what changes it's made.
Basically it really depends on how it's used, the nature of the data, the importance of data transactions between fielded devices, the effect of data latency, and probably other factors I can't think of offhand.
So the first step is to determine how the app needs to be used, then determine the infrastructure and architecture to provide the connectivity and data access required.
I haven't used it myself, but have you looked into the "store and forward" capabilities of the CF? It may suit your needs. I believe it uses an Exchange mailbox as a message queue to send SOAP packets to and from the device.
The best way to approach this is to always work offline, then use message queues to handle sending changes to and from the device. When the driver marks something as delivered, for example, update the item as delivered in your local store and also place a message in an outgoing queue to tell the server it's been delivered. When the connection is up, send any queued items back to the server and get any messages that have been queued up from the server.
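A minimal sketch of that outbox idea (in C# for brevity; the type and member names are illustrative, and the local persistence and transport are stubbed out):

    using System;
    using System.Collections.Generic;

    // Always-offline pattern: every change is applied locally and recorded in
    // an outbox; a flush pass drains the outbox when a connection is available.
    public class DeliveryOutbox
    {
        private readonly Queue<string> _outgoing = new Queue<string>();

        // Called when the driver scans a box: update the local store first,
        // then queue the notification for the server.
        public void MarkDelivered(string boxId)
        {
            // UpdateLocalStore(boxId, "Delivered"); // assumed local persistence
            _outgoing.Enqueue($"DELIVERED {boxId} {DateTime.UtcNow:o}");
        }

        // Called whenever connectivity is detected (e.g., on a timer).
        public void Flush(Func<string, bool> trySendToServer)
        {
            while (_outgoing.Count > 0)
            {
                string message = _outgoing.Peek();
                if (!trySendToServer(message))
                    break;              // still offline; retry on the next pass
                _outgoing.Dequeue();    // remove only after a confirmed send
            }
        }
    }

Dequeuing only after a confirmed send means a crash or dropped connection can at worst cause a duplicate notification, never a lost one, so the server-side handler should be idempotent.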