Best database to use for business routing workflows

I'm looking to create a database for a CAPA (corrective action/preventive action form). Basically, one person creates the initial report. This is forwarded on to the CAPA manager, who fills out another field on the form. Then it goes to the investigator, who fills out some information. Then it goes back to the CAPA manager and so forth.
My instinct is to create an Access frontend with a SQL Server backend. I'd need to kick off an e-mail at each step of the process. I'd also need to send reminder e-mails if it gets stale for X days at any of the stages.
I know I could really accomplish this with most databases out there, but my main question is, is there any database or third party product out there that would make the process of setting up that routing workflow easier? I have other projects where the routing would be similar to the one above or an approval workflow.

Based on your description, it is absolutely possible to implement this project with the proposed technologies.
If your company has already adopted SharePoint 2010, it can also be a good option for implementing collaborative workflows.
Also, in my experience MS Access works best when the connection to the server is stable, e.g. on an intranet.
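To make the reminder requirement concrete, here is a minimal C# sketch assuming the SQL Server back-end you propose. The table and column names (CapaReport, Status, OwnerEmail, LastUpdated) and the SMTP host are illustrative assumptions, not a prescribed design; something like this could run from a Windows scheduled task or a SQL Server Agent job:

    // Minimal sketch: nag the owner of any CAPA stuck in one status too long.
    using System;
    using System.Data.SqlClient;
    using System.Net.Mail;

    class StaleCapaReminder
    {
        const int StaleDays = 5; // the "X days" threshold from the requirement

        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=Capa;Integrated Security=true"))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    @"SELECT Id, Status, OwnerEmail FROM CapaReport
                      WHERE Status <> 'Closed'
                        AND LastUpdated < DATEADD(day, -@staleDays, GETDATE())",
                    conn);
                cmd.Parameters.AddWithValue("@staleDays", StaleDays);

                var smtp = new SmtpClient("mail.example.local"); // assumed SMTP host
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // One reminder mail per stale CAPA, sent to its current owner.
                        smtp.Send(new MailMessage(
                            "capa-noreply@example.local",
                            reader.GetString(2),
                            "CAPA " + reader.GetInt32(0) + " is waiting on you",
                            "This CAPA has been in status '" + reader.GetString(1) +
                            "' for more than " + StaleDays + " days."));
                    }
                }
            }
        }
    }

The per-step notification emails could be sent the same way, from whatever code performs each status change.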

Designing databases and applications for hosted / cloud solutions

Are there any resources available that can guide someone on how to 'think' about the various components of a hosted / cloud solution before going ahead and starting to make a hosted application? If that made no sense, what I mean to ask is are there any guidance books/websites on what things need to be considered when making a cloud application?
I am attempting to make a hosted CRM-style software application that will serve many hundreds of customers. The application is powered by a SQL server database with many tables and a ColdFusion, HTML5, CSS, Javascript front-end. If I was installing this application and its components at each client site, then each installation is unique to that customer. But somehow I have to replicate this uniqueness in the cloud which is baffling me.
Only two things have come to mind so far:
The need for a unique database per customer in SQL server
The need to change DB connection strings per customer in the web application
My thought process has come to a block when I am trying to envisage how to design the application to serve so many different customers. Even though the application that all customers use is the same (same DB tables, same front-end), the data that they store and retrieve will be specific to them. So I was thinking that surely each customer needs a separate database created for them? Is it feasible to create a replica database for each customer? If I need to update some tables or add a new table, how would I do this for hundreds of different databases?
From the front-end I guess each unique customer log-in would change DB connection strings so that they can only access their database. Other than this I can't think of anything else that needs to change per customer basis.
When a new customer wants to sign up, it needs to be clear to me what I need to create for them to have access to the application. I guess this is ultimately what I need to think of but I'm stuck.
If anyone can suggest some things to think of or if there is a book or website on this kind of thing that someone could point me to I'd really be very thankful.
EDIT:
I was looking at an article about Salesforce.com and it says
"In order to ensure privacy of data for each user and give an effect of each having their own database, the data from different users are securely isolated from one another."
Anyone know how this is achieved or how it may be done?
Found some great information here. It is called multi-tenant database design and seems to be a common topic. Once I get the database designed then the application can sit nicely on top.
https://dba.stackexchange.com/questions/1043/what-problems-will-i-get-creating-a-database-per-customer
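To illustrate the per-customer-database option discussed above, a minimal C# sketch of a master catalog lookup; all names (MasterCatalog, Tenant, AccountName, DbName) are assumptions for illustration only:

    using System.Data.SqlClient;

    static class TenantCatalog
    {
        public static string GetTenantConnectionString(string accountName)
        {
            // A small master catalog maps each account to its own database.
            using (var master = new SqlConnection(
                "Server=dbhost;Database=MasterCatalog;Integrated Security=true"))
            {
                master.Open();
                var cmd = new SqlCommand(
                    "SELECT DbName FROM Tenant WHERE AccountName = @account",
                    master);
                cmd.Parameters.AddWithValue("@account", accountName);
                var dbName = (string)cmd.ExecuteScalar();

                // Every tenant shares one schema; only the database name differs,
                // so a schema upgrade means running one script against N databases.
                return "Server=dbhost;Database=" + dbName +
                       ";Integrated Security=true";
            }
        }
    }

The shared-schema alternative from the linked page (one set of tables with a TenantId column on every row) avoids running upgrades against hundreds of databases, but it puts the isolation burden on every single query.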

Listening for List updates using the SharePoint Client Object Model

I am looking for a decently efficient way to listen for List changes on a SharePoint site using only the Client Object Model. I understand how backwards this idea is, but I am trying to keep from having to push any libraries to the SharePoint servers on install. Everything is supposed to be drop and go on a local machine.
I've thought about a class that just loops a timer and keeps querying the ClientContext from the last date of successful query on, but that seems horribly inefficient.
I know this is a client object model, but is there any way to get notifications from the server on changes from the client only?
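For reference, the timer-loop polling the question describes could look roughly like this in C# with the Client Object Model; the list and field names are illustrative:

    using System;
    using Microsoft.SharePoint.Client;

    class ListPoller
    {
        static DateTime lastSync = DateTime.UtcNow;

        static void Poll(string siteUrl, string listTitle)
        {
            using (var ctx = new ClientContext(siteUrl))
            {
                List list = ctx.Web.Lists.GetByTitle(listTitle);
                var query = new CamlQuery
                {
                    // Ask only for items modified since the last successful check.
                    ViewXml =
                        "<View><Query><Where><Geq>" +
                        "<FieldRef Name='Modified'/>" +
                        "<Value Type='DateTime' IncludeTimeValue='TRUE'>" +
                        lastSync.ToString("yyyy-MM-ddTHH:mm:ssZ") +
                        "</Value></Geq></Where></Query></View>"
                };
                ListItemCollection changed = list.GetItems(query);
                ctx.Load(changed);
                ctx.ExecuteQuery(); // one full round trip per poll

                lastSync = DateTime.UtcNow;
                foreach (ListItem item in changed)
                    Console.WriteLine(item["Title"]); // react to the change here
            }
        }
    }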
I am afraid this is not possible using the client object model. If you would need to poll so often that the user experience suffers too much from the slow performance, you should catch the list changes on the server side instead: deploy a solution with a feature registering an SPItemEventReceiver on your list.
I understand your reluctance to push server-side code to the SP farm; without it, you spare yourself discussions and explanations with the customer's administrators. However, some tasks are more efficient on the server, or even feasible only there. You can consider Sandbox Solutions for such functionality. They are deployed not to SP by the farm administrator, but to a site collection by a site collection administrator, through a friendly web UI. This requires fewer privileges and more relaxed company policies to comply with, and can be better accepted by your customers. You can develop, test and even use your solution in your own site collection without affecting the entire farm. Microsoft recommends designing even farm-wide solutions with as much functionality as possible in sandboxed solutions, putting only the necessary minimum into a farm solution.
If deploying the entire application as a sandboxed solution is not possible, you could combine a sandboxed solution gathering the changes with an external web site requesting the gathered data from the site collection, or, in your case, with the client-only application you are speaking about. (Sandboxed solutions have one big limitation: you cannot make a web request from inside the site collection to the outside; you can only access the site collection from outside.)
--- Ferda

Is this the right architecture for our MMORPG mobile game?

These days I am trying to design the architecture of a new MMORPG mobile game for my company. This game is similar to Mafia Wars, iMobsters, or RISK. The basic idea is to prepare an army to battle your opponents (online users).
Although I have previously worked on multiple mobile apps, this is something new to me. After a lot of struggle, I have come up with an architecture, which I describe here with the help of a high-level flow diagram.
We have decided to go with a client-server model. There will be a centralized database on the server. Each client will have its own local database which will remain in sync with the server. This database acts as a cache for storing things that do not change frequently, e.g. maps, products, inventory, etc.
With this model in place, I am not sure how to tackle the following issues:
What would be the best way of synchronizing the server and client databases?
Should an event get saved to the local DB before updating it on the server? What if the app terminates for some reason before saving changes to the centralized DB?
Will simple HTTP requests serve the purpose of synchronization?
How do we know which users are currently logged in? (One way could be to have the client keep sending a request to the server every x minutes to notify that it is active, and otherwise consider the client inactive; a sketch of this follows the list.)
Are client-side validations enough? If not, how do we revert an action if the server does not validate something?
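A minimal sketch of the heartbeat idea from the list above: every authenticated request (or an explicit ping every few minutes) refreshes a timestamp, and anyone silent for too long counts as offline. The names and the storage choice (an in-memory dictionary) are illustrative assumptions:

    using System;
    using System.Collections.Concurrent;

    static class PresenceTracker
    {
        static readonly TimeSpan Timeout = TimeSpan.FromMinutes(5);
        static readonly ConcurrentDictionary<int, DateTime> lastSeen =
            new ConcurrentDictionary<int, DateTime>();

        // Call on every request or explicit ping from the client.
        public static void Touch(int playerId)
        {
            lastSeen[playerId] = DateTime.UtcNow;
        }

        // "Online" here just means "heard from recently".
        public static bool IsOnline(int playerId)
        {
            DateTime seen;
            return lastSeen.TryGetValue(playerId, out seen) &&
                   DateTime.UtcNow - seen < Timeout;
        }
    }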
I am not sure whether this is an efficient solution or how it will scale. I would really appreciate it if people who have already worked on such apps could share their experiences, which might help me come up with something better. Thanks in advance.
Additional Info:
The client side is implemented in a C++ game engine called Marmalade. This is a cross-platform game engine, which means you can run your app on all major mobile OSes. We can certainly use threading, which is also illustrated in my flow diagram. I am planning to use MySQL for the server and SQLite for the client.
This is not a turn-based game, so there is not much interaction with other players. The server will provide a list of online players, and you can battle one of them by clicking the battle button; after some animation, the result will be announced.
For database synchronization I have two solutions in mind:
1. Store a timestamp for each record, and keep track of when the local DB was last updated. When synchronizing, select only those rows that have a greater timestamp and send them to the local DB. Keep an isDeleted flag for deleted rows, so every deletion simply behaves as an update. But I have serious doubts about performance, as for every sync request we would have to scan the complete DB looking for updated rows.
2. Another technique might be to keep a log of each insertion or update that takes place against a user. When the client app asks for a sync, go to this table and find out which rows of which table have been updated or inserted. Once these rows are successfully transferred to the client, remove this log. But then I think of what happens if a user uses another device: according to the log table, all updates have already been transferred for that user, even though that was done on another device. So we might have to keep track of the device as well. Implementing this technique is more time consuming, but I am not sure whether it outperforms the first one.
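For illustration only, technique 1 boils down to a query like the following, sketched here in C# even though the real client is C++/Marmalade and the proposed server DB is MySQL; table and column names are assumptions:

    using System;
    using System.Data;

    static class SyncSketch
    {
        // Server side: return every row touched since the client's last sync.
        // A delete is just an update that sets IsDeleted = 1, so it syncs too.
        public static DataTable GetChangedRows(IDbConnection conn,
                                               DateTime clientLastSync)
        {
            var cmd = conn.CreateCommand();
            cmd.CommandText =
                @"SELECT Id, Payload, IsDeleted, LastModified
                  FROM Inventory
                  WHERE LastModified > @since";
            var p = cmd.CreateParameter();
            p.ParameterName = "@since";
            p.Value = clientLastSync;
            cmd.Parameters.Add(p);

            var table = new DataTable();
            table.Load(cmd.ExecuteReader());
            return table;
        }
    }

Note that the full-scan worry largely disappears if LastModified is indexed: the sync query then touches only the rows that actually changed, not the whole table.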
I've actually worked on some of the titles you mentioned.
I do not recommend using MySQL; it doesn't scale up well, even if you shard, and if you do shard you lose most of the benefits of using a relational database in the first place.
You are probably better off using a NoSQL database. It is faster to develop against, easy to scale, and it is simple to change the document structure, which is a given for a game.
If your game data is simple you might want to try CouchDB; if you need advanced querying you are probably better off with MongoDB.
Take care of security at the start. People will try to hack the game for sure, and once you have a number of clients released it is hard to make security changes backward compatible. SSL won't do much, as the end user is the problem, not an eavesdropper. Signing or encrypting your data will make it harder for a user to add items and gold to their account.
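As a sketch of the signing idea, assuming a shared secret compiled into the client; all names are hypothetical and key management is deliberately hand-waved here:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class PayloadSigner
    {
        static readonly byte[] Secret =
            Encoding.UTF8.GetBytes("per-build-secret"); // assumption

        public static string Sign(string payload)
        {
            // MAC over the payload; a casually edited request won't verify.
            using (var hmac = new HMACSHA256(Secret))
                return Convert.ToBase64String(
                    hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }

        public static bool Verify(string payload, string signature)
        {
            // Server recomputes the MAC (a constant-time comparison
            // would be preferable in production code).
            return Sign(payload) == signature;
        }
    }

Of course, a secret embedded in a client can eventually be extracted, which is exactly why the rest of this answer insists the server should own the game logic.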
You should also define your architecture to support multiple clients without having a bunch of ifs and case statements. Read the client version and dispatch that client to the appropriate codebase.
Have a maintenance mode with flags for upgrading, maintenance, etc. It will cut you some slack if you need to re-shard your DB or any other change that might require downtime.
Client-side validations are not enough, especially if you are using in-app purchases. I agree with the post above: the server should control the game logic.
As for DB sync, it's best to memcache read-only data. Typical examples are buyable items, maps, news, etc. User data is harder, as you might not be able to afford losing any modified data. The easiest setup is to cache user data for a couple of hours and write directly to the DB every time. If you are using NoSQL it will probably withstand a high load without the need for a persistence queue.
I see two potential problems hidden in the fact that you store all the state on the client and then update the state on the server using a background thread.
How can the server validate the data being posted? If someone hacked your application, they could modify the code so that whenever they swing their sword (or whatever they do in your game), it is always a hit. Doing that in a single-player game is not that big a deal, but doing it in an MMORPG can ruin the experience for everyone else. So the server should validate every update of data, or even better, the server should be in charge of every business rule. So when you swing your sword against an opponent, that should be a server call, and the server returns whether or not it is a hit and how many hit points the opponent lost.
What about interaction with other players (since you say it is an MMORPG, there will be interaction with other players)? Since you update the server and get updates in a background thread, interaction will be sluggish. When you communicate with another character, you first have to wait for your background thread to sync data, but you also have to wait for the background thread of the other player to sync data.
Looks nice. But what is the client side made of? Web? Can you use threading to synchronize both DBs? I would make the game interact immediately with the local DB, and let some background mechanism do the sync (something like a snapshot). This leads me to think of MySQL replication; I think it is worth a try, though I have never done it myself. It may also bring you answers to other questions. But what about the load (how many customers are connected at once)?
http://dev.mysql.com/doc/refman/5.0/en/replication.html
Make your client issue commands to the server ("hit player") and have the server send (relevant) events to the client ("player was killed"). I wouldn't advise going with data synchronization; the server should be responsible for all important game decisions.
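A minimal sketch of that command/event split, with hypothetical types:

    // Client -> server: pure intent, no outcome.
    class AttackCommand
    {
        public int AttackerId;
        public int TargetId;
    }

    // Server -> clients: what actually happened, decided authoritatively.
    class PlayerDamagedEvent
    {
        public int TargetId;
        public int HitPoints; // computed on the server, never by the client
    }

The client never computes damage; it only renders whatever the event says.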

Subscription website architecture questions + SQL Server & .NET

I have a few questions about the architecture of a subscription service I am about to embark on and I am looking for some feedback on how best to set it up.
I won't have as large a number of customers as Basecamp, maybe a few hundred, and I was wondering what would be a solid architecture for setting up the customer sites. I'm running SQL Server and .NET on a dedicated machine. Should I create a new database for each customer, so as to have control and isolation of data, or keep them all in one database?
I am also thinking of creating a subdomain for each customer so modifications can be made to each site as needed. The customer URLs would look like this:
https://customer1.foobar.com
https://customer2.foobar.com
I am going to have the ability to 'plug in' reports that will be uploaded to the site so each customer can customize as needed. Off the top of my head, this necessitates having each subdomain on its own codebase for the uploading of these reports.
So on the main site the customer would sign up for their new subscription, and I would programmatically create a new directory for the customer from the main code base, then create a subdomain pointing to the new directory, and finally their database.
Does this sound about right? Am I on the right track? How do other such sites accomplish the same thing?
Thanks for letting me bend your ear for a bit on this.
From a maintenance perspective, having a virtual directory for each customer scares me. Having done something similar, I would create separate domain pointers as you are intimating. Then you can check the referral headers to see what should be displayed. I would probably create one main site template and dynamically brand it for each customer. You can still create separate folders for customer-specific reports, or for when you really need custom pages unique to that customer. I just wouldn't make each customer their own site.
The advantage of separate sites (including databases) is that the fate of one client isn't bound to all the others. It'd be easier to trial an upgrade on a subset before deploying to everyone else. The big issue here, as Scot points out, is time. You'd want to have things as automated as possible (and well tested), etc. It's also easy when a client leaves: you can always just back up their database and send it to them (for example).
Auto-provisioning new sites and databases isn't easy, and the account that does that will need plenty of privileges - so your security testing will need to be better than usual.
A multi-tenancy approach is good for minimizing your time, but you do have to be careful: you don't want customers' data getting mixed up.
One approach that will work, within the one app (and database), is to make use of HttpHandlers (the MVC framework, perhaps) so that some sort of client identifier is part of the URL, but the folder doesn't have to physically exist (or virtually, in the IIS sense). That way you don't have to worry about getting folder permissions correct; but you do have to be careful about correctly identifying clients and their ids, and about making sure clients can't make calls using an id that isn't theirs.
https://www.foobar.com/[clientid]/subscriptions
The advantage of this is that it's relatively straightforward: everything is in the application, and you don't have to worry about adding new DNS records, setting directory and/or database permissions, etc.
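As a sketch of that URL scheme in ASP.NET MVC; the route, controller, and UserOwnsClient helper are hypothetical:

    using System.Web.Mvc;
    using System.Web.Routing;

    public static class TenantRoutes
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            // Matches e.g. /client42/subscriptions/index -- no physical or
            // virtual folder per customer is needed.
            routes.MapRoute(
                "Tenant",
                "{clientId}/{controller}/{action}",
                new { controller = "Home", action = "Index" });
        }
    }

    public class SubscriptionsController : Controller
    {
        public ActionResult Index()
        {
            var clientId = (string)RouteData.Values["clientId"];

            // The caution from the answer above: always confirm the logged-in
            // user actually owns this id, or customers can browse each other.
            if (!UserOwnsClient(User.Identity.Name, clientId))
                return new HttpUnauthorizedResult();

            return View();
        }

        private bool UserOwnsClient(string userName, string clientId)
        {
            // Hypothetical lookup against the membership store.
            return true; // placeholder
        }
    }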

Setting up a Reporting Server to liberate resources from a webserver

Yay, first post on SO! (Good work Jeff et al.)
We're trying to solve a bottleneck in one of our web-applications that was introduced when we started allowing users to generate reports on-demand.
Our infrastructure is as follows:
1 server acting as a Webserver/DBServer (ColdFusion 7 and MSSQL 2005)
It's serving a web-application for our backend users and a frontend website. The reports are generated by the users from the backend so there's a level of security where the users have to log in (web based).
During peak hours, report generation slows the web application and the frontend website to unacceptable speeds, due to SQL Server using resources for the huge queries and ColdFusion afterwards generating multi-page PDFs.
We're not exactly sure what the best practice would be to remove some load, but restricting access to the reports isn't an option at the moment.
We've considered denormalizing data to other tables to simplify the most common queries, but that seems like it would just push the issue further.
So we're thinking of getting a second server and using it as a "report server" with a replicated copy of our DB against which the queries would be run. This would fix one issue, but the second remains: generating PDFs is resource intensive.
We would like to offload that task to the reporting server as well, but since we're inside a secured web application, we can't just fire an HTTP GET to create PDFs, with the user logged in to the web application on server 1 and the PDF displayed in the web application but generated/fetched on server 2, without validating the user's credentials...
Anyone have experience with this? Thanks in advance Stack Overflow!!
"We would like to offload that task to the reporting server as well, but being in a secured web-application we can't just fire HTTP GET to create PDFs with the user logged in the web-application from server 1 and displaying it in the web-application but generating/fetching it on server 2 without validating the user's credential..."
Why can't you? You're using the world's easiest language for writing webservices. Here are my suggestions.
First, move the database to its own server, thus having CF and SQL Server on separate servers. The first reason to do this is performance; as already mentioned, having both CF and SQL on the same server isn't an ideal setup. The second reason is security: if someone is able to hack your webserver, they're right there next to your data. You should have a firewall in place between your CF and SQL servers to give you more security. The last reason is scalability: if you ever need to throw more resources at your database or cluster it, that's easier when it's on its own server.
Now for the webservices. What you can do is install CF on another server and write webservices to handle the generation of reports. Just lock down the new CF server to accept only SSL connections and pass the login credentials of the users to the webservice. Inside your webservice, authenticate the user before invoking the methods that generate the report.
Now for the PDFs themselves. One of the methods I've used in the past is generating a hash based on some of the parameters passed (the user credentials and the generated SQL used to run the query); once the PDF is generated, you assign the hash to the name of the PDF and save it on disk. Now you have a simple caching system: you can look to see if the PDF already exists and, if so, return it; otherwise generate it and cache it.
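The same caching idea sketched in C# for illustration (the answer describes doing it in ColdFusion); the cache directory and hash inputs are assumptions:

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    static class PdfCache
    {
        const string CacheDir = @"C:\reports\cache"; // assumption

        public static string GetOrCreate(string userKey, string reportSql,
                                         Func<byte[]> generatePdf)
        {
            // Hash the inputs that fully determine the report's content.
            string hash;
            using (var sha = SHA256.Create())
                hash = BitConverter.ToString(
                    sha.ComputeHash(Encoding.UTF8.GetBytes(userKey + "|" + reportSql))
                ).Replace("-", "");

            var path = Path.Combine(CacheDir, hash + ".pdf");
            if (!File.Exists(path))                    // cache miss: do the work once
                File.WriteAllBytes(path, generatePdf());
            return path;                               // cache hit: just serve the file
        }
    }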
In closing, your problem is not something that most haven't seen before. You just need to do a little work and your application will be magnitudes faster.
The most basic best practice is to not have the web server and db server on the same hardware. I'd start with that.
You have to separate generating the PDF from doing the calculations; they are two separate steps.
What you can do is
1) Create a pre-calculated report table, repopulated daily with all the calculated values for all your reports.
2) When someone requests a PDF report, have the report do a simple select query against the pre-calculated values. That is much less DB effort than calculating on the fly. You can use ColdFusion to generate the PDF if it needs the fancy PDF settings. Otherwise you may be able to get away with using the raw PDF format (it's similar to HTML markup) in text form, or use another library (cfx_pdf, a suitable Java library, etc.) to generate them.
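A rough sketch of those two steps, with illustrative table and column names; the nightly rebuild would be scheduled, e.g. as a SQL Server Agent job:

    using System.Data.SqlClient;

    static class ReportPrecalc
    {
        // Step 1: run on a schedule (e.g. nightly) -- all the heavy
        // aggregation happens here, off the request path.
        public static void RebuildSummary(SqlConnection conn)
        {
            new SqlCommand(
                @"TRUNCATE TABLE ReportSummary;
                  INSERT INTO ReportSummary (CustomerId, Month, Total)
                  SELECT CustomerId, MONTH(OrderDate), SUM(Amount)
                  FROM Orders
                  GROUP BY CustomerId, MONTH(OrderDate);", conn).ExecuteNonQuery();
        }

        // Step 2: the on-demand report is now a cheap indexed read.
        public static SqlDataReader ReadSummary(SqlConnection conn, int customerId)
        {
            var cmd = new SqlCommand(
                "SELECT Month, Total FROM ReportSummary WHERE CustomerId = @id",
                conn);
            cmd.Parameters.AddWithValue("@id", customerId);
            return cmd.ExecuteReader();
        }
    }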
If the users don't need to download the reports and only need to view/print them, could you get away with FlashPaper?
An alternative is to build a report queue. Whether or not you put it on the second server, if you can get away with it, CF could put report requests into a queue and email the reports to the users as they get processed.
You can then control the queue through a scheduled process that runs as regularly as you like and creates only a few reports at a time. I'm not sure if it's a suitable approach for your situation.
As mentioned above, using stored procedures may also help, and make sure you have your indexes set correctly in SQL Server. I once had a 3-minute query that I brought down to 15 seconds because I had forgotten to declare additional indexes on heavily used columns in each table.
Let us know how it goes!
In addition to the advice to separate the web & DB servers, I'd try to:
a) move queries into stored procedures, if you're not using them yet;
b) generate reports on a scheduler and keep them cached in special tables in a ready-to-use state, so customers only select them with a few fast queries -- this should also decrease report building time for customers.
Hope this helps.
