How can I create a simple database holding clients' info (email addresses, phone, affiliation) and info on the different campaign drives (current interests, ongoing research, etc.)? The database should allow a user to log daily information such as:
Date when the client was contacted;
Date when the client contacted me back;
Status of the interaction (interested client, not interested, considering it, etc.);
Action required for the current status (with automated alerts to prompt an action);
Log date of the action taken;
Data analysis to see the evolution of ongoing and past email exchanges with clients;
Multiple users working on the same database;
Checking that there are no repeated clients (email address verification);
Considering various email addresses for the same person, and checking all of them when looking for duplicate contacts;
Importing the current database from Excel (including duplicates, and considering that the same client can be included in different campaigns but should never be contacted twice within the same one);
A simple interface to access the database and perform all daily actions, such as logging information and uploading new contacts, with theme information for each campaign drive.
Important: I don't know much about programming and this should be something simple to develop and use. Any ideas?
It sounds like you are looking for more than just a database (i.e., a place to store stuff) and want a complete solution: UI, DB, reports, etc. Assuming this is correct, and that you want to use Microsoft technologies (there are tons of other alternatives: Ruby on Rails, Java, PHP, MySQL, MongoDB, etc.), I would suggest studying NerdDinner, written by Microsoft in ASP.NET MVC 3 with SQL Server as the RDBMS, as its concept matches some of your requirements above.
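To make the duplicate-check requirement concrete, here is a minimal sketch assuming Entity Framework Code First (the EntityFramework NuGet package) on SQL Server; every class, property, and method name below is a hypothetical illustration, not an existing schema:

    using System.Collections.Generic;
    using System.Data.Entity; // EntityFramework NuGet package
    using System.Linq;

    public class Client
    {
        public int ClientId { get; set; }
        public string Name { get; set; }
        public string Affiliation { get; set; }
        // One client can own several addresses; all of them take part
        // in the duplicate check.
        public virtual ICollection<ClientEmail> Emails { get; set; }
    }

    public class ClientEmail
    {
        public int ClientEmailId { get; set; }
        public int ClientId { get; set; }
        public string Address { get; set; } // stored trimmed and lower-cased
    }

    public class CrmContext : DbContext
    {
        public DbSet<Client> Clients { get; set; }
        public DbSet<ClientEmail> ClientEmails { get; set; }

        // Returns the existing client that owns any of the candidate
        // addresses, or null if none of them are known yet.
        public Client FindDuplicate(IEnumerable<string> candidateAddresses)
        {
            var normalized = candidateAddresses
                .Select(a => a.Trim().ToLowerInvariant())
                .ToList();

            return Clients.FirstOrDefault(
                c => c.Emails.Any(e => normalized.Contains(e.Address)));
        }
    }

Running FindDuplicate over each row of the Excel import before inserting it would cover both the "no repeated clients" and the "several addresses for one person" requirements.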
I am working on a project with someone who has developed a desktop app for people who run charity voucher companies. They have customers who have accounts, who put money into their accounts, and who write charity vouchers (a bit like cheques) to charities.
He wants me to write a web site where both charities and customers can log in and see details of their accounts, vouchers issued, etc.
As most of the data will be coming from his app to my web site, we agreed to use his primary key IDs in my database, so it will be easy to match up the data.
We're quite well into it, and I've discovered that he is a staunch opponent of relational databases. His database doesn't have any foreign key references at all, just IDs in tables. He does individual queries on each table to see if the related data is there.
I want to use Entity Framework, but am not sure if I can, as I can't be sure that the data he sends me will be complete. For example, he might send me details of a voucher, which will have a customer ID and a charity ID, but the customer may not have been sent, so the customer ID on the voucher won't exist in the customers table.
Any ideas what I can do? I can't have foreign links between my tables, as this will throw errors whenever it comes across incomplete data, but if I don't have any links, then I've lost the whole benefit of using EF.
My only thought so far is to leave the tables unrelated, and then add partial classes for the entities, with properties that will look like navigation properties, but that will check to see if the "foreign" data is there, and if so, return it.
This might work, but it seems like a lot of effort. Does anyone have any better suggestions for how to handle this situation?
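For illustration, here is a rough sketch of that partial-class idea, assuming hypothetical Voucher/Customer entities and an EF context (all names are made up):

    using System.Data.Entity;
    using System.Linq;

    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    public partial class Voucher
    {
        public int VoucherId { get; set; }
        public int CustomerId { get; set; } // plain ID, no FK constraint
    }

    // The "navigation-like" half of the partial class: it checks whether
    // the related row actually arrived before returning it, so incomplete
    // feeds yield null instead of an error.
    public partial class Voucher
    {
        public Customer GetCustomerOrNull(VoucherContext db)
        {
            return db.Customers
                     .SingleOrDefault(c => c.CustomerId == this.CustomerId);
        }
    }

    public class VoucherContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
        public DbSet<Voucher> Vouchers { get; set; }
    }

Taking the context as a method parameter (rather than exposing a true navigation property) keeps the lookup explicit, since there is no relationship for EF to traverse.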
This is a very late answer, but since I stumbled across this question, it might be useful for others.
Microsoft recently announced that EF Core (within ASP.NET Core 2.1) will have a provider for Cosmos DB:
Cosmos DB provider preview: We have been developing an EF Core provider for the DocumentDB API in Cosmos DB. This is the first document database provider we have produced, and the learnings from this exercise are going to inform improvements in the design of the subsequent release after 2.1. The current plan is to publish an early preview of the Cosmos DB provider in the 2.1 timeframe.
NOTE: A short video containing major features to be delivered with ASP.NET Core 2.1 can be seen here.
We maintain a Software as a Service (SaaS) web application that sits on top of a multi-tenant SQL Server database. There are about 200 tables in the system, the biggest with just over 100 columns; at last look the database was about 10 gigabytes in size. We have about 25 client companies using the application, entering their data and running reports.
The single-instance architecture is working very effectively for us - we're able to design and develop new features that are released to all clients every month. Each client's experience can be configured through the use of feature toggles, data dictionary customization, CSS skinning, etc.
Our typical client is a corporation with several branches, one head office, and sometimes its own in-house IT/software development teams.
The problem we're facing now is that a few of the clients are undertaking their own internal projects to develop reporting, data warehousing and dashboards based on the data presently stored in our multi-tenant database. We see it as likely that the number and sophistication of these projects will increase over time and we want to cater for it effectively.
At present, we have a "lite" solution whereby we expose a secured XML webservice that clients can call to get a full download of their records from a table. They specify the table, and we map that to a purpose-built stored proc that returns a fixed number of columns. Currently clients are pulling about 20 tables overnight into a local SQL database that they manage. Some clients have tens of thousands of records in a few of these tables.
This "lite" approach has several drawbacks:
1) Each client needs to develop and maintain their own data-pull mechanism, and deal with all the logging, error handling, etc.
2) Our database schema is constantly expanding and changing. The stored procs they are calling have a fixed number of columns, but occasionally when we expand an existing column (e.g. turn a varchar(50) into a varchar(100)) their pull will fail because it suddenly exceeds the column size in their local database.
3) We are starting to amass hundreds of different stored procs built for each client and their specific download expectations, which is a management hassle.
4) We are struggling to keep up with client requests for more data. We provide a "shell" schema (i.e. a copy of our database with no data in it) and ask them to select the tables they need to pull. They invariably say "all of them" which compounds the changing schema problem and is a heavy drain on our resources.
Sorry for the long-winded question, but what I'm looking for is an approach to this problem that other teams have had success with. We want to securely expose all their data to them in a way they can most easily use, but without getting caught in a constant process of negotiating data exchanges and cleaning up after schema changes.
What's worked for you?
Thanks,
Michael
I've worked for a SaaS company that went through a similar exercise some years back, and web services are probably the best solution here. Incidentally, one of your "drawbacks" is actually a benefit: customers should be encouraged to do their own data pulls, because each customer's needs regarding timing and amount of data will be different.
Now, instead of a "lite" solution, you should look at building out a WSDL with separate CRUD calls for each table and good filtering capabilities. Also, make sure you have change timestamps on the records of each table; this way a customer can hit each table and immediately pull only the records that have been updated since the last time they pulled.
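As a rough sketch of what one of those calls would do under the hood (table and column names like Orders and LastModifiedUtc are hypothetical), the incremental pull boils down to a timestamp filter:

    using System;
    using System.Data.SqlClient;

    class IncrementalPull
    {
        static void Main()
        {
            // The high-water mark persisted from the previous pull.
            DateTime lastPullUtc = new DateTime(2012, 1, 1, 0, 0, 0, DateTimeKind.Utc);

            using (var conn = new SqlConnection(
                "Server=.;Database=Tenant;Integrated Security=true"))
            using (var cmd = new SqlCommand(
                "SELECT OrderId, Status, LastModifiedUtc " +
                "FROM Orders WHERE LastModifiedUtc > @since " +
                "ORDER BY LastModifiedUtc", conn))
            {
                cmd.Parameters.AddWithValue("@since", lastPullUtc);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Upsert each changed row locally, then persist the
                        // max LastModifiedUtc seen as the next @since value.
                        Console.WriteLine("{0} changed at {1}",
                            reader.GetInt32(0), reader.GetDateTime(2));
                    }
                }
            }
        }
    }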
Will it be easy? Not a chance. But if you want scalability, it's the only route to go.
Good luck.
For a contact management web app that allows tenants to upload lists of contact records (having varying field structures) and then displays these back to multiple users (within the tenant) one at a time, is there a good PaaS/SaaS database solution to handle this?
-It would need to allow uploading lists with custom fields (20K records per list)
-It would need to allow updating of fields when users edit them (a user may update 60 records a minute)
-It would need to allow running queries against the lists to determine the next record to display (this part uses the set fields)
Obviously a scalable, easy to use, hassle free as possible design is the aim here.
Will it be easier than developing a local database design?
(I'd prefer not to use a full PaaS; I'd like to keep the application tier separate.)
If your queries are always against the set fields and not the custom fields, then any database would do, assuming you keep the "custom" fields in blob format (XML, for example).
If that is the case, then your local database design is pretty simple. If you host on Amazon EC2, then you can use either of their SaaS database solutions (MySQL, or even SimpleDB).
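A minimal sketch of that set-fields-plus-blob layout (all names here are illustrative):

    using System.Collections.Generic;
    using System.Xml.Linq;

    public class ContactRecord
    {
        // Set fields: real columns, usable in the "next record to
        // display" queries.
        public int ContactId { get; set; }
        public int TenantId { get; set; }
        public string Status { get; set; }

        // Custom fields: one XML blob per record, so a tenant can upload
        // a list with brand-new field names without any schema change.
        public string CustomFieldsXml { get; set; }

        // Expands e.g. <fields><field name="Region">West</field></fields>
        // into a dictionary for display/editing.
        public IDictionary<string, string> CustomFields()
        {
            var result = new Dictionary<string, string>();
            foreach (var field in XElement.Parse(CustomFieldsXml).Elements("field"))
                result[(string)field.Attribute("name")] = field.Value;
            return result;
        }
    }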
-Dave
I have a service which processes emails in a mailbox and, once an email is processed, stores some information from it in the database. At the minute the schema looks something like:
ID
Sender
Subject
Body (result of being parsed/stripped to plain text)
DateReceived
I am building a web front-end for the database and the main purpose of storing the emails is to provide the facility for users to look back and see what they have sent. However, another reason is for auditing purposes on my end.
The emails at the moment are being moved to specific mailbox folders. So what I plan to start doing is once the email is processed, record it in the database and delete the email from the mailbox instead of just moving it.
So a couple of questions...
1) Is it a good idea to delete the actual email from Exchange? Or is it better to hold onto it just in case?
2) To keep the size of the fields down, I was stripping the HTML out of the emails. Is this a bad idea? Should I just store the email as it is received?
Any other advice/suggestions would be great.
In both cases I think you should hold onto the original emails. Storage is cheap, but if disk space is really an issue, look to compression rather than excision to solve it.
Both of your use cases (historical record and audit) will be better served by storing the complete, unabridged email in the database. Once you start tampering with the data, albeit "just" removing formatting, it becomes difficult to prove that you haven't edited it in other, more significant ways - especially if you have deleted the original email instead of archiving it.
You don't say what business you're in, but the other thing to remember is whether there are any data retention policies active within your organisation or in the wider jurisdiction. Compliance is becoming gnarlier all the time.
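If you do go the compression route, a quick sketch of the idea (gzip the raw message into a varbinary column instead of stripping content out):

    using System.IO;
    using System.IO.Compression;
    using System.Text;

    static class EmailBlob
    {
        public static byte[] Compress(string rawEmail)
        {
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    byte[] bytes = Encoding.UTF8.GetBytes(rawEmail);
                    gzip.Write(bytes, 0, bytes.Length);
                }
                return output.ToArray();
            }
        }

        public static string Decompress(byte[] blob)
        {
            using (var input = new MemoryStream(blob))
            using (var gzip = new GZipStream(input, CompressionMode.Decompress))
            using (var reader = new StreamReader(gzip, Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }
    }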
I would keep the messages in a specific mailbox folder, as you are doing, and probably wouldn't even save anything to a database, given that you can access the mailbox from within your application.
Over the years, the Exchange team has developed several APIs for accessing a mailbox's contents.
With Exchange Server 2007 and 2010, the recommended API would be Exchange Web Services (EWS), which can be used from any language/environment capable of accessing web services.
If you are developing in a .NET language (C# or VB.NET, for instance), your best bet would be the EWS Managed API.
If you are really going to do something meaningful with the body, you can save the results as named properties (extended properties in EWS parlance) on the message itself.
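Something along these lines with the EWS Managed API (the credentials, mailbox address, and property name below are placeholders):

    using System;
    using Microsoft.Exchange.WebServices.Data;

    class StoreParsedBody
    {
        // A named property to hold the parsed plain-text body.
        static readonly ExtendedPropertyDefinition ParsedBodyProp =
            new ExtendedPropertyDefinition(
                DefaultExtendedPropertySet.PublicStrings,
                "ParsedBody", MapiPropertyType.String);

        static void Main()
        {
            var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
            service.Credentials = new WebCredentials("user@example.com", "password");
            service.AutodiscoverUrl("user@example.com",
                url => true); // accept redirects; tighten this in real code

            foreach (Item item in service.FindItems(
                WellKnownFolderName.Inbox, new ItemView(50)))
            {
                EmailMessage msg = EmailMessage.Bind(service, item.Id,
                    new PropertySet(BasePropertySet.FirstClassProperties));

                string plainText = msg.Body.Text; // parse/strip as needed
                msg.SetExtendedProperty(ParsedBodyProp, plainText);
                msg.Update(ConflictResolutionMode.AutoResolve);
            }
        }
    }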
There are other APIs with corresponding functionality for previous versions of Exchange.
If I am building a CRM web application to sell as a membership service, what is the best method to design and deploy the database?
Do I have one database that houses hundreds of records per table, or do I deploy multiple databases for different clients?
Is it really an issue to use a single database, since I believe sites like Flickr do?
Serving multiple clients from one application is called "multi-tenant". See, for example, the article "Multi-Tenant Data Architecture" from Microsoft.
In a situation like a CRM system, you will probably need to have separate instances of your database for each customer.
I say this because, if you'd like to win larger clients, most companies have security policies in place regarding customer data. If you store their customer data in the same database as another customer's, you're running the risk of exposing one company's confidential data to another company (a competitor, etc.).
Sites like Flickr don't have to worry about this as much since the majority of us out on the Interwebs don't have such strict policies regarding our personal data.
Long term, it is easiest to maintain one database with multiple clients' data in it. Think about deployment, backup, etc. However, this doesn't prevent you from having several instances of this database, each containing a subset of the full client dataset. I'd recommend growing the number of databases only after you have established the usefulness/desirability of your product. Complex infrastructure is not necessary if you have no traffic...
So I'd just put a client ID in the relevant tables, and smile when client 4 comes in and the extent of your new deployment is one insert statement.
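To make that concrete, a tiny sketch of the single-database approach (entity and context names are made up):

    using System.Data.Entity;
    using System.Linq;

    public class Account
    {
        public int AccountId { get; set; }
        public int ClientId { get; set; } // the tenant discriminator
        public string Name { get; set; }
    }

    public class MembershipContext : DbContext
    {
        public DbSet<Account> Accounts { get; set; }
    }

    static class TenantQueries
    {
        // Every read for a tenant goes through a filter like this, and
        // onboarding "client 4" really is just inserting rows with
        // ClientId = 4.
        public static IQueryable<Account> AccountsFor(
            MembershipContext db, int clientId)
        {
            return db.Accounts.Where(a => a.ClientId == clientId);
        }
    }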