Why would you not want to consolidate Mission Critical Databases?

Suppose you wanted to consolidate all of your mission critical databases into one instance to save some licensing money: what would the potential risks be, and are there any good articles or case studies on this? I realize this is a terrible idea, but I have somebody who wants to do this and is willing to maximize the hardware resources if needed. I am trying to present him with something quantifiable, or some articles that can steer him away from doing this.
There are three big mission critical databases: Billing, Dynamics CRM, and an in-house application that tracks transactions. These are high-volume databases for a small/mid-sized company. I need quantifiable evidence or a good case study to convince somebody that this is the wrong path to take. Any other advice on how I can convince this person would be helpful as well.

It depends. At first glance, it may look like a bad idea. On the other hand, if the goal is to consolidate everything on one server and then replicate that server in a remote environment, then you are on the way to a more reliable system. IT might prefer having everything in one place rather than dealing with mission critical servers spread across multiple locations.
One major issue is the need for a larger machine. If any of the systems use software whose license depends on the size of the machine, you might end up spending more money because you need a larger server. I've seen this happen with SAS licensing, for instance.
Perhaps the biggest issue, though, is that the different applications are probably on different development cycles -- whether developed in-house or by outside vendors. So updating the hardware/operating system/software can become a nightmare: a fix or enhanced functionality in A might require an OS patch which, in turn, has not been tested with B. This maintenance issue is the reason I would strongly advocate separate servers.
That said, mission critical applications are exactly that: mission critical. The driving factor should not be saving a few dollars on hardware. The driving factors should be reliability, maintenance, performance, sustainability, and recovery.

The comments made by Oded, Catcall and Gilbert are spot on.
The bank where I learnt the IT trade ran its entire core business on a single MVS (later Z/OS) mainframe, which ran a single DBMS and a single transaction processor (unless you counted TSO as a transaction processor).
The transaction processor went down regularly (say, once a day). It never caused the bank to go broke because it was always up and running again in less than a minute. Mileage may vary, but losing one minute of business time in an entire working day (480 minutes, or < 0.25%) really isn't dangerously disruptive.
The single DBMS went down too, at times (say, twice a month). I can still hear the sysprogs yelling "DBMS is down" over the fence to the helpdesk officers, meaning "expect to be getting user calls". It never caused the bank to go broke because it was always up and running again in a matter of minutes. Mileage may vary, but losing a couple of minutes of business time each month really shouldn't be dangerously disruptive.
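For concreteness, here is the back-of-the-envelope availability math behind those figures, as a quick Python calculation. The 480-minute day is from the story; the per-outage durations are my assumptions.

```python
# Rough availability math for the two incidents described above.
# Assumptions: 480-minute business day, ~21 business days per month,
# ~1 minute per transaction-processor outage, ~2 minutes per DBMS outage.

tp_loss = 1 / 480                        # TP down once a day for ~1 minute
dbms_loss = (2 * 2) / (480 * 21)         # DBMS down twice a month, ~2 min each
print(f"TP:   {tp_loss:.3%} of business time lost")    # ~0.208%, i.e. < 0.25%
print(f"DBMS: {dbms_loss:.3%} of business time lost")  # ~0.040%
```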
The one time I do remember when the bank was really close to bankruptcy was when the development team had made a mess out of a new project in the bank's absolute core business, and the bank was as good as completely out of business (its REAL business) for three or four days in a row. That wasn't 0.25% loss of business time, but almost 100 TIMES more.
Moral of my story? Two of them. (a) It's all about risk assessment (= probability assessment) and risk-weighted (= probability-weighted) cost estimation. (b) If you ask a question on SO (which implies a recognition/expectation that answerers have more expertise than you on the subject matter), and people like Oded and Catcall provide you with a concise answer that is accurate and to the point, then don't ask for papers or case studies to back up their answers. If you don't want to accept the experts' expertise, then why bother asking anything in the first place?

Related

How to share data across an organization

What are some good ways for an organization to share key data across many departments and applications?
To give an example, let's say there is one primary application and database to manage customer data. There are ten other applications and databases in the organization that read that data and relate it to their own data. Currently this data sharing is done through a mixture of database (DB) links, materialized views, triggers, staging tables, re-keying information, web services, etc.
Are there any other good approaches for sharing data? And, how do your approaches compare to the ones above with respect to concerns like:
duplicate data
error prone data synchronization processes
tight vs. loose coupling (reducing dependencies/fragility/test coordination)
architectural simplification
security
performance
well-defined interfaces
other relevant concerns?
Keep in mind that the shared customer data is used in many ways, from simple single-record queries to complex multi-predicate, multi-sort joins with other organizational data stored in different databases.
Thanks for your suggestions and advice...
I'm sure you saw this coming, "It Depends".
It depends on everything. And the solution to sharing Customer data for department A may be completely different for sharing Customer data with department B.
My favorite concept to have risen up over the years is "Eventual Consistency". The term came from Amazon and its work on distributed systems.
The premise is that while the state of data across a distributed enterprise may not be perfectly consistent now, it "eventually" will be.
For example, when a customer record gets updated on system A, system B's customer data is now stale and no longer matching. But, "eventually", the record from A will be sent to B through some process. So, eventually, the two instances will match.
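As an illustration, here is a minimal Python sketch of that A-to-B flow; the system names, data, and queue mechanism are all hypothetical, and the sleep just makes the replication lag visible.

```python
import queue
import threading
import time

# Hypothetical sketch: system A publishes customer changes to a queue;
# system B applies them asynchronously. Between publish and apply, B's
# copy is stale -- that window is the "eventual" in eventual consistency.

change_queue = queue.Queue()
customers_a = {42: {"name": "Acme Corp", "tier": "gold"}}   # system A's copy
customers_b = {42: {"name": "Acme Corp", "tier": "gold"}}   # system B's copy

def update_customer_on_a(cust_id, field, value):
    customers_a[cust_id][field] = value
    change_queue.put((cust_id, field, value))  # publish; don't wait for B

def replicate_to_b():
    while True:
        cust_id, field, value = change_queue.get()
        time.sleep(0.5)                        # simulated replication lag
        customers_b[cust_id][field] = value    # B catches up "eventually"
        change_queue.task_done()

threading.Thread(target=replicate_to_b, daemon=True).start()
update_customer_on_a(42, "tier", "platinum")
print(customers_b[42]["tier"])   # likely still "gold" -- a stale read
change_queue.join()
print(customers_b[42]["tier"])   # "platinum" -- now consistent
```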
When you work with a single system, you don't have "EC"; rather, you have instant updates, a single "source of truth", and, typically, a locking mechanism to handle race conditions and conflicts.
The more your operations are able to work with "EC" data, the easier it is to separate these systems. A simple example is a data warehouse used by Sales. They use the DW to run their daily reports, but they don't run their reports until the early morning, and they always look at "yesterday's" (or earlier) data. So there's no real-time need for the DW to be perfectly consistent with the daily operations system. It's perfectly acceptable for a process to run at, say, close of business and move over the day's transactions and activities en masse in one large, single update operation.
You can see how this requirement can solve a lot of issues. There's no contention for the transactional data, and no worry that some report data will change in the middle of accumulating a statistic because the report made two separate queries to the live database. No need for high-detail chatter to suck up network and CPU capacity during the day.
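A minimal sketch of that close-of-business pattern, using an in-memory SQLite database; the table and column names are invented for illustration.

```python
import datetime
import sqlite3

# Sketch of the close-of-business bulk load described above: the day's
# transactions move to the reporting copy in one set-based
# INSERT ... SELECT, instead of trickling over row by row all day.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ops_transactions (id INTEGER, amount REAL, tx_date TEXT);
    CREATE TABLE dw_transactions  (id INTEGER, amount REAL, tx_date TEXT);
""")

today = datetime.date.today().isoformat()
conn.executemany("INSERT INTO ops_transactions VALUES (?, ?, ?)",
                 [(1, 19.99, today), (2, 45.00, today)])

def nightly_load(conn, day):
    # One large, single update operation at close of business.
    conn.execute("""
        INSERT INTO dw_transactions
        SELECT id, amount, tx_date FROM ops_transactions WHERE tx_date = ?
    """, (day,))
    conn.commit()

nightly_load(conn, today)
print(conn.execute("SELECT COUNT(*) FROM dw_transactions").fetchone()[0])  # 2
```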
Now, that's an extreme, simplified, and very coarse example of EC.
But consider a large system like Google. As consumers of Search, we have no idea when, or how long it takes for, a result that Google harvests to show up on a search page. 1 ms? 1 s? 10 s? 10 hrs? It's easy to imagine how, if you're hitting Google's West Coast servers, you may very well get a different search result than if you hit their East Coast servers. At no point are these two instances completely consistent. But by and large, they are mostly consistent. And for their use case, their consumers aren't really affected by the lag and delay.
Consider email. A wants to send a message to B, but in the process the message is routed through systems C, D, and E. Each system accepts the message, assumes complete responsibility for it, and then hands it off to another. The sender sees the email go on its way. The receiver doesn't really miss it, because they don't necessarily know it's coming. So there is a big window of time that the message can take to move through the system without anyone concerned knowing or caring how fast it is.
On the other hand, A could have been on the phone with B. "I just sent it, did you get it yet? Now? Now? Get it now?"
Thus, there is some kind of underlying, implied level of performance and response. In the end, "eventually", A's outbox matches B's inbox.
These delays, and the acceptance of stale data, whether it's a day old or 1-5 s old, are what control the ultimate coupling of your systems. The looser this requirement, the looser the coupling, and the more flexibility you have at your disposal in terms of design.
This is true down to the cores in your CPU. Modern multi-core, multi-threaded applications running on the same system can have different views of the "same" data, only microseconds out of date. If your code can work correctly with data that is potentially inconsistent, then happy day, it zips along. If not, you need to pay special attention to ensure your data is completely consistent, using techniques like volatile memory qualifiers or locking constructs. All of which, in their way, cost performance.
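A small Python illustration of that trade-off, with threads and a shared counter standing in for cores and shared memory; the lock is the "pay for consistency" option.

```python
import threading

# Sketch of the trade-off described above: without a lock, two threads can
# read the same stale value and lose an update; the lock restores
# consistency at the cost of serializing access.

counter = 0
lock = threading.Lock()

def deposit_unsafe(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write: not atomic

def deposit_safe(n):
    global counter
    for _ in range(n):
        with lock:            # consistent, but threads now wait on each other
            counter += 1

threads = [threading.Thread(target=deposit_safe, args=(100_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 400000 with the lock; often less with deposit_unsafe
```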
So, this is the base consideration. All of the other decisions start here. Answering this can tell you how to partition applications across machines, what resources are shared, and how they are shared. What protocols and techniques are available to move the data, and how much it will cost in terms of processing to perform the transfer. Replication, load balancing, data shares, etc. etc. All based on this concept.
Edit, in response to first comment.
Correct, exactly. The game here, for example: if B can't change customer data, then what is the harm in B's customer data being out of date? Can you "risk" it being stale for a short time? Perhaps your customer data changes come in slowly enough that you can replicate from A to B almost immediately. Say the change is put on a queue that, because of low volume, gets picked up readily (< 1 s); even so, it would be "out of transaction" with the original change, so there's a small window where A has data that B does not.
Now the mind really starts spinning. What happens during that 1 s of "lag"? What's the worst possible scenario? And can you engineer around it? If you can engineer around a 1 s lag, you may be able to engineer around a 5 s, 1 m, or even longer lag. How much of the customer data do you actually use on B? Maybe B is a system designed to facilitate order picking from inventory. It's hard to imagine anything more being necessary than a customer ID and perhaps a name -- just something to grossly identify who the order is for while it's being assembled.
The picking system doesn't necessarily need to print out all of the customer information until the very end of the picking process, and by then the order may have moved on to another system that is more current with, especially, shipping information; so in the end the picking system hardly needs any customer data at all. In fact, you could EMBED and denormalize the customer information within the picking order, so there's no need or expectation of synchronizing later. As long as the customer ID is correct (and that will never change anyway) and the name (which changes so rarely it's not worth discussing), that's the only real reference you need, and all of your pick slips are perfectly accurate at the time of creation. A sketch of that idea follows.
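Here is a hypothetical sketch of such an embedded, denormalized pick slip; all field names are invented.

```python
# The customer's ID and name are copied into the order at creation time, so
# the picking system never needs to synchronize with the customer master
# again. Staleness of the name is an accepted, bounded risk.

def create_pick_slip(order_id, customer, lines):
    return {
        "order_id": order_id,
        "customer_id": customer["id"],      # stable key, never changes
        "customer_name": customer["name"],  # snapshot; staleness acceptable
        "lines": lines,                     # (sku, quantity) pairs to pick
    }

customer = {"id": 42, "name": "Acme Corp",
            "address": "1 Main St", "tier": "gold"}
slip = create_pick_slip("ORD-1001", customer, [("SKU-7", 3), ("SKU-9", 1)])
print(slip)   # address, tier, etc. are deliberately not copied -- not needed
```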
The trick is the mindset: breaking the systems up and focusing on the essential data that's necessary for the task. Data you don't need doesn't have to be replicated or synchronized. Folks chafe at things like denormalization and data reduction, especially when they come from the relational data modeling world, and with good reason: it should be approached with caution. But once you go distributed, you have implicitly denormalized. Heck, you're copying it wholesale now. So you may as well be smarter about it.
All this can be mitigated through solid procedures and a thorough understanding of workflow. Identify the risks and work up policies and procedures to handle them.
But the hard part is breaking the chain to the central DB at the beginning, and instructing folks that they can't "have it all" like they may expect when you have a single, central, perfect store of information.
This is definitely not a comprehensive reply. Sorry for the long post; I hope it adds to the thoughts presented here.
I have a few observations on some of the aspects you mentioned.
duplicate data
It has been my experience that this is usually a side effect of departmentalization or specialization. A department pioneers the collection of certain data that other specialized groups then find useful. Since those groups don't have direct access to this data, because it is intermingled with other data collections, they too start collecting and storing it, inherently duplicating it. This issue never goes away; just as there is a continuous effort to refactor code and remove duplication, there is a need to continuously bring duplicate data under centralized access, storage, and modification.
well-defined interfaces
Most interfaces are defined with good intentions, keeping other constraints in mind. However, we simply have a habit of growing out of the constraints placed by previously defined interfaces. Again, a case for continuous refactoring.
tight coupling vs loose coupling
If anything, most software is plagued by this issue. Tight coupling is usually the result of an expedient solution built under the time constraints we face. Loose coupling incurs a certain degree of complexity, which we dislike when we want to get things done. The web services mantra has been going around for a number of years, and I have yet to see a good example of a solution that completely alleviates the problem.
architectural simplification
To me this is the key to fighting all the issues you have mentioned in your question. The SIP vs. H.323 VoIP story comes to mind. SIP is very simple and easy to build against, while H.323, like a typical telecom standard, tried to envisage every possible VoIP issue on the planet and provide a solution for it. End result: SIP grew much more quickly. It is a pain to build an H.323-compliant solution; in fact, H.323 compliance is a megabuck industry.
A few architectural fads that I have grown to appreciate:
Over the years, I have come to like the REST architecture for its simplicity. It provides simple, uniform access to data and makes it easy to build applications around it. I have seen enterprise solutions suffer more from duplication, isolation, and access of data than from any other issue, like performance. REST, to me, provides a remedy for some of those ills.
To solve a number of those issues, I like the concept of central "Data Hubs". A Data Hub represents a "single source of truth" for a particular entity, but only stores IDs, no information like names etc. In fact, it only stores ID maps - for example, these map the Customer ID in system A, to the Client Number from system B, and to the Customer Number in system C. Interfaces between the systems use the hub to know how to relate information in one system to the other.
It's like a central translation service: instead of having to write specific mapping code for A->B, A->C, and B->C, with the attendant combinatorial increase as you add more systems, you only need to convert to/from the hub: A->Hub, B->Hub, C->Hub, D->Hub, etc.
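A minimal sketch of the hub idea, with hypothetical system names and IDs; note that the hub holds only ID correspondences, no names or addresses.

```python
# Sketch of the "Data Hub" described above: each system translates to/from
# the hub instead of holding pairwise mappings to every other system.

class DataHub:
    def __init__(self):
        self._maps = {}   # hub_id -> {system_name: local_id}

    def register(self, hub_id, system, local_id):
        self._maps.setdefault(hub_id, {})[system] = local_id

    def translate(self, from_system, local_id, to_system):
        for ids in self._maps.values():
            if ids.get(from_system) == local_id:
                return ids.get(to_system)
        return None

hub = DataHub()
hub.register(hub_id=1, system="A", local_id="CUST-0042")  # Customer ID in A
hub.register(hub_id=1, system="B", local_id=90417)        # Client Number in B
hub.register(hub_id=1, system="C", local_id="C-7781")     # Customer Number in C

# A -> C without any A->C specific mapping code:
print(hub.translate("A", "CUST-0042", "C"))   # C-7781
```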

What slowed down development on your project and how did you overcome it?

Is there anything that slowed development down on a project you worked on and if so how did you improve it?
We recently introduced continuous integration to solve the problem of a constantly broken build.
To increase code quality we introduced code reviews.
The client was constantly changing the static data (lookups) so we introduced a change control process around it.
Communication with our offshore colleagues was difficult, so we introduced Office Communicator.
I would be interested in hearing about things that slowed your team down and how you got round them.
Our biggest productivity loss is when developers don't reach the point where they are "programming in the zone".
Developers can be exponentially more productive if they don't have distractions and just zone into what they are doing.
Reading Stackoverflow.com and trying to figure out answers to users' questions consume quite an amount of time. Oh... wait...
Number one far beyond anything else: Failure to adequately or accurately determine requirements.
This cascades into failures to estimate timescales correctly (obviously), an inability to handle change (because you weren't in possession of the full picture at the start), and increased change requirements (actually original requirements manifesting as change because you didn't pick them up initially).
A lot of this can be mitigated to an extent by a mutual understanding that you are in an adaptive, formative cycle (i.e. agile). The really destructive thing is when you think you have good information when you don't.
Personally, I have found that overzealous project managers have caused very slow development in the past. PMs who need very detailed specs written, and meetings to cover every aspect of the project, cause lots of problems; sometimes you just need to tell them how little you are getting done. I have also found that clients changing their minds has cost me a lot of time in the past, and I am working on a sign-off process where clients get charged for wasted time when they change their minds.
Some things slowing down the project:
Multi-site (offshore) communication (sometimes even a distributed team). I tried to set up time-boxed conferences (status, requirements clarification) with strict control of the things to be discussed -- and, of course, meeting minutes at the end, so there are no further discussions of what has already been decided.
Continuous changes from the customer. These tend to be verbal, asked directly of the development team, fragmenting the development/testing team. I force a single point of communication when it comes to changes: the change control board. The handling of changes (analysis, technical solution, planning, etc.) is done within this board, and the conclusions are documented. Small changes are planned and handled in bulk, for the sake of efficiency.
Updating technical documentation. This looks like a slowdown from the development perspective, but it usually pays off in other activities (handling changes, discussions with the customer, onboarding, etc.), so it must be done :). What shouldn't be done is putting in too much detail, which adds little value. As for the right degree of detail... there's no rule for finding it :).
I almost forgot: "analysis paralysis" -- thinking too much (about a technical solution), having second thoughts, etc. This will definitely slow down development as a whole. Adopting a pragmatic attitude might help.
If a large chunk of development happens offshore, then:
Make sure your offshore colleagues have the best hardware/software resources available. I have seen this be a serious cause of productivity loss: the offshore contractor provides its developers with outdated versions of development tools, and their development machines have poor configurations (RAM size, etc.), causing serious productivity losses. Based on the requirements, ensure that both offshore and onsite developers have the same software and hardware available for development.
Moreover, network issues across continents can really slow down development. In many projects I worked on, offshore teams were connecting to databases located in the US, slowing down development and testing dramatically. It's a big demotivator when, for example, a single SELECT query takes several seconds to complete.

How to get a customer to understand the importance of a qualified DBA?

I'm part of a software development company where we do custom developed applications for our clients.
Our software uses MS SQL Server, and we have encountered some customers who do not have a DBA on staff to manage the databases, or whose DBA lacks the necessary knowledge to perform the job adequately.
We are in the process of drafting a contract with one of those customers to provide development services for new functionality on our software during the next year, where they have an amount of hours available for customization of our software.
Now they want us to also include a quote for database administration services, and the problem is that they are including a clause saying those services will be provided only when they request them.
My first reaction is that DB administration is an ongoing process, not something they can call us about once a month to come in for a day or two. I'm talking about a central 1 TB+ MS SQL cluster and 100 branch offices running MS SQL Workgroup Edition.
My question is: how can I argue that there must be a fixed number of hours every month for DBA work, and not only when their management thinks they need it (which I'm guessing would only be when they have a problem)?
PS: Maybe this will be closed as not programming related, but I'm a programmer and I have this problem. My work is software development, but I don't want to lose this client, and the only solution I can think of is to find a way for the client to understand the scope, so we can hire a qualified DBA to provide them with the service they require.
Edit: We are in a Latin American country, with clients in the Spanish-speaking region. My guess is that in more developed markets there is a culture that understands how delicate this situation is.
This is definitely one of those 'you can lead a horse to water, but you can't make them drink' situations.
My recommendation here would be to quote the DBA services as hourly, and make the rate high enough that you can outsource the work if you decide you want to. When (not if) the SQL servers start to have problems, the firm is on the hook.
I would also recommend that you include in your quote a non-optional 2-hour database technology review once per year. This is your opportunity to say: 'You spent XXX on database maintenance this year, most of which was spent fighting fires that could have been easily avoided if you had just spent XXXX/4 and hired a DBA. We care about you as a customer, and we want you to save money, so we really recommend that you commit to using a DBA to perform periodic preventative maintenance.'
I would also recommend that you categorize any support requests as having a root cause in database maintenance vs. other causes. This will let you put a nice pie chart in front of the customer during their annual review (which they are going to pay you to perform). It is critical to manage this perception so they don't think your code is causing the problems. You might even go so far as to share these metrics (DB-related vs. non-DB-related issues) with them on a quarterly basis.
Sometimes people need to experience pain before they change. The key is to not be in between the hammer and their thumb as they learn the lesson, and hourly quoted work is one way of doing this.
As a side note, this sort of question is of great interest to a large number of developers. I'd say that this sort of thing could impact a programmer's quality of life more than any algorithm or library question ever could. Thanks for asking it!
No DBA on a system that size is a disaster waiting to happen. If they don't understand that, they are not qualified to run a database that size. I'd recommend that they talk to other companies with similar-sized databases and ask them about their DBAs: what they do for them, and whether they think they could survive without them.
Perhaps the link below from MSSQLTips could give you some good talking points. But people who aren't technical won't respond to a technical explanation of the necessity of a good DBA; you are likely going to have to work toward proving the cost of a bad DBA. Work out the worst-case scenarios and see how they feel about them. If you can make it look like a good financial move (and I think we all know it is), it will be an easy sell.
http://www.mssqltips.com/tip.asp?tip=1278

When developing a new system - should the db schema always be discussed with the stakeholders? [closed]

I'm several layers/levels above the people involved in the project I'm about to describe.
The general requirement is for a web based issue management system. The system is a small part of a much larger project.
The lead pm has a tech pm who is supposed to handle this portion of the project. The lead pm asked me if it's normal for the help information to not be in the context of where the help was requested. The lead pm was providing feedback about the site and wanted modal dialogs and such for error messages and wanted me to take a look. I'm looking at the system and I'm thinking...
a new app was developed in ColdFusion!?!?
the app has extremely poor data validation
the app data validation page navigates away from the data entry form
the app help page navigates away from the form
the db schema was not discussed between the developer and the pm
the db schema was not discussed because it does not exist
there is a menu page - i.e. once you go to a page, you have to go back to main menu and then go to the next page you want
the lead pm does not know what the dbms is...
there is a tech pm and she does not know what a dbms is...
the lead pm has wanted to fire the tech pm for a long time, but the tech pm is protected...
the lead pm suggested that the exact functionality desired exists in several proprietary projects (several of which are open source - bugtracker, bugzilla, etc.), but the tech pm and dev wouldn't listen.
I have two questions:
Do I
fire the dev?
fire the tech pm and the person protecting her?
fire the lead pm?
download and configure bugtracker/bugzilla for them and then fire all of them?
download and configure bugtracker/bugzilla for them and then go have a beer to forget my sorrows?
and isn't it SOP for the db schema to be discussed and rigorously thought through very early in the project?
EDIT:
I used to work with a wide variety of clients with disparate levels of technical knowledge (and intelligence). I always discussed the db schema with the stakeholder. If they didn't know what a schema was, I would teach them. If they didn't have the background to understand, I would still discuss the schema with them - even if they didn't realize we were talking about the schema. In most of the projects I've been directly involved in, the data is the most important part of the system. Thoroughly hashing out the schema/domain model has been critical in getting to a good understanding of the system and what things can be done and reported on. I have high regard for the opinions of the posters on SO. It's interesting to note that my approach is not the usual course.
BTW - the sad thing is that the project uses taxpayer funds, and the IT portion is a collaboration with a prestigious university... The dev and tech pm are long-time employees; they are not inexperienced. I find it especially sad when I know intelligent, hard-working people who are jobless while people like these are employed.
When I was younger, I would report this type of ineptitude up the chain and expect appropriate action. Now that I'm up the chain, I find myself not wanting to micro-manage other people's responsibilities.
My resolution was to have two beers and get back to my responsibilities...
Okay, the first thing, to answer your question: No, NO, a thousand times NO! The users are not people you should be discussing db schemata with; in general, you might as well discuss calculus with a cow. Even if they have the technical background, what about the next time the requirements change? Should they be involved in the schema update too?
More generally, this sounds like a case where the technical leads let the project get out of touch with the "customers", or stakeholders. If you're being asked to actually fix the problem, I'd suggest you build a GUI prototype of some sort, maybe even just a storyboard, and walk through that. Then you'll have an idea where things stand.
Extended: yes, it WOULD be normal to discuss the DB schema within the project team. I'd say you do need to think seriously about some, um, major counseling with the leads.
Extended more: I understand your point, but the thing is that the database schema is an implementation detail. We're so used to databases we let ourselves lose track of that, and end up with applications that, well, look like databases. But the database isn't what delivers customer value; it's whether the customer can do the things they want. If you tie the ways the customer sees the application to the DB schemata, then you tie them to one implementation; a change, such as denormalizing a table in order to make a more efficient system, becomes something you have to show the customer. Better to show them the observables, and keep these details to ourselves.
But I suspect we're having a terminology clash, too. I would have agreed with you on "domain model." If, by db schema, you mean only those tables and relations visible in the user's view of the system, the "use cases" if you will, then we'd be agreeing.
The DATA should be discussed with the stakeholders, absolutely yes. The DB SCHEMA should NOT be discussed with the stakeholders except under special circumstances, where the stakeholders are all "database savvy".
So how can you discuss the DATA without discussing the DB schema? This is the primary use that I've found for Entity-Relationship (ER) diagrams, and the ER model in general. A lot of database designers tend to treat ER as a watered-down version of relational data modeling (RDM). In my experience, it can be used much more profitably if you don't think of it that way.
What is one difference between ER and RDM? In RDM, a many-to-many relationship requires a junction box in the middle. This junction box holds foreign keys that link it to the participants in the many-to-many relationship.
In ER, when applied strictly, junction boxes are unnecessary for many-to-many relationships. You just indicate the relationship as a line, and mark the possibility of "many" at both ends of the line. In fact, ER diagrams don't need foreign keys at all. The concept of linkage by means of foreign keys can be left out of the discussion with most users.
Data normalization is utterly irrelevant to ER diagramming. A well built ER diagram will have very little harmful redundancy in it, but that's largely serendipity and not the result of careful planning.
The "entities" and "relationships" in a stakeholder oriented ER diagram should only include entities that are understood by the subject matter experts, and not include entities or relationships that are added in the course of logical database design.
Values to be held in a database and served up on demand can be connected to attributes, and attributes can in turn be connected to either entities or relationships among entities. In addition, attributes can be tied to domains, the set of possible values that each attribute can take on. Some values stored in databases, like foreign keys, should be left out of discussions with most stakeholders.
Stakeholders who understand the data generally have an intuitive grasp of these concepts, although the terms "entity", "relationship", "attribute", and "domain", may be unfamiliar to them. Stakeholders who do not understand the subject matter data require special treatment.
The beauty of ER models and diagrams is that they can be used to talk about data not only in databases, but also as the data appears in forms that users can see. If you have any stakeholders that don't understand forms and form fill out, my suggestion is that you try to keep them away from computers, if that's still possible.
It's possible to turn a well-built ER diagram into a moderately well-built relational schema by a fairly mechanical process (sketched below). A more creative design process might result in a "better" schema that's logically equivalent. A few technical stakeholders need to understand the relational schema, and not merely the ER diagram. Don't show the relational schema to people who don't need to know it.
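For illustration, here is a sketch of that mechanical process under the usual textbook rules (entity names and attributes are invented): each entity becomes a table, and each many-to-many relationship becomes the junction table that the ER diagram deliberately leaves implicit.

```python
# Toy ER model: entities with attributes (first attribute is the key),
# plus many-to-many relationships between entities.
entities = {
    "Customer": ["customer_id", "name"],
    "Product":  ["product_id", "description"],
}
relationships = [("Orders", "Customer", "Product")]   # many-to-many

def er_to_sql(entities, relationships):
    ddl = []
    for name, attrs in entities.items():
        cols = ", ".join(f"{a} TEXT" for a in attrs)
        ddl.append(f"CREATE TABLE {name} ({cols}, PRIMARY KEY ({attrs[0]}));")
    for rel, left, right in relationships:
        # The junction table with its foreign keys appears only here,
        # in the relational translation -- never in the ER discussion.
        lkey, rkey = entities[left][0], entities[right][0]
        ddl.append(
            f"CREATE TABLE {rel} ({lkey} TEXT REFERENCES {left}, "
            f"{rkey} TEXT REFERENCES {right}, PRIMARY KEY ({lkey}, {rkey}));"
        )
    return "\n".join(ddl)

print(er_to_sql(entities, relationships))
```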
Well, first you should probably review very carefully the relationship between the tech pm and her sponsor. I'm surprised you say the tech pm is protected when you later imply you can fire the protector. Either she is protected or she is not. If you can fire the protector, then she is NOT protected.
So it sounds like no one is protected and, worse, NO ONE is communicating. I'd recommend the following: call a meeting with the lead pm, the tech pm, and the dev. Once together, ask each in turn: "Without referencing anything except YOUR work (i.e. you can't blame anyone else for the duration of this exercise), tell me in 5 minutes or less why I should NOT fire you today."
I realize this is extreme advice, but you have described a HORRIBLE solution to a classic problem. Every aspect of this project and the resulting "code" sounds like a disaster. You probably should have had a greater hand in the oversight of this mess, but you didn't (for whatever reason). I realize that you should expect hired professionals at the PM level to do better than this.
Hence my recommendation for a SEVERE shake-up of the team. Once you put the fear of unemployment on the table (and I'd tell them that you are writing up the failure to communicate for each of them), REQUIRE them to post plans for immediate communication improvement PLUS detailed timelines for fixing the mess by the end of the week.
Then get off your own bum because you're now the LEAD-lead PM on this project.
If they shape up and pull off a comeback on this disaster, then slowly start increasing their responsibilities again. If not... there's always a door.
Cheers,
-R
the lead pm suggested that the exact functionality desired exists in several proprietary projects (several of which are open source - bugtracker, bugzilla, etc.), but the tech pm and dev wouldn't listen.
If this is true, tell the lead pm to be more assertive; then tell him/her to install bugzilla and be done with it. If the tech pm and dev weren't listening because of stubbornness, they need a little chat...
Either way, I'd say you have a problem with your organization... How many thousands of dollars were lost to a case of "not invented here"? However, given that it reached the point of implementation, there are problems further upstream than the development level...
As far as discussing the db schema with everybody, I'd say no. Everyone who can positively contribute should be involved after the application requirements have been gathered.
Wow, sounds like a disaster. Let me address your points in rough order:
First, people develop in languages they find comfortable. If someone is still comfortable in an older environment when much better alternatives exist, it is a sure sign that they have little appetite for skill acquisition.
Data validation prevents people from going too far down a path only to find it is a blind alley. Lack of validation means the developer isn't thinking about the user. Also, it is not something tacked on at the end...it simply doesn't work that way.
Web "dialogs" cannot be "modal" in the sense you are thinking. However, it is easy enough to pop up an additional window. Help on a page should almost always use a pop up window of this sort.
Data validation should NEVER navigate away from the page where data is entered - this is horrible UI design.
The DB schema is kind of the least of your problems. If the developer is responsible for delivering functionality and is clearly competent in data schema design, I wouldn't think it critical to discuss the nuances of the schema with the lead PM. It should be discussed among various code-level stakeholders and it must be capable of handling the requirements of the work. However, the important thing from the PM's perspective isn't the schema so much as the operational aspects. Of course, if you have no faith in the developer's ability to construct a good db schema, all bets are off.
If you seriously don't know what the dbms is, you may have a serious problem. Do you have a standard? If everyone else in the extended project is using MS SQL Server and this guy chose Oracle, how do you transfer expertise and staff into or out of this project? This is a sign of an organization out of control.
There are two reasons for ignoring alternative proprietary products. First, they may not truly meet your needs. Second, the tech PM and developer may simply be featherbedding or engaging in some nasty 'not invented here' justification for wasting your resources. The problem is that you aren't likely to have enough insight, at your level, to know the difference between the two.
With respect to firing the dev: is it possible to help him by sponsoring some additional training? If this person is otherwise a good employee and knows your business well, I'd be very hesitant to fire them when all that's needed is a push in the right direction.
The tech PM sounds like she really isn't doing her job. She is the logical person to point out the flaws I am writing about and pushing for improvement. The real question, vis a vis her position, is whether she can learn to be a better advocate for your organizational interests.
The lead PM sounds too passive as well. Comments made above regarding the tech PM apply here as well.
If bugtracker, etc. really work then it would make sense to go that route. However, you might want to be a bit more circumspect about firing people.
First off, I agree with Charlie Martin about the db schema.
Second,
It sounds like the developer on the project is very green; is this his/her first programming job? If so, I would only fire the dev if their resume claims otherwise.
I don't know how involved the lead/tech pms are expected to be in a project, but it sounds like the chain of responsibility is dev > tech pm > lead pm. If that is the case, then the tech pm completely dropped the ball. You may want to find out why the ball was dropped and fire/keep her based on that, but a botched job like this is reprimand time where I work.
Finally, imho, the "protection" stuff is b.s. - you need to reward and reprimand people based on their quality and value, not who their aunt is.
Good luck! Cheers!
Wow. I feel your pain.
Looks to me as if the first source of your problem is the tech pm who is "protected". Why is she protected, and by whom? I was once on a project where the CEO's secretary became first the business analyst and then (after he quit) the project manager, because they were having an affair. She didn't know what language we programmed in and thought requirements were a waste of time. Since she was protected by someone as high up as possible in the organization, the only real solution was to look elsewhere for employment.
You seem to think you can fire her and her protector, so the protector may be someone lower than you but above the lead pm: he couldn't do anything about it, but you can. Yes, you should fire the two of them.
The lead pm may or may not be salvageable, depending on who the protector was. He could have been between a rock and a hard place, where he knew what to do but, due to the nature of the relationship between the tech pm and her protector, was unable to exert any influence over her or the people who reported to her. I was in that position once, where two of my bosses were having an affair with one of my subordinates, and it creates all kinds of organizational havoc (which is why the protector must be fired as well as the tech pm). Give him the benefit of the doubt and discuss with him how he would handle things differently if the tech pm and her protector were out of the way. If you like what you hear, you can keep him, but organizationally you will need to step in and make it clear that this person is in charge and no one will be allowed to ignore him. Once a lead has lost authority, he can only get it back with strong backing from management.
I would also sit down with the lead and the developer and explain exactly what is unacceptable in the project as it currently stands. If the developer feels unable to take direction from the lead (assuming you decide to keep him) or is unable to adjust to a new way of doing business or cannot understand why the code as it stands is unacceptable, cut your losses and get rid of him as well. A new person is likely to work better for the lead if he is salvageable anyway because he won't have a history of ignoring him.
I wouldn't necessarily think that the db schema should always be shared with stakeholders. Most people wouldn't know what to do with that sort of information. If you're trying to make sure that the product fits the requirements, the requirements should be clearly laid out up front and verified throughout the development of the project.
If you're having problems with the dev, that's just par for the course. Someone more trustworthy should have been found. If you hired a poor coder, that was your mistake.
There are a few possible solutions:
Get a better coder. He'll hate working through all the bad code, but hopefully he'll slog through it until it's done. Hopefully you're willing to pay him good money.
Keep the coder and make him fix it all. Hire a new PM who can manage him better. That coder knows his code best, and it might take less time for him to improve it himself. In the long run you're better off not keeping a bad coder on the payroll, so let him go when you're done.
Suck it up, buy a beer for everyone involved, and start over with open source. You'll probably still need a tech PM to manage the software, and you'll have to forget about doing anything custom at that point. Perhaps a contractor could manage this.
Either way, you're going to lose some money. You should probably keep a closer eye on things next time.
I tend to think of it this way. The database schema is there to support the application's data storage requirements. What data the application needs to store will be determined by the end user's requirements. If you're not consulting your end user as to their requirements for the application you're obviously headed for trouble, but provided you have a good handle on their requirements (and likely future requirements) then database schema is a technical decision which can be made by the project team without direct input from the end user/client.
An end user is unlikely to understand the intricacies of tables, fields, normalization, etc, but they'll understand "the system needs to do xyz". Talk to the end users in a language they understand, and let your team make the appropriate technical decisions.
My big question is about the relationship between the lead pm and the tech pm's protector: did the lead pm have good reason to fear retaliation from the protector? It's entirely possible that he felt unable to do anything until the situation got bad enough that it was clearly important for people above the protector. In that case, he doesn't deserve any more harsh treatment.
The tech pm is apparently incompetent at her job, and her protector is more interested in favoring her than getting the work done. That suggests to me that they need to be dealt with in some fashion, at minimum with a talk about the importance of getting real work done, at maximum firing both of them.
The dev is likely hunkered down, trying to survive politically, given the climate you've outlined. I can't tell enough about the dev to give any advice.
Therefore, if your description and my amazing psychic powers have given me a clear and accurate picture:
Shield the lead pm from retaliation, and tell him to ditch all the crud and implement an off-the-shelf solution. (If he can't select one himself reliably, he shouldn't be lead pm.)
Discipline the tech pm and her protector. You really don't want to have people wrecking enterprise productivity that way.
The dev is the lead pm's responsibility. Leave it at that. Don't micromanage more than you have to. Have a couple of beers. Get back to your usual work.

Trialware/licensing strategies [closed]

I wrote a utility for photographers that I plan to sell online pretty cheap ($10). I'd like to allow the user to try the software out for a week or so before asking for a license. Since this is a personal project and the software is not very expensive, I don't think that purchasing the services of professional licensing providers would be worth it and I'm rolling my own.
Currently, the application checks for a registry key that contains an encrypted string that either specifies when the trial expires or that they have a valid license. If the key is not present, a trial period key is created.
So all you would need to do to get another week for free is delete the registry key. I don't think many users would do that, especially when the app is only $10, but I'm curious if there's a better way to do this that is not onerous to the legitimate user. I write web apps normally and haven't dealt with this stuff before.
The app is in .NET 2.0, if that matters.
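For what it's worth, the scheme the question describes is language-neutral; here is a minimal Python sketch of a tamper-evident trial token. The secret, token format, and storage are my assumptions, and since the secret ships with the app, this only deters casual registry editing; anyone who extracts the secret (or simply deletes the stored value, as the question notes) can reset the trial.

```python
import hashlib
import hmac
import time

SECRET = b"replace-with-app-specific-secret"   # hypothetical embedded secret

def make_trial_token(days=7):
    """Create a token encoding an expiry timestamp plus a MAC over it."""
    expires = str(int(time.time()) + days * 86400)
    mac = hmac.new(SECRET, expires.encode(), hashlib.sha256).hexdigest()
    return f"{expires}:{mac}"                  # store this in the registry

def trial_valid(token):
    """Reject tampered-with tokens and expired trials."""
    try:
        expires, mac = token.split(":")
    except ValueError:
        return False                           # malformed => tampered
    good = hmac.new(SECRET, expires.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, good) and time.time() < int(expires)

token = make_trial_token()
print(trial_valid(token))                      # True until the week is up
```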
EDIT: You can make your current licensing scheme considerably more difficult to crack by storing the registry information in the Local Security Authority (LSA). Most users will not be able to remove your key information from there. A search for LSA on MSDN should give you the information you need.
Opinions on licensing schemes vary with each individual, more among developers than among specific user groups (such as photographers). You should take a deep breath and try to see what your target user would accept, given the business need your application solves.
This is my personal opinion on the subject. There will be vocal individuals that disagree.
The answer to this depends greatly on how you expect your application to be used. If you expect the application to be used several times every day, you will benefit most from a very long trial period (several months), to create a lock-in situation. For this to work you will need a grace period where the software alerts the user that payment will be needed soon. Before the grace period, you will have greater success if the software is silent about the trial period.
Whether or not you choose to believe this quite bold statement is of course entirely up to you. But if you do, you should realize that the less often your application will be used, the shorter the trial period should be. It is also very important that payment is very quick and easy for the user (as little data entry and as few clicks as possible).
If you are very uncertain about the usage of the application, you should choose a very short trial period. You will, in my experience, achieve better results if the application is silent about the fact that it is in a trial period in this case.
Though effective for licensing purposes, "call home" features are regarded as a privacy threat by many people. Personally I disagree with the notion that this is in any way bad for a customer who is willing to pay for the software he/she is using. I therefore suggest implementing a licensing scheme where the application checks the license status (trial, paid) on a regular basis, and helps the user pay for the software when it's time. This might be overkill for a small utility application, though.
For very small, or even simple, utility applications, I argue that upfront payment without trial period is the most effective.
Regarding the security of the solution, you have to make it proportional to the development effort. In my line of work, security is very critical because there are partners and dealers involved, and because the investment made in development is very high. For a small utility application, it makes more sense to price it right and rely on the honest users that will pay for the software that address their business needs.
There's not much point to doing complicated protection schemes. Basically one of two things will happen:
Your app is not popular enough, and nobody cracks it.
Your app becomes popular, someone cracks it and releases it, then anybody with zero knowledge can simply download that crack if they want to cheat you.
In the case of #1, it's not worth putting a lot of effort into the scheme, because you might make one or two extra people buy your app. In the case of #2, it's not worth putting a lot of effort because someone will crack it anyway, and the effort will be wasted.
Basically my suggestion is just do something simple, like you already are, and that's just as effective. People who don't want to cheat / steal from you will pay up, people who want to cheat you will do it regardless.
If you are hosting your homepage on a server that you control, you could have the downloadable trial version of your software automatically recompile to a new binary every night. Each nightly build would replace a hardcoded datetime value in your program for when the software expires. That way the only way to "cheat" is to change the date on your computer, and most people won't do that because of the problems it would create.
Try the Shareware Starter Kit. It was developed by Microsoft and may have some other features you want.
http://msdn.microsoft.com/en-us/vs2005/aa718342.aspx
If you are planning to continue developing your software, you might consider the ransom model:
http://en.wikipedia.org/wiki/Street_Performer_Protocol
Essentially, you develop improvements to the software, and then ask for a certain amount of donations before you release them (without any DRM).
One way to do it that's easy for the user but not for you is to hard-code the expiry date and make new versions of the installer every now and then... :)
If I were you though, I wouldn't make it any more advanced than what you're already doing. Like you say it's only $10, and if someone really wants to crack your system they will do it no matter how complicated you make it.
You could do a slightly more advanced version of your scheme by requiring a net connection and letting a server generate the trial key. If you do something along the lines of sign(hash(unique_computer_id+when_to_expire)) and let the app verify, with a public key, that your server has signed the expiry date, it should require a "real" hack to bypass.
This way you can store the unique IDs server-side and refuse to generate an expiry date more than once or twice per machine. Not sure what to use as the unique ID, but there should be some way to get something useful from Windows.
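A sketch of that server-signed approach, using the third-party cryptography package (pip install cryptography) for an Ed25519 signature; the machine ID and key handling are assumptions. Only the public key ships with the app, so forging an expiry date requires the server's private key, and deleting registry keys no longer helps if the server refuses to re-issue for a machine ID it has already seen.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- server side ---
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()          # this is what ships in the app

def issue_trial(machine_id, expires):
    """Sign 'machine_id:expires'; the server records machine_id here."""
    message = f"{machine_id}:{expires}".encode()
    return message, private_key.sign(message)

# --- client side ---
def trial_valid(message, signature, machine_id, now):
    try:
        public_key.verify(signature, message)  # raises if forged/tampered
    except Exception:
        return False
    mid, expires = message.decode().rsplit(":", 1)
    return mid == machine_id and now < int(expires)

msg, sig = issue_trial("WIN-ABC123", expires=1_900_000_000)
print(trial_valid(msg, sig, "WIN-ABC123", now=1_700_000_000))   # True
```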
I am facing the very same problem with an application I'm selling for a very low price as well.
Besides obfuscating the app, I came up with a system that uses two keys in the registry, one of which is used to determine the time of installation, the other being the actual license key. The keys are named obscurely, and a missing key indicates tampering with the installation.
Of course deleting both keys and reinstalling the application will start the evaluation time again.
I figured it doesn't matter anyway, as someone who wants to crack the app will succeed in doing so, or find a crack by someone who succeeded in doing so.
So in the end I'm only achieving the goal of making it not TOO easy to crack the application, and that, I guess, will stop 80-90% of the customers from doing so. And after all: as the application is sold for a very low price, there's no justification for me to invest any more time in this issue than I already have.
Just be cool about the license. Explain up front that this is your passion and a child of your labor. Give people a chance to do the right thing. If someone wants to pirate it, it will happen eventually. I still remember my despair at seeing my books on BitTorrent, but it's something you just have to deal with. Don't cave to casual piracy (what you're doing now sounds great), but don't cripple the thing beyond that.
I still believe that there are enough honest people out there to make a for-profit coding endeavor worth while.
Don't base the evaluation on "days since install"; instead count the number of days used, or the number of times run, or something similar. People tend to download shareware, run it once or twice, and then forget it for a few weeks until they need it again. By then the trial may have expired, so they've only had a few tries to get hooked on using your app, even though they've had it installed for a while. Counting activations/days used instead lets them get into the habit of using your app for a task, and also makes for a stronger sell (i.e. "you've used this app 30 times..."). A sketch of that idea is below.
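A minimal sketch of such a days-used counter; the file storage stands in for the registry or LSA, and all names are invented.

```python
import json
import time
from pathlib import Path

# Hypothetical uses-based trial: the counter only advances on days the app
# is actually run, so a user who installs it and comes back three weeks
# later still gets their full trial.

STATE_FILE = Path("trial_state.json")   # registry/LSA in a real app
MAX_DAYS_USED = 14

def record_use_and_check():
    state = (json.loads(STATE_FILE.read_text())
             if STATE_FILE.exists() else {"days": []})
    today = time.strftime("%Y-%m-%d")
    if today not in state["days"]:
        state["days"].append(today)
        STATE_FILE.write_text(json.dumps(state))
    remaining = MAX_DAYS_USED - len(state["days"])
    print(f"You have used this app on {len(state['days'])} day(s); "
          f"{remaining} trial day(s) remain.")
    return remaining > 0

record_use_and_check()
```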
Even better, limiting the features works better than timing out. For example, perhaps your photography app could limit the user to 1 megapixel images, but let them use it for as long as they want.
Also, consider pricing your app at $20 (or $19.95). Unless there's already a micropayment setup in place (like the iPhone store or Xbox Live or something), people tend to have an aversion to buying things online below a certain price point (around $20, depending on the type of app), and people subconsciously assume that if something is inexpensive, it must not be very good. You can actually raise your conversion rate with a higher price (up to a point, of course).
In these sorts of circumstances, I don't really think it matters what you do. If you have some kind of protection it will stop 90% of your users. The other 10%: if they don't want to pay for your software, they'll pretty much find a way around your protection no matter what you do.
If you want something a little less obvious, you can put a file in System32 with a name that sounds like a system file, and have the application check for its existence on launch. That can be a little harder to track down.

Resources