Code promotion: Enforcing the rules

So here is our problem:
We have a small team of developers, each with their own way of doing things. I am trying to formalize a process in which we are required to promote our code in the following order:
Local sandbox > Dev > UAT > Staging > Live
Developers develop and test as they go on their own sandboxes. Dev is its own box that we use for continuous integration. UAT is another site in IIS on the dev box, which uses our dev database. We then promote to staging, which is a site in IIS on the live box that uses live data (just like live, hence staging). Then, finally, we promote to live.
Here are a few of my questions:
1.) Does this seem to be best practice? If not, what needs to be done differently?
2.) How do I enforce the rules on the developers? Developers often skip steps in order to save time... this should not be tolerated, and it would be great if it could be physically enforced.
3.) How do I enforce these rules on the business group? The business group just wants to get features out FAST. Do we promote only on certain days?

Sounds like a good setup to me... we don't have as many areas where I work. We have DEV > QA > Production.
1) I'm not exactly sure what "best practice" is, but your setup seems like a very good practice to me. My only concern would be the sandbox environment. Are there assurances that the developers' code is backed up daily, just in case their machines crash hard? I'd hate to lose good dev code.
2) We have a "release coordinator" here who enforces access to Sourcesafe and TFS and also controls access to the QA environment so there are only certain times when they are available.
3) The same goes for the business testers, except their authority comes through the project managers. The PMs have a document that gets filled out per project, and the testing teams are specified in it.
We only promote on certain days (every other Thursday). However, we do understand that there can be emergencies, and we have done production releases on off days when the need calls for it, but those emergencies are documented after the fact and analyzed to see what went wrong and where we can make improvements.
I'd say as long as your environment is controlled and documented you should be fine. It would be good to make sure everything is backed up in the sandbox area and that a small group of people controls access to the other environments. I would also recommend you keep good documentation on the comings and goings in the "secured" environments; that way, if something goes wrong, you can backtrack through the logs and see what might have happened or who might have done it, not necessarily to point fingers but to go back and ask "what exactly did you upload/change?" so you can see what might have caused the issue.
Best of luck to you,

Scott already answered pretty well, so I won't repeat his logic. The one point he seems to have missed was:
How do I enforce these rules to the business group?
The problem is, you cannot enforce anything on the business group. Only their managers can.
What you (as IT) can do is meet with the business side managers, and lay out the cost/benefit analysis.
The worst-case bug
Likelihood of that bug without a proper process
Cost to the company of such a bug
Ideally, the bug would be something that actually happened in the past rather than a theoretical thing :)
Then compare that with the relatively insignificant cost (just make some estimates, hopefully with the business users' input) of having a proper process and the associated slowdowns.
Basically, you need their buy-in: convince them it's in their interest not to cut corners.

We have a similar setup at my shop. We enforce this by using different physical machines and by controlling who has access via passwords, etc. I develop locally on my own VPC, then check the code in. That is the end of it as far as I am concerned. Another person has access to the dev box where he runs builds as needed; he does not have access to the "live" box, another person does. That person has access to both the "dev" box and the "live" boxes, so he can do an emergency deploy etc. if needed. Once a build has gone to dev and been tested, then, and only then, is a "live" build done.


Rolling back changes in Salesforce

I am just getting started with doing moderate web development work in Salesforce for my company, and I'm looking for some feedback/insight into the deployment process. Right now it's looking like we will be doing a fair amount of custom work using Visualforce and Apex. What I am wondering is: if I screw something up in my production org (data or metadata), is there a way to roll back to a snapshot or previously released version of my org that still works? With the mediocre development tools, I am worried that when bugs do arise I won't have a good, fast way of addressing the situation.
I was reading about different ways to set up source control here:
How can multiple developers efficiently work on one force.com application?
But I haven't found anyone walking through the process of essentially reverting a change set or changing branches. Are the protections built into salesforce good enough that I just won't have to worry about bugs in production? Should I just not worry about having to revert a change set?
One of the ways this is handled is through the proper use of sandbox orgs associated with your production org. You can always keep a sandbox org that has the "blessed" instances of everything while you use another sandbox org to do major development destined for deployment to production. In the event that something seriously wrong occurs between the new development in your dev sandbox being deployed to production, you can roll-forward from your blessed sandbox to revert to what was completely working previously.
That being said, you're on to something when you ask about not worrying about bugs in production. Not that they won't happen, because they will, but rather that you'll soon begin to get a different sense of what broken means. Change sets are only one way to get changes from one org to another, and a rather recent addition to the platform. They have some limitations, like not moving custom setting data, but generally work really well.
But it's true that when you've got good unit tests in place, coupled with all of the rest of the imposed referential integrity checks, it's really not that common to "break the build" so to speak, and wish to revert to some global snapshot of everything at a different point in time. More frequently, in my experience, you will revert isolated units back to previous versions and can do this with sandboxes or source control by pushing an earlier version forward until a fix is found.
Adam
I've been researching an app on the AppExchange that at least looks like it will give me what I want. The product is Snapshot by Dreamfactory. Interestingly, the salespeople I talked to at Dreamfactory told me that Salesforce uses their app internally to manage changes. I find it kind of unfortunate that this capability isn't included with my license, but... here are the specifics of what I found that will be helpful for my specific question:
The ability to take a snapshot of your org's metadata and copy or deploy it to another org. This will allow me to deploy/roll back changes.
The ability to diff two snapshots (from different orgs) and see the details of what changed. This will help me track down the cause of problems when they do arise.

force.com ISV development, deployment, support

We're an ISV that's completed our first app on force.com. It's an xRM-like app with extended workflow to build out complex campaigns (not simple marketing-like campaigns) and integration with on-premise software. The platform brings enormous value, and at the same time some challenges. Interested in other ISV experiences around the following:
Application upgrade process. Customers expect cloud app upgrades to "just happen". The reality is that there are inevitably manual pre- and post-upgrade steps that can fill many pages. We don't want to burden the customer with this, and at the same time, while we're happy to do the upgrade work for the customer, we don't want access to customer data and the need for elaborate security assurances that comes along with that access. A conundrum.
Development environment. Agile/scrum development relies on achieving full test automation and continuous integration, yet full automation beyond unit test seems difficult or impossible.
Background processing. Constraints on scheduled jobs, callouts, and futures, and issues with transaction management present challenges to traditional software development.
Curious what other ISVs have found.
Thanks!
I am now working at my second Force.com ISV and so have a fair amount of experience in releasing products on the platform (I have seen 4 separate product releases, one of which included 3 version releases and one which included another version update).
If possible, you should try to remove any pre/post-install steps that the user is required to do. It sounds tough, and it is, but it's the biggest reason people don't adopt a product. The idea is that it is quick and easy to install, one click, and any extra effort detracts from the user experience. Ensuring your system is data independent is a good way of getting around the data security issues you referred to, and obviously you can offer consultancy to do the upgrade work. A sensible idea might be to have a list of all the objects and fields that are affected by your product's installation and then to do a check of the customer org before installing. I would also say that installing in a sandbox and doing a couple of weeks of user testing can very effectively highlight any problems you may have in the future.
It is not true that full test automation beyond unit tests cannot be done; it is actually very simple. The key is having the necessary framework set up. So you would have a central version control system where your code is stored (a key agile part). Then you create a script so that when code is committed, it runs an install on an SFDC org, running all tests and reporting back. You can then get this script to run a set of Apex classes or upload a bunch of CSV files to put data in, with either further, fuller Apex tests to exercise functionality or Selenium running to do a set of tests. You can then also use this test data and script for knocking out demo environments for the sales guys.
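To make the data-loading step concrete, here is a minimal, hypothetical sketch of the kind of Apex class a CI script could invoke after deploying to a test org (for example via the API's executeAnonymous call); the class name, object choices, and record counts are just placeholders, not anything prescribed above:

    // Hypothetical data-seeding class a CI script could call after a deploy to a test org
    public class CiDemoDataSeeder {
        public static void seed() {
            List<Account> accounts = new List<Account>();
            for (Integer i = 0; i < 50; i++) {
                accounts.add(new Account(Name = 'CI Demo Account ' + i));
            }
            insert accounts;

            // Give every demo account a child contact so relationship-driven tests have data
            List<Contact> contacts = new List<Contact>();
            for (Account a : accounts) {
                contacts.add(new Contact(LastName = 'Demo Contact', AccountId = a.Id));
            }
            insert contacts;
        }
    }

The CI job would then run the org's unit tests (or Selenium suite) against this seeded data and report the results back, as described above.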
The governor and background-processing limits are a bit tight, but they keep being increased. Maybe you should integrate with Heroku or similar to do some larger external processing? I will say, though, that I think it improves programming abilities in general, making you think about what it is you're doing and the best way to do it. This then leads to a more pleasant end-user experience. Batch Apex jobs are a good way of doing this processing, and you can use the AsyncApexJob object to report the status of a run back to users.
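As a rough illustration of that last point (the batch interface and AsyncApexJob are standard Apex; the class name and query are illustrative only), a batch job can report its own outcome from the finish method:

    global class ArchiveOldRecordsBatch implements Database.Batchable<sObject> {
        global Database.QueryLocator start(Database.BatchableContext bc) {
            return Database.getQueryLocator(
                'SELECT Id FROM Account WHERE CreatedDate < LAST_N_DAYS:365');
        }
        global void execute(Database.BatchableContext bc, List<sObject> scope) {
            // process one chunk of records here
        }
        global void finish(Database.BatchableContext bc) {
            // AsyncApexJob holds the status of the run, which can be surfaced to users
            AsyncApexJob job = [SELECT Status, NumberOfErrors, JobItemsProcessed, TotalJobItems
                                FROM AsyncApexJob WHERE Id = :bc.getJobId()];
            System.debug('Batch finished: ' + job.Status + ', errors: ' + job.NumberOfErrors);
        }
    }
    // Kicked off with: Database.executeBatch(new ArchiveOldRecordsBatch(), 200);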
Hope that helps and gives you a different perspective!
Paul

Using virtual machines for development

I've recently been given the role of managing our development environment, which includes:
Managing the version control system (Subversion), in which we typically have one major branch which is released to production every 6 months, a maintenance branch which is released every 2 months to fix non-major bugs found by users, and a couple of branches related to bugs which just can't wait for the maintenance release.
Managing our databases so that we have a development database for each branch of the code
We've not long moved over to using the version control system and have had the following issues:
Developers who work on a number of branches concurrently can quite often end up developing against the wrong database (we have around 15 developers)
A lack of a decent strategy for managing the release of branches into production and the propagation back into other branches
A lack of a decent strategy for managing the databases associated with each branch (i.e. should we keep a script which is aligned with the production environment and then a script to bring each database user in line with the needs of the branch)
I had thought of using a Virtual Machine for each branch of the code (i.e. A VM containing an Oracle Express database user, a Coldfusion Administrator with the correct setup for things like data sources, and development tools like the IDE and Tortoise).
I was looking for any suggestions anybody might have to help with any of these issues as I'm finding it really difficult to manage the process. I understand that no 2 companies have the exact same setup but I'd welcome any help.
I think the best solution for you may be to start using continuous integration, applied to your product life-cycle strategy.
You can read about it over the web:
Continuous integration
Great open-source framework for continuous integration!
I hope this helps you, but your question is quite hard to answer because there are a lot of parameters, which always vary from company to company. You should consider hiring a consultant to help you; he or she will have to come to your company and help you decide and implement.
I would start by asking each of the developers why this kind of mistake happens. If a developer has recently made the mistake, then get them to explain how they did it and what might help them in future. Also talk to developers that have not recently made a mistake.
I'm assuming that you have a server with Oracle and all the different flavors of the db running on it using different port numbers. In that case you would create a new db instance to go with each branch and the problem is how to help the developer set up a context before working on the branch.
Tortoise SVN is a nice tool, but perhaps this is a situation where it would be better to have some kind of small app that does the checkout, and remove Tortoise from the machines. The small app could keep a window floating on screen showing the currently active branch, and it could handle checkout and checkin, as well as making sure the right port number is used.

Disadvantages of the Force.com platform [closed]

We're currently looking at using the Force.com platform as our development platform and the sales guys and the force.com website are full of reasons why it's the best platform in the world. What I'm looking for, though, is some real disadvantages to using such a platform.
Here are 10 to get you started.
Apex is a proprietary language. Other than the force.com Eclipse plugin, there's little to no tooling available such as refactoring, code analysis, etc.
Apex was modeled on Java 5, which is considered to be lagging behind other languages, and without tooling (see #1), can be quite cumbersome.
Deployment is still fairly manual with lots of gotchas and manual steps. This situation is slowly improving over time, but you'll be disappointed if you're used to having automated deployments.
Apex lacks packages/namespaces. All of your classes, interfaces, etc. live in one folder on the server. This makes code much less organized and class/interface names necessarily long to avoid name clashes and to provide context. This is one of my biggest complaints, and I would not freely choose to build on force.com for this reason alone.
The "force.com IDE", aka force.com eclipse plugin, is incredibly slow. Saving any file, whether it be a class file, text file, etc., usually takes at least 5 seconds and sometimes up to 30 seconds depending on how many objects, data types, class files, etc. are in your org. Saving is also a blocking action, requiring not only compilation, but a full sync of your local project with the server. Orders of magnitude slower than Java or .NET.
The online developer community does not seem very healthy. I've noticed lots of forum posts go unanswered or unsolved. I think this may have something to do with the forum software salesforce.com uses, which seems to suck pretty hard.
The data access DSL in Apex leaves a lot to be desired. It's not even remotely competitive with the likes of (N)Hibernate, JPA, etc.
Developing an app on Apex/Visualforce is an exercise in governor-limits engineering. Easily half of programmer time is spent trying to optimize to avoid the numerous governor limits and other gotchas like Visualforce view state limits (a bulkified-trigger sketch follows after this list). It could be argued that if you write efficient code to begin with you won't have this problem, which is true to an extent. However, there are many times when you have valid reasons to make more than x queries in a session, or loop through more than x records, etc.
The save->compile->run cycle is extremely slow, esp. when it involves zipping and uploading the entire static resource bundle just to do something like test a minor CSS or javascript change.
In general, the pain of a young, fledgling platform without the benefits of it being open source. You have no way to validate and/or fix bugs in the platform. They say to post it to their IdeaExchange. Yeah, good luck with that.
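To make the governor-limits point above concrete, here is the standard "bulkification" pattern, with illustrative object and field choices (nothing here is specific to the answer above): one query serves the whole trigger batch instead of one query per record, which is the difference between staying under the SOQL limit and blowing through it.

    trigger ContactAccountCheck on Contact (before insert) {
        // Collect parent Ids first so only one SOQL query is needed for the whole batch
        Set<Id> accountIds = new Set<Id>();
        for (Contact c : Trigger.new) {
            if (c.AccountId != null) accountIds.add(c.AccountId);
        }
        Map<Id, Account> parents = new Map<Id, Account>(
            [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]);
        for (Contact c : Trigger.new) {
            Account parent = parents.get(c.AccountId);
            if (parent != null && parent.Industry == null) {
                c.addError('Set an Industry on the parent account before adding contacts.');
            }
        }
    }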
Disclaimers/Disclosures: There are lots of benefits to a hosted platform such as force.com. Salesforce does regularly enhance the platform. There are plenty of things about it I like. I make money building on force.com.
I see you've gotten some answers, but I would like to reiterate how much time is wasted getting around the various governor limits on the platform. As much as I like the platform on certain levels, I would very strongly, highly, emphatically recommend against it as a general application development platform. It's great as a super configurable and extensible CRM application if that's what you want. While their marketing is exceptional at pushing the idea of Force.com as a general development platform, it's not even remotely close yet.
The efficiency of having a stable platform and avoiding big performance and stability problems is easily wasted in trying to code around the limits that people refer to. There are so many limits to the platform, it becomes completely maddening. These limits are not high-end limits you'll hit once you have a lot of users, you'll hit them almost right away.
While there are usually techniques to get around them, it's very hard to figure out strategies for avoiding them while you're also trying to develop the business logic of your actual application.
To give you a simple sense of how developer un-friendly the environment is, take the "lack of debugging environment" referred to above. It's worse than that. You can only see up to 20 of the most recent requests to the server in the debug logs. So, as you're developing inside the application you have to create a "New" debug request, select your name, hit "Save", switch back to your app, refresh the page, click back to your debug tab, try to find the request that will house your debug log, hit "find" to search for the text you're looking for. It's like ten clicks to look at a debug output. While it may seem trivial, it's just an example of how little care and consideration has been given to the developer's experience.
Everything about the development platform is a grafted-on afterthought. It's remarkable for what it is, but a total PITA for the most part. If you don't know exactly what you are doing (as in you're certified and have a very intimate understanding of Apex), it will easily take you upwards of 10-20x the amount of time that it would in another environment to do something that seems like it would be ridiculously simple, if you can even succeed at all.
The governor limits are indeed that bad. You have a combination of various limits (database queries, rows returned, "script statements", future calls, callouts, etc.) and you have to know exactly what you are doing to avoid these. For example, if you have a calculated rollup "formula" field on an object and you have a trigger on a child object, it will execute the parent object triggers and count those against your limits. Things like that aren't obvious until you've gone through the painful process of trying and failing.
You'll try one thing to avoid one limit, and hit another in a never-ending game of "whack-a-limit". In the process you'll have to drastically re-architect your entire app and approach, as well as rewrite all of your test code. You must have 75% test code coverage to deploy into production, which is actually a very good thing, but combined with all of the other limits, it's very burdensome. You'll actually hit governor limits in your test code that wouldn't come up in normal user scenarios, but that will prevent you from achieving the coverage.
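For context on that 75% requirement, this is roughly what such a test looks like (class, object, and field choices are illustrative, not from the answer above); Test.startTest() hands the code under test a fresh set of most per-request governor limits, which is one of the few escape hatches for the test-only limit problems described here:

    @isTest
    private class BulkContactInsertTest {
        static testMethod void insertsContactsInBulk() {
            Account a = new Account(Name = 'Test Account', Industry = 'Technology');
            insert a;
            List<Contact> contacts = new List<Contact>();
            for (Integer i = 0; i < 200; i++) {
                contacts.add(new Contact(LastName = 'Test ' + i, AccountId = a.Id));
            }
            Test.startTest();   // fresh governor limits for the code under test
            insert contacts;
            Test.stopTest();
            System.assertEquals(200, [SELECT COUNT() FROM Contact WHERE AccountId = :a.Id]);
        }
    }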
That is not to mention a whole host of other issues. Packaging isn't what you expect: you can't package up your app and deliver it to users without significant user intervention and configuration on the part of the administrator of the org. The AppExchange is a total joke, and they've even started charging 5K just to get your app listed. Importing with the data loader sucks, especially if you have any triggers.
You can't export all of your data in one step in a way that includes your relationships, so that it can easily be re-imported into another org (for example a dev org) in a single step. You can only refresh a sandbox once a month from production, no exceptions, and you can't include your data in a refresh by default unless you have called your account executive to get that feature unlocked. You can't mass delete data in custom objects. You can't change your package names.
Certain things can take numerous days to complete after you have requested them, such as a data backup before you want to deploy an app, with no progress report along the way and not much sense of when exactly the export occurred. And because related records have to stay in sync, the lack of any kind of "transaction" that can export numerous objects in a single step creates serious data integrity issues. There are probably some commercial tools to facilitate some of this, but they are not within reach of normal developers who may not have a huge budget.
Everything else the other people said here is true. It can take anywhere from five seconds to a minute sometimes to save a file.
I don't mean to be so negative because the platform is very cool in some ways and they're trying to do things in a multi-tenant environment that no one else is doing. It's a very innovative environment and powerful on some levels (I actually like VisualForce a lot), but give it another year or two. They're partnering with VMware, maybe that will lead to giving developers a bit more of a playpen rather than a jail cell to work in.
Here are a few things I can give you after spending a fair bit of time developing on the platform in the last fortnight or so:
There's no RESTful API. They have a SOAP-based API that you can call, but there is no way of making true RESTful calls.
There's no simple way to take their SObjects and convert them to JSON objects.
The Visualforce pages are OK until you want to customize them, and then it's a whole world of pain.
Visualforce pages need to be bound to SObjects, otherwise there's no way to get the standard input fields like the date picker or select list to work (see the controller sketch after this list).
The eclipse plugin is ok if you want to work by yourself, but if you want to work in a large team with the eclipse plugin forget it. It doesn't handle synchronizing to and from the server, it crashes and it isn't really helpful at all.
THERE IS NO DEBUGGER! If you want to debug, it's literally done with System.debug statements. This is probably the biggest problem I've found.
Their "MVC" model isn't really MVC. It's a lot closer to ASP.NET Web Forms. Your views are tightly coupled to not only the models but the controllers as well.
Storing a large number of documents is not feasible. We need to store over 100 GB of documents and we were quoted some ridiculous figure. We've decided to implement our document storage on Amazon's S3 infrastructure.
Even though the language is Java-based, it's not Java. You can't import any external packages or libraries. Also, the base libraries that are available are severely limited, so we've found ourselves implementing a bunch of stuff externally and then exposing those bits as services that are called by force.com.
You can call external SOAP- or REST-based services, but the message body is limited to 100 KB, so it's very restrictive in what you can call.
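A small, hypothetical sketch of the SObject-binding point above: the standard widgets (date picker, picklist) come from <apex:inputField>, and <apex:inputField> in turn has to be bound to a real SObject field exposed by the controller, along these lines (the class name and the choice of Opportunity are illustrative; page markup is only hinted at in the comments):

    public with sharing class QuoteEntryController {
        // Bound on the page as <apex:inputField value="{!record.CloseDate}"/>, etc.
        public Opportunity record { get; set; }

        public QuoteEntryController() {
            record = new Opportunity(StageName = 'Prospecting');
        }

        public PageReference save() {
            insert record;   // required fields are filled in by the user on the page
            return null;
        }
    }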
In all honesty, whilst there are potential benefits to developing on something like the force.com platform, for me, you couldn't use the force.com platform for true enterprise-level apps. At best you could write some basic CRUD-style applications, but once you move into anything remotely complicated I'd be avoiding it like the plague.
Wow- there's a lot here that I didn't even know were limitations - after working on the platform for a few years.
But just to add some other things...
The reason you don't have a line-by-line debugger is precisely because it's a multi-tenant platform. At least that's what SFDC says - it seems like in this age of thread-rich programming, that isn't much of an excuse, but that's apparently the reason. If you have to write code, you have "System.debug(String)" as your debugger - I remember having more sophisticated server debugging tools in Java 1.2 about 12 years ago.
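For anyone who hasn't seen it, this is literally what "debugging" looks like on the platform: anonymous Apex (or any class) peppered with System.debug calls whose output you then dig out of the debug log. The query and variable names below are just for illustration.

    Decimal total = 0;
    for (Opportunity o : [SELECT Id, Amount FROM Opportunity WHERE IsClosed = false]) {
        System.debug('Inspecting opportunity ' + o.Id + ', amount = ' + o.Amount);
        if (o.Amount != null) total += o.Amount;
    }
    System.debug(LoggingLevel.INFO, 'Open pipeline total: ' + total);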
Another thing I really hate about the system is version control. The Spring framework is not used for what Spring is usually used for - it's really more of a configuration tool in SFDC rather than version control. SFDC provides ZERO version control.
You can find yourself stuck for days doing something that should seem ridiculously easy, like, say, scheduling an SFDC report to export to a CSV file and email it to a list of recipients... About the easiest way to do that is to create a custom object with a custom field, a workflow rule, and a Visualforce email template... and then for code you need to write a Visualforce component that streams the report data to the Visualforce email template as an attachment, plus anonymous Apex code to schedule a field update of the custom object... For SFDC developers, this is almost a daily task: trying to put about five different technologies together to do tasks that seem so simple. And this can cause management headaches and tensions too. Typically, you'd find this out after getting a suggestion in the user community to do something that doesn't work (like someone already said), and then trying many things that, after you've developed them, you find just don't work for some odd-ball reason, like "you can't schedule a Visualforce page", or "you can't call getContent from a schedulable context", or some other arcane reason.
There are so many, many maddening little gotchas on the SFDC platform that, once you know WHY they're there, they make sense... but they're still very bad limitations that keep you from doing what you need to do. Here are some of mine:
You can't get record owner information "out of the box" on pretty much any kind of record; you have to write a trigger that links the owner to the record you're inserting at create time (see the trigger sketch after this list). Why? Short answer: because an owner can be either a "person" or a "queue", and the two are drastically different entities... Makes sense, but it can turn a project literally upside down.
Maddening security model. Example: "Manage Public Reports" permission is vastly different from "Create and Customize Reports" and that basically goes for everything on the platform... especially folders of any kind.
As mentioned, support is basically non-existent. If you are an extremely self-sufficient individual, or have a lot of SFDC resources, or have a lot of time and/or a very forgiving manager, or are in charge of a SFDC system that's working fine, you're in pretty good shape. If you are not in any of these positions, you can find yourself in deep trouble.
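Here is a hedged sketch of the owner workaround from the first gotcha above. Owner_Name__c is a hypothetical custom text field (not something mentioned in the answer), and the null check is there precisely because the owner can be a queue rather than a user:

    trigger StampOwnerName on Case (before insert, before update) {
        Set<Id> ownerIds = new Set<Id>();
        for (Case c : Trigger.new) {
            ownerIds.add(c.OwnerId);
        }
        // Only user owners resolve here; queue owners are deliberately left null
        Map<Id, User> owners = new Map<Id, User>(
            [SELECT Id, Name FROM User WHERE Id IN :ownerIds]);
        for (Case c : Trigger.new) {
            User u = owners.get(c.OwnerId);
            c.Owner_Name__c = (u == null) ? null : u.Name;   // Owner_Name__c is a hypothetical custom field
        }
    }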
SFDC is a very seductive business proposition... no equipment footprint, pretty good security, fixed price, no infrastructure, AND you get web-based CRM with batchable and schedulable processing... But as the other posters said, it is really quite a ramp-up in development learning, and if you go with consulting, I think the lowest price I've seen was $200/hour.
Salesforce tends to integrate with other things years after some technologies become commonplace - JSON and jQuery come to mind... and if you have other common infrastructure that you want to integrate with, like JIRA, expect to pay a lot extra, and those integrations can be quite buggy.
And as one of the other posters mentioned, you are constantly fighting governor limits that can just drive you nuts... an attachment can NOT be > 5 MB. Period. And sometimes < 3 MB (if base64 encoded). Ten HTTP callouts in a class. Period. There are dozens of published governor limits, and many unpublished ones which you will undoubtedly find and which will make you want to run out of your office screaming.
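One of the few defensive tools you do get is the Limits class, which reports current consumption against each governor limit. A sketch of how callout-heavy code typically guards itself (the endpoint is illustrative, and the deferral strategy is left to the reader):

    // Check remaining callout headroom before making another HTTP request
    if (Limits.getCallouts() < Limits.getLimitCallouts()) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/api/ping');   // illustrative endpoint
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        System.debug('Callout status: ' + res.getStatusCode());
    } else {
        System.debug('Callout limit reached; defer remaining work to a batch or future job.');
    }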
I really, REALLY like the platform, but trust me - it can be one really cruel mistress.
But in fairness to SFDC, I'd say this: the biggest problem I find with the platform is not the platform itself, but the gargantuan expectations of almost anyone who has seen the platform but hasn't developed on it... and those people tend to be in positions of great authority in business organizations: marketing, sales, management, etc. Huge disconnects occur and heads roll, or are threatened to roll, daily - all because there's this great platform out there with weird gotchas and thousands of people struggling daily to get their heads around why things should just work when they just don't and won't.
EDIT:
Just to add to lomaxx's comments about the MVC: in SFDC terminology, this is closely related to what's known as the "viewstate" - and it can be really buggy, in that what is on the VF page is not what is in the controller class for the page. So, you have to go through weird gyrations to sync what's on the page with what the controller is going to write to SF when you click your "save" button (or make your HTTP callout or whatever)... man, it's annoying.
I think other people have covered the disadvantages in more depth but to me, it doesn't seem to use the MVC paradigm or support much in the way of code reuse at all. To do anything beyond simple applications is an exercise in frustration compared to developing an application using something like ASP.Net MVC.
Furthermore, the tools, the data layer and the frustration of trying to refactor code or rename fields during the development process doesn't help.
I think as a CMS it's pretty cool, but as a platform for non-CMS applications, it doesn't make sense to me.
The security model is also very very restrictive... but this isn't the worst part. You can't currently assert whether a user has the ability to perform a particular action.
You can check to see what their role is, but you can't check if that role has permissions to perform the current action.
Even worse is the response from tech support to "try the action and if there's an exception, catch it"
Considering Force.com is a "cloud" platform, its ability to act as a client to an external WSDL-defined service is pretty underwhelming. See http://force201.wordpress.com/2010/05/20/when-generate-from-wsdl-fails-hand-coding-web-service-calls/ for what you might end up having to do.
To all of the above, I am curious how the release of VMforce, allowing Java programmers to write code for Force.com, changes the disadvantages above.
http://www.zdnet.com/blog/saas/vmforcecom-redefines-the-paas-landscape/1071
I guess they are trying to address these issues. At Dreamforce they mentioned they were trying to drop the governor limits to only 4. I'm not sure what the details are. They have a REST API in early access, and they bought Heroku, which is Ruby development in the cloud. They split out the database with Database.com, so you can do your web development on the platform and your DB calls against Database.com.
I guess they are trying to make it as agnostic as possible. But right now these are all announcements and early access, so, as their Safe Harbor statements say, don't purchase based on what they say, only on what they currently have.

How do I convince someone they need to upsize from MS Access to SQL Server or similar

I am having a real problem at work with a highly ingrained developer obsessed with MS Access. Users moan about random crashes, locking errors, freezes, and the application slowing down (especially in 2007), but seem very resistant to moving it. Most of the time they blame the computer and can't be convinced that the problem is an MDB sat on a network drive, and nothing to do with the hardware sat in front of them, which is brand new.
There is a front-end VB program hanging off it, but I don't think it would take more than a couple of weeks to adjust; in fact I would probably re-write it, as it has years of messy code from a previous developer.
What are my best arguments to convince them we need to move it?
Does anyone else have similar problems with developers stuck in their ways?
How about the random crashes, locking errors, freezes, and slowdowns?
A quick search on the web finds some useful materials:
Best Practices When Using Microsoft Office Access 2003 in a Multi-user Environment - if the changes here can't be implemented, or would effectively take a rewrite, then that is good ammunition for doing it right.
SQL Server vs MS Access - pay special attention to feature limits. E.g., you can only have 32,000 objects in an Access DB. Caveat: though it says 255 concurrent users, and that is probably a technical limitation, the practical limitation is really MUCH lower.
It's hard to convince people who are not willing to learn and are not open to new ideas. You can go on about speed issues, concurrency issues, security problems... but ultimately, some people will just never listen. Go over their heads. Rewrite it in tools from this decade and show them up. Refuse to be involved with the project any further. I don't know what the political situation is, but technically, MS Access is wrong for what you are doing, from what you've described.
Come in on a weekend, copy the database to SQL Server, change the app's connection strings to SQL Server, retest the application, then uninstall MS Access... everywhere.
Then don't say anything about it; let him think that the problems "fixed themselves" and that the users are still using MS Access.
To me it depends on how many concurrent users you have and how big the database is. If you have more than 5 concurrent users then you should be thinking about a database server. The network traffic starts to get out of hand and with each concurrent user you add it just gets worse.
I have created reliable Access-based systems for years. If you are having random crashes, locking issues, and slowdowns then you aren't doing something right. I typically have a local MDA with the MDB on the network when creating an app in Access. To have good performance it's key to have the proper indexes and queries optimized for getting just the data you need. Whether you are using a separate app, Access, or some app running against SQL Server, you need to actively handle record locking properly. You can't just blindly let Access lock your records.
Forget the arguments about DB size; it is an uninformed reason to shift to a client-server platform in 90% of the cases I hear it brought up.
Your best arguments are based on features explained at a low tech level:
(1) You can backup and perform maintenance on the DB without kicking out the users (which introduces costly downtime).
(2) Faster recovery if data is accidentally deleted/mangled or corrupted. Again, less risk and less downtime. This is always a good foundation for a business case.
(3) If (and only if) you anticipate the need to scale quite a bit, the upgrade will better allow that.
(4) If you need to run automated jobs/updates, SQL can do this much more elegantly.
Remember the contraindications for SQL Server: it is easy to get on your technical high horse about this platform versus that, but you have to balance the benefits against the costs.
SQL Server is a helluva lot more expensive to maintain, as it requires dedicated hardware, expensive licenses (server OS and DB), and usually at least a part-time DBA who is going to cost you a bare minimum of $75K (if you get lucky AND work out of Podunk, Iowa).
The best possible advice I can give you is to make sure that you have a good attitude and are known as someone who does quality work and gets things done. It sounds like you don't have any control in the situation so what you need is influence.
Find a way to solve a problem (probably a different one that is less threatening to the people involved) in the way you are suggesting. Make it work blindingly fast and flawlessly. Make it work so well that people start asking for you when they need something done. Get it done quickly, which you should be able to do because you'll be using the right tools for the job.
Be a good person to work with, not the PITA that knows how everyone else should write their code. Be able to give an answer for what you might do differently and why, but don't automatically assume that your ideas are always the best. Maybe there are trade-offs that you don't know about -- no money in the budget for the extra CALs, we have this other app that needs to be done first. This doesn't sound like your situation, but looking for opportunities to understand before making constructive criticisms can go a long way to helping people be receptive.
The other thing is that this probably has nothing to do with the technical aspects of the situation and everything to do with the insecurities of the other developer. "This is all I know. If we change it, I won't understand it and then where will I be." Look for ways to help the other guy grow -- when he's having a problem, find resources that will help him develop good technical solutions. Suggest that everyone in your department get some training in new technologies. Who knows, one good SQL Server course and the guy could become the SQL Server evangelist in the organization because now THAT'S what he knows.
Lastly, know when to cut your losses, so to speak. If you find that you're not able to do anything about the situation, don't add to the complaining. Move on to something that you can control and do it as well as you can. Maybe in the future you'll be in a position that you do have control or influence in the situation and can do something about it. If you find that you're in a company that's more dysfunctional than most, find a way to move on to a place where the environment is better.
It is possible, and actually fairly easy, to convert an Access database to having the tables/views in SQL Server while still using the Access app as a front-end.
From there, your Access-obsessed developer can still have fun with all that VBA code. Meanwhile, on the back-end, you add indexes and such to speed everything up. Maybe someday you get lucky, and he asks about stored procedures. Then, the app is just a front-end, and who cares what it's written in? Your data is safe in SQL Server.
It is possible for you to do this yourself, but just leave the production app ALOOOOOOOOOOOONE. Take a copy, and convert that copy. Then, host it for a couple of users to TEST drive .. make your version of the Access app show "TEST APP" in big red letters. If your developer asks what you're doing, you can say the truth -- you are testing to see if converting only the tables/views might be of some help to the overall app.
This way, you get the best of both worlds, keep your developer happy, make the users happier (hopefully), and if you play it right, your bosses will know that you handled a knotty personnel issue with your technological prowess and your maturity.
I once had similar problems with someone I would not hesitate to call a complete idiot.
It was not possible to convince them of the issues with Access. In the end it was easier to force the issue than do it "nicely"; cruel to be kind.
If they resist then you can always go above their head. Management must be aware of crashes and stability related issues. Present a plan to them to improve stability and they are likely to at least listen. They will probably then want a meeting with all developers to discuss so go into it armed with plenty of ammo.
More than "How to convince them", let's talk about "How to do it without anybody noticing"!
First of all, I advise you not to mix together the code optimisation issue and the SQL Server one. Do not give users a chance to complain about SQL Server while the bugs are related to something else.
If your code is really unbearable, rewrite the app before switching to SQL, keeping in mind the following points to make the final transition to SQL Server completely transparent for final users.
This is what we did 18 months ago, and I am sure we still have users thinking our database is Access:
Export the current Access database to SQL Server through the wizard available in Access, for testing purposes (many problems might occur, and you could need another tool such as the one proposed here).
Create a unique connection object at the application level, so that you can freely switch from Access to SQL Server at any time (at development level, you can even add an input box at startup to ask which connection to use). We chose an ADODB connection object, but it will also work with an ODBC connection.
If you use SQL syntax to update tables, make sure that all SELECTs, INSERTs, UPDATEs and DELETEs make use of this connection. If you use recordsets, make sure that all of them use this connection when they are opened.
When needed, update all connection-specific code by adding a "SELECT CASE" on the type of connection.
Switch to the SQL Server connection... and debug till you're done!
The problems you will find are mainly linked to SQL syntax, where MSSQL uses ' instead of " and # as separators. Date format is also an issue: the standard SQL format is 'YYYYMMDD', while the MS Access format depends on the computer's locale (beware of conversions from date to string!) and is stored as "YYYY-MM-DD" (if I remember correctly...). Booleans in SQL Server are 0 and 1, while they are True/False or 0/-1 in Access...
Test, update code, and when you are ok, make a new data transfer, lock your app on the SQL connection, and distribute a new runtime.
It depends on the type of application and data load of your database but Access is quite efficient, even over the network.
Depending on the amount of data your users deal with, you could easily scale up to 100 users on a network just using a front-end and back-end Access database.
It looks like in your case a rewrite may be in order. If your application is data-centric, it doesn't make much sense to develop it in VB6: the tools given by Access are much better than anything you'd be able to make, especially when considering Access 2007.
Upsizing to SQL Server is only really required if you're getting into issues of:
Security:
you need to make sure that only the right users can access the data. You can do your own security in Access, but it's never going to be as strong as SQL Server's.
Scalability:
you're dealing with lots of data and complex queries, or a lot of users, and it would be better to have dedicated hardware to handle the load for the clients. The issue with this, though, is that while removing the pressure from the less-capable client machines, you're adding a lot more to the server.
Integrity:
With the back-end database being just a file that needs R/W access for all connected clients, there's always the possibility that someone is going to do something bad or that a client may crash and leave the database corrupted.
If your number of users is average (I'd say 30), then there's probably no real need to upscale:
Use MS Access 2007 to develop your application, then just use the MS Access 2007 Runtime (it's free!) on all client machines to get a more modern user interface (uses the Ribbon and has lots of UI enhancements over previous versions).
You can't beat the cheapness of that solution: you only need a full retail version of MS Access and all the rest is free, regardless of the number of users!
Don't think that moving to SQL Server is going to improve performance of your queries: MS Access often does a better job of optimizing the queries for you (it knows what needs to be displayed and does lots of caching and optimization).
Make sure you only edit small amounts of data at any one time (don't use dynaset queries just to display vast amounts of data in a datasheet; use a snapshot instead, and open a detail form that only contains the data to edit when necessary).
Cache complex queries locally.
Build some caching mechanism that leaves a copy of the results of a complex query on the local machine. The gain in performance is pretty amazing, and if the query doesn't change much (for instance a log of stock operations) you can just persist the complex/big query locally and append new records as necessary.
There is so much more to say.
Bottom line is: you may be looking at a rewrite, but don't dismiss Access as the solution because your current application was poorly written.
Try benchmarking and showing the stats to him.
Making people change can sometimes be a real pain in the butt.
I would have to say the main arguments would be stability and speed, but of course, as you have said, they already know this and still won't move.
Another thing to try would be to show them the power of LINQ to SQL and how much cleaner it would make your application. Like Daniel Silveira said, you could try throwing a couple of stats their way and see if they are convinced.
We have an app built using MS Access as a back end, and I can't wait till we get our new SQL Server so I can move everything to that.
You could show him the perf results comparing the two, but if he's really set in his ways and refuses to change, there isn't much you can do except force him somehow.
If you're his boss then just force him to change it to use SQL. If not, then convince your boss to force the change by showing him the perf results and explain it'll fix the issues you're having.
Errr, leave the team? You seem to be working with the totally wrong set of people. Now, if the team IS your company, then you are working with the wrong company.
Of course, once you leave the company, you could tell your clients that you could solve the network problems on your own and make them leave the company as well. Then give them an improved system that works on SQL Server Express.
