In the past I have used Zoho Creator, which worked well, but now I need something with a far better front end and something self-hosted, so I have been trying to find a CMS that can do what Creator does. I am currently using WP Toolset, but it is a nightmare to do the calculations I need it to. I have tried ProcessWire, but it has no front end. Does anyone know of a CMS that makes it easy to "fetch" data from other tables and fields and then return an answer? Or another idea altogether?
I'm aware of a company that is doing just that. The app, delivery, and all of it is in the cloud, with management locked away multiple U2F security keys away from mere mortals. But the point is, this idea, to make it easier, hacker resistant (never hacker proof), and all that, is on the drawing board at some start-ups. Not just one. Check into it. I wish I could give you more, but I'm part of a team doing just that, and it's outside development that can cause either unseen bugs, too many classes or the wrong type of classes, or otherwise screw with our once-perfect baby. So we are in essence sandboxing all developers and forking their repo, even going a step further and giving them a dev repo that's formed from our actual repo in real-world terms.
I want to make sure that the following procedure is a good one since I lack experience with WPF applications. I have done some research but did not see any that meet my requirements, especially with the multi-users at a single station.
Problem: I have an application that needs to switch users without closing. What is the best option for accomplishing this?
The Stage: I am using a MongoDb database, C# WPF, Custom Authentication. It is a single screen with other windows for Administrative tasks.
The authentication has two options. The first is a normal username and salted password following industry practices. The second is a short login code that is stored the same way as a password (i.e., a username and password in one). This is because it is a very busy place, and a current requirement is for the session to time out quickly, say 5 seconds, so others cannot enter information under someone else's credentials. Up to 8 people at a time need to be able to quickly log in and add some information. The environment is one with high employee theft.
My solution: This includes the above authentication. Once logged in, the information is loaded into an ApplicationState public static object that has bindings between its properties and some of the objects on the screen, for example displaying who is logged in. When a Command is issued, it is checked against the permissions loaded into the ApplicationState to verify that adequate permissions exist. The timer is on another thread, like here: http://www.codeproject.com/Articles/42884/Implementation-of-Auto-Logoff-Based-on-User-Inacti
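Since you're still designing this, here's a rough sketch of the two pieces described above, the salted credential check and the ApplicationState permission lookup. It's written in Python for brevity rather than C#, and every name in it (`hash_secret`, `ApplicationState`, etc.) is illustrative, not from your code:

```python
import hashlib
import hmac
import os

def hash_secret(secret, salt=None):
    """Hash a password (or short login code) with a per-user random salt."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def verify_secret(secret, salt, expected):
    """Re-derive the hash and compare in constant time to avoid timing leaks."""
    _, digest = hash_secret(secret, salt)
    return hmac.compare_digest(digest, expected)

class ApplicationState:
    """Holds the currently logged-in user and their permissions.
    In the WPF app this would be the static object the views bind to."""
    def __init__(self):
        self.username = None
        self.permissions = set()

    def can(self, action):
        """Every Command checks here before executing."""
        return action in self.permissions
```

One nice property: the same PBKDF2 scheme covers both the normal password and the short login code, since you store them identically; the constant-time compare is cheap insurance in a high-theft environment.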
Review: I spent a day researching and designing this solution. I could not find anyone who had the requirements I have. With my lack of experience with WPF and C#, I was not sure of the vocabulary to use in my searching and research. The other option was to rebuild the MasterWindow on every login, but that seemed like a bad idea. Is there another way to implement this, or will this suffice? I have not finished coding it, but my tests are working.
You are asking quite a broad question here, with a number of design decisions involved which are not necessarily dependent on one another. It is a little hard to give a 'correct' answer, so here are just a number of thoughts:
When using WPF, it really makes sense to dig into the MVVM pattern. You have probably heard of it while doing research on WPF. (If you're using it already, never mind.) There's a bit of a learning curve, but it frees you of quite a few design decisions. In your case, having a ViewModel which exposes the user-related properties (Username, commands for logging on and off, etc.) allows you to easily handle the account switching without worrying about the view being updated or anything like that. WPF's binding mechanisms will do that for you. Do some research on MVVM and INotifyPropertyChanged, if you haven't done so already.
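To make the suggestion concrete, here is the property-change idea behind INotifyPropertyChanged sketched in Python (the `ViewModel` class and its members are invented for illustration; in C#/WPF you would implement the actual interface and raise PropertyChanged):

```python
class ViewModel:
    """Minimal property-change notification, mimicking INotifyPropertyChanged.
    The 'view' subscribes once; every later change to username notifies it."""
    def __init__(self):
        self._username = ""
        self._listeners = []

    def subscribe(self, callback):
        # callback(property_name, new_value) plays the role of the binding engine
        self._listeners.append(callback)

    @property
    def username(self):
        return self._username

    @username.setter
    def username(self, value):
        # Only notify on an actual change, like a well-behaved setter in MVVM
        if value != self._username:
            self._username = value
            for cb in self._listeners:
                cb("username", value)
```

Switching accounts then reduces to assigning `vm.username = "newuser"`; everything bound to it updates on its own, which is exactly why the pattern fits your scenario.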
Working with global hooks to track user activity seems like a bit of overkill to me. I would consider resetting the timeout on any user action which affects the ViewModel (add data, remove data, etc.). If this is not sufficient, then before using global OS-level hooks, I would work with events on your application's window. You could subscribe to the MouseMove event of your window and reset the timeout timer there. That should do, because as I understand the requirements, there won't be anything happening outside of your application. Global hooks usually involve a lot of P/Invoke, which is to be avoided if possible.
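The reset-on-activity timer might look like this. Again a Python sketch with invented names; in the WPF app you would call the equivalent of `reset()` from the MouseMove handler or from your ViewModel commands:

```python
import threading

class InactivityTimer:
    """Invokes on_timeout after `timeout` seconds without activity.
    Call reset() from any user-action handler (e.g. MouseMove)."""
    def __init__(self, timeout, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self._timer = None

    def reset(self):
        # Cancel any pending logoff and start the countdown over
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        # Call on explicit logout so a stale timer can't fire later
        if self._timer is not None:
            self._timer.cancel()
```

With a 5-second requirement like yours, `on_timeout` would clear the ApplicationState and return the UI to the login prompt.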
Apart from that, your approach seems reasonable as far as I can tell... Though this is not an exhaustive answer, I hope it gives you some input for the next steps.
It's basically a CRUD application. Are there any Access add-ons to simplify development of screens, reports, and SQL stored procedures? I'm not familiar with VBA or macros. I need to be able to control cursor movement, perform conditional ifs, define what happens before or after entering a value, do table lookups, etc.
I've tinkered around with Access and already have a schema and relations defined, but I imagine the learning curve will take quite some time, and I've heard it takes a while to get up to speed on VBA and macros. I've even heard it's better to develop this app with a VB add-on called RadVolution. I would actually rather develop this as a touch-screen POS-style app, but I'm not aware of any RAD tools or SDKs for that. I'm familiar with Informix-SQL and non-visual Basic, but have no C or object-oriented language experience; I'm old-school procedural (Basic, COBOL). Anyone willing to work with or help me on this conversion project?
A real strength of Access is its simplicity; it is described as a "rapid application database development and reporting tool". For the most part, you do not need any VBA for forms and reports; if it looks complicated, it is likely you are doing something wrong. The only questions are how many users you expect to have and where you expect to run your application. A lot of the bad press is due to misuse; used properly, Access can be very handy indeed.
You really don't want some type of automated conversion anyway. The reason is that you need to CHANGE how you did some of these things to work with the architecture change that occurs when you move from one development system to another.
An automated conversion is probably not worth much, since so often how things were done in the previous system will be done differently in the new system. For example, when people came from DOS-based (text-screen) FoxPro applications to Access, there were two significant changes the developer had to make:
1) No record numbers in Access
FoxPro (which was a dBase-compatible clone) had its roots as a single-user, file-based database system. It was designed from the ground up to operate on your personal computer. This meant the file and programming system used sequential records from a file on the hard drive. This model was much like punched-card data processing. And I should point out there's nothing wrong with this type of model, but the software approaches and designs you used for punched-card data processing are somewhat different from those for interactive multi-user systems.
What is SIGNIFICANT and most important here is that, on a conceptual level, when you write software inside of Access, record numbers, or so-called record order, are NOT a relevant concept. In FoxPro, however, assuming record order was a legitimate assumption. This is an architecture change. I remember back in the early 90's that in many forums one of the FIRST questions long-time FoxPro developers asked was:
How come Access does not have record numbers like FoxPro does?
The answer was simple: Access considers data a big, huge, unordered bucket. In other words, if you wanted some order to your data, you had to add something like an invoice number, perhaps a time stamp, or something else. For something like a subform, as a general rule you can rely on the autonumber, but that autonumber should never be seen by the users. No matter what, you had to use a SQL query that SAYS what order you want.
Another important detail: if you add 10 records to a table (even in single-user mode) and then retrieve those 10 records from that table, you can NOT assume the record order will be the same as when you added them. In other words, if you need a particular order, you have to sort that data by some column. This is an assumption that FoxPro users, or people who used FORTRAN and punched cards, could always make. Such assumptions cannot be made when using Access. In fact, this assumption cannot be made with any modern server-based system such as SQL Server.
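You can demonstrate this rule with any SQL engine; here's a tiny example using SQLite from Python. Insert order and retrieval order are unrelated, so the ORDER BY clause is what guarantees the result (table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (invoice_no INTEGER, customer TEXT)")

# Rows inserted in no particular order, like records from several users
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [(3, "Carol"), (1, "Alice"), (2, "Bob")])

# Without ORDER BY, the engine is free to return rows in ANY order.
# If you need an order, you must say so explicitly:
rows = conn.execute(
    "SELECT invoice_no, customer FROM invoices ORDER BY invoice_no"
).fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob'), (3, 'Carol')]
```

Write all your Access queries this way from day one and the later move to SQL Server (or any server engine) costs you nothing on this point.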
However, this "lack of record order" assumption became SIGNIFICANTLY important later on down the road. Designing without it meant that your WHOLE Access design was based on assumptions that ALLOWED easy conversion to multi-user systems or to client-server (both need this assumption).
So your software designs could never say go to the next or previous record (based on record number), since records are now a jumble of records being entered by different people. The next two records in that table (or the previous ones) might not be yours! So keep in mind that while Access allows you to go to the next/previous record inside of a form, it NEVER does so based on record number, but ONLY on the data that's been CURRENTLY loaded into that form. In FoxPro you would often move around by actually using a command that said go to record 4 in the table.
In Access we don't say go to the 4th record in the table. You might say go to the 4th record in some data that we pulled from a table into a form, but that 4th record has absolutely nothing to do with the actual physical fourth record in the table. A small change, but one that was required for the multi-user systems we started to use 10 years later (so the small change in software produced benefits 10 years later!).
As a general rule, this architectural or philosophical concept of record order in a table is not a very big deal at all for most of the software you write, but if you needed to use SQL Server later on, then it was a big deal. And I should point out that since your software is written with SQL in mind, then at least in this regard you are in good shape.
However, for those who wrote applications over 4 to 5 years based on this simple record-order assumption, those applications had to be completely re-architected for a multi-user environment, or even for Access.
I should point out that FoxPro eventually became a brilliant object-oriented client-server development tool, but it had to go through a significant metamorphosis from the original architecture and designs that a typical FoxPro application had.
2) Event driven programming
In these older text-based systems you tended to write one large main startup program with a main menu system included. Choosing a menu option would then perhaps branch to a section in the main program, or perhaps branch out and call another portion of the application. To its credit, FoxPro and a good number of dev tools did have some kind of event setup, but it was not ideal. It is best to re-do much of how those screens and the UI work when dropping a text-based UI anyway; this is VERY much the case with the new touch- and gesture-based UIs we now see on smartphones and the iPad.
In event-driven programming we as a general rule don't have that large startup program. And we also don't have a large code base for the main menu system. Instead you have code that RESPONDS to a user click. Or you have code that responds to a record save. Or even to navigation to the next record.
So in event-driven programming you click on a button and then a rather SMALL bit of code responds to that event from the user (in this case a mouse click).
All of a sudden, your application is not being driven or run by one big main program, but is in fact a whole bunch of tiny little programs stitched together by event-driven code.
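The contrast can be sketched in a few lines. Instead of one main program waiting for input, each user action is routed to a small handler (a toy Python dispatcher; the event names and handlers are invented for illustration):

```python
# A tiny dispatcher: each UI action maps to a small handler, instead of
# one big main program waiting at a prompt for the next input.
handlers = {}

def on(event_name):
    """Register a small piece of code that responds to one event."""
    def register(fn):
        handlers[event_name] = fn
        return fn
    return register

@on("save_record")
def save_record(record):
    # A SMALL routine that only knows how to respond to a save
    return f"saved {record}"

@on("next_record")
def next_record(current):
    # Moves within the data loaded into the form, not a table position
    return current + 1

def dispatch(event_name, payload):
    # The 'main program' is reduced to reacting to whatever the user does,
    # in whatever order the user chooses to do it.
    return handlers[event_name](payload)
```

The user can fire these in any order, click a menu, click a field, and no single routine has to anticipate the sequence, which is exactly the problem the old one-big-program design could not handle.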
For people coming from a DOS-based environment, or even QuickBasic, GW-BASIC, or many of the older text-based database systems, having one large startup program with some menu system was common.
And having a large program to "edit" one data entry screen was common.
Now such designs are turned upside down: your menus and click events now run and call code. These very small routines then call other bits of code to allow the application to run.
The main reason for this architecture change was the introduction of the mouse and the graphical user interface. In other words, when looking at a data-entry form, in place of tabbing along to the next field in a complex form, the user can now click on many different things, and even click on the menu bar. They can click just about ANY place on the form. This means that having one big program to run and maintain the form's data entry is/was not possible. If your code was waiting for input in the Company field, then how could code run when the user clicked on a menu bar option? Since the user can do many things in a different order than the original programmer anticipated, we need a different way of writing code.
At the end of the day, with a GUI, code branching became too complicated for the developer to anticipate. Hence the concept of event-driven programming was born to solve this dilemma. In other words, our code now runs and responds to the user's actions; it is not waiting for the next user input while sitting in some line of code.
Again, this was a small architecture change, but many developers struggled with this conceptual change in approach to software development. All of these changes occurred in the early 90's.
The next big change, of course, was object-oriented programming. Access does allow you to build class objects, but it is not a full OO system.
Today the next big challenge and architecture change is the ability to build web-based systems. Again, a zillion changes occur because the developer has a different set of problems to "solve". Access 2010 allows one to build web-based systems now, and this conceptual and architectural change is a GREATER challenge than learning the new macro language we have for developing web forms with Access. So a "change" in how you design software has to occur again.
I should point out that these major changes in the industry happen about once every 10 years or so.
My point in all of the above is that even if there were some automated conversion system, it really would NOT work very well, because the architectures of the systems are very different. You would be handicapped by all of the old assumptions.
I also note you have been asking about using Access in various Access forums for what, about two years or even more now? You seem to be looking for some magical silver bullet that will solve your problems. There is not one!
At the end of the day you need to sit down and obtain some basic Access skills, or hire someone. You need to learn the system you are going to use and THEN come up with a design that works with Access and "is" for Access.
I should point out that the design you choose for Access would not necessarily work with VB.NET either. So don't try to take an existing design and put a square peg in a round hole. What works in one system in terms of UI will NOT work in other systems.
I think you have been fooling around much too long here. About the only good part of delaying so long is that you now get to consider whether you want to adopt a set of tools that allows some web integration with the application.
Office 365 works great with Access web publishing, but for complex, mature applications, I think the web side is somewhat weak (though you can write hybrid applications with Access 2010 now). Here is a video of Access web in action:
http://www.youtube.com/watch?v=AU4mH0jPntI
In the above, I switch to running the application in a web browser. This app was 100% built in Access, including table trigger logic and code (no 3rd-party tools or coding system was used; ONLY Access was used).
Considering technology right now, perhaps you have this run on iPads as staff walk around the store? There are a lot of new choices here for software, but if you sit around for another 2-3 years, then you will be looking at something other than the iPad and some "other" new system.
You can certainly write your applications to be more "touch" friendly in Access. However, some of the new gesture-based actions do not transfer to the web. For example, we cannot disable keyboard input in a combo box, and that ability would REALLY help an Access application running on an iPad. The reason is that when we tap a combo box on the iPad, the soft keyboard pops up on the screen, and we do NOT want this. And some really slick gesture-based date pickers etc. don't translate to the web on the iPad (they do want you to write native apps, after all!).
Microsoft Access is pretty simple as it is. There are wizards for form and report creation. You can then modify the events to perform all of the tasks you mention.
I suggest getting a good book on the subject and delving in. I learned a lot (years ago) from the Developer's Handbook series including Access Developer's Handbook and VBA Developer's Handbook.
It appears you've already made the decision to migrate your entire system to MS Access, but have you thought about first skinning your IDS system with a platform like Grails? It's cross-platform and can be deployed on any operating system that supports Java.
You could deploy the resulting application as a single user, single site or shared system depending upon the client's requirements.
Once you have migrated and enhanced all of the existing functionality it will then be trivial to convert the back-end database to another engine such as PostgreSQL.
I'm currently in the process of enhancing a legacy Informix 7 (SE/ACE/4GL) application for a client and it is working out really well.
We're currently looking at using the Force.com platform as our development platform and the sales guys and the force.com website are full of reasons why it's the best platform in the world. What I'm looking for, though, is some real disadvantages to using such a platform.
Here are 10 to get you started.
Apex is a proprietary language. Other than the force.com Eclipse plugin, there's little to no tooling available, such as refactoring or code analysis.
Apex was modeled on Java 5, which is considered to lag behind other languages, and without tooling (see #1) it can be quite cumbersome.
Deployment is still fairly manual with lots of gotchas and manual steps. This situation is slowly improving over time, but you'll be disappointed if you're used to having automated deployments.
Apex lacks packages/namespaces. All of your classes, interfaces, etc. live in one folder on the server. This makes code much less organized and class/interface names necessarily long to avoid name clashes and to provide context. This is one of my biggest complaints, and I would not freely choose to build on force.com for this reason alone.
The "force.com IDE", aka force.com eclipse plugin, is incredibly slow. Saving any file, whether it be a class file, text file, etc., usually takes at least 5 seconds and sometimes up to 30 seconds depending on how many objects, data types, class files, etc. are in your org. Saving is also a blocking action, requiring not only compilation, but a full sync of your local project with the server. Orders of magnitude slower than Java or .NET.
The online developer community does not seem very healthy. I've noticed lots of forum posts go unanswered or unsolved. I think this may have something to do with the forum software salesforce.com uses, which seems to suck pretty hard.
The data access DSL in Apex leaves a lot to be desired. It's not even remotely competitive with the likes of (N)Hibernate, JPA, etc.
Developing an app on Apex/VisualForce is an exercise in governor limits engineering. Easily half of programmer time is spent trying to optimize to avoid the numerous governor limits and other gotchas like visualforce view state limits. It could be argued that if you write efficient code to begin with you won't have this problem, which is true to an extent. However there are many times that you have valid reasons to make more than x queries in a session, or loop through more than x records, etc.
The save->compile->run cycle is extremely slow, esp. when it involves zipping and uploading the entire static resource bundle just to do something like test a minor CSS or javascript change.
In general, the pain of a young, fledgling platform without the benefits of it being open source. You have no way to validate and/or fix bugs in the platform. They say to post it to their IdeaExchange. Yeah, good luck with that.
Disclaimers/Disclosures: There are lots of benefits to a hosted platform such as force.com. Force.com does regularly enhance the platform. There are plenty of things about it I like. I make money building on force.com
I see you've gotten some answers, but I would like to reiterate how much time is wasted getting around the various governor limits on the platform. As much as I like the platform on certain levels, I would very strongly, highly, emphatically recommend against it as a general application development platform. It's great as a super configurable and extensible CRM application if that's what you want. While their marketing is exceptional at pushing the idea of Force.com as a general development platform, it's not even remotely close yet.
The efficiency of having a stable platform and avoiding big performance and stability problems is easily wasted in trying to code around the limits that people refer to. There are so many limits to the platform, it becomes completely maddening. These limits are not high-end limits you'll hit once you have a lot of users, you'll hit them almost right away.
While there are usually techniques to get around them, it's very hard to figure out strategies for avoiding them while you're also trying to develop the business logic of your actual application.
To give you a simple sense of how developer un-friendly the environment is, take the "lack of debugging environment" referred to above. It's worse than that. You can only see up to 20 of the most recent requests to the server in the debug logs. So, as you're developing inside the application you have to create a "New" debug request, select your name, hit "Save", switch back to your app, refresh the page, click back to your debug tab, try to find the request that will house your debug log, hit "find" to search for the text you're looking for. It's like ten clicks to look at a debug output. While it may seem trivial, it's just an example of how little care and consideration has been given to the developer's experience.
Everything about the development platform is a grafted-on afterthought. It's remarkable for what it is, but a total PITA for the most part. If you don't know exactly what you are doing (as in you're certified and have a very intimate understanding of Apex), it will easily take you upwards of 10-20x the amount of time that it would in another environment to do something that seems like it would be ridiculously simple, if you can even succeed at all.
The governor limits are indeed that bad. You have a combination of various limits (database queries, rows returned, "script statements", future calls, callouts, etc.) and you have to know exactly what you are doing to avoid these. For example, if you have a calculated rollup "formula" field on an object and you have a trigger on a child object, it will execute the parent object triggers and count those against your limits. Things like that aren't obvious until you've gone through the painful process of trying and failing.
You'll try one thing to avoid one limit, and hit another in a never ending game of "whack a limit". In the process you'll have to drastically re-architect your entire app and approach, as well as rewrite all of your test code. You must have 75% test code coverage to deploy into production, which is actually very good thing, but combined with all of the other limits, it's very burdensome. You'll actually hit governor limits writing your test code that wouldn't come up in normal user scenarios, but that will prevent you from achieving the coverage.
That is not to mention a whole host of other issues. Packaging isn't what you expect. You can't package up your app and deliver it to users without significant user intervention and configuration on the part of the administrator of the org. The AppExchange is a total joke, and they've even started charging 5K just to get your app listed. Importing with the data loader sucks, especially if you have any triggers. You can't export all of your data in one step that includes your relationships in such a way that it can easily be re-imported into another org in a single step (for example a dev org). You can only refresh a sandbox once a month from production, no exceptions, and you can't include your data in a refresh by default unless you have called your account executive to get that feature unlocked. You can't mass delete data in custom objects. You can't change your package names. Certain things can take numerous days to complete after you have requested them, such as a data backup before you want to deploy an app, with no progress report along the way and not much sense of when exactly the export occurred. Given that there are synchronicity issues of data if there are relationships between the data, there are serious data integrity issues in that there is no such thing as a "transaction" that can export numerous objects in a single step. There are probably some commercial tools to facilitate some of this, but these are not within reach to normal developers who may not have a huge budget.
Everything else the other people said here is true. It can take anywhere from five seconds to a minute sometimes to save a file.
I don't mean to be so negative because the platform is very cool in some ways and they're trying to do things in a multi-tenant environment that no one else is doing. It's a very innovative environment and powerful on some levels (I actually like VisualForce a lot), but give it another year or two. They're partnering with VMware, maybe that will lead to giving developers a bit more of a playpen rather than a jail cell to work in.
Here are a few things I can give you after spending a fair bit of time developing on the platform in the last fortnight or so:
There's no RESTful API. They have a SOAP-based API you can call, but there is no way of making true RESTful calls.
There's no simple way to take their SObjects and convert them to JSON objects.
The Visualforce pages are OK until you want to customize them, and then it's a whole world of pain.
Visualforce pages need to be bound to SObjects; otherwise there's no way to get the standard input fields, like the date picker or select list, to work.
The Eclipse plugin is OK if you want to work by yourself, but if you want to work in a large team with it, forget it. It doesn't handle synchronizing to and from the server, it crashes, and it isn't really helpful at all.
THERE IS NO DEBUGGER! If you want to debug, it's literally done with System.debug statements. This is probably the biggest problem I've found.
Their "MVC" model isn't really MVC. It's a lot closer to ASP.NET WebForms. Your views are tightly coupled not only to the models but to the controllers as well.
Storing a large number of documents is not feasible. We need to store over 100 GB of documents and we were quoted some ridiculous figure. We've decided to implement our document storage on Amazon's S3 infrastructure.
Even though the language is Java-based, it's not Java. You can't import any external packages or libraries. Also, the base libraries that are available are severely limited, so we've found ourselves implementing a bunch of stuff externally and then exposing those bits as services that are called by force.com.
You can call external SOAP- or REST-based services, but the message body is limited to 100 KB, so it's very restrictive in what you can call.
In all honesty, whilst there are potential benefits to developing on something like the force.com platform, for me you couldn't use it for true enterprise-level apps. At best you could write some basic CRUD-style applications, but once you move into anything remotely complicated, I'd be avoiding it like the plague.
Wow, there's a lot here that I didn't even know were limitations, even after working on the platform for a few years.
But just to add some other things...
The reason you don't have a line-by-line debugger is precisely because it's a multi-tenant platform. At least that's what SFDC says; in this age of thread-rich programming, that doesn't seem like much of an excuse, but that's apparently the reason. If you have to write code, you have System.debug(String) as your debugger; I remember having more sophisticated server debugging tools in Java 1.2 about 12 years ago.
Another thing I really hate about the system is version control. The Spring framework is not used for what Spring is usually used for; it's really more of a configuration tool in SFDC than version control. SFDC provides ZERO version control.
You can find yourself stuck for days doing something that should seem ridiculously easy, like, say, scheduling a SFDC report to export to a CSV file and email it to a list of recipients... About the easiest way to do that is to create a custom object with a custom field, a workflow rule, and a Visualforce email template... and then for code, you write a Visualforce component that streams the report data to the Visualforce email template as an attachment, and you write anonymous Apex code to schedule a field update of the custom object... For SFDC developers this is almost a daily task: trying to put about five different technologies together to do tasks that seem so simple. And this can cause management headaches and tensions too. Typically, you'd find this out after getting a suggestion in the user community to do something that doesn't work (like someone already said), and then trying many things that, after you developed them, you'd find just don't work for some odd-ball reason, like "you can't schedule a Visualforce page", or "you can't call getContent from a schedulable context", or some other arcane reason.
There are so many maddening little gotchas on the SFDC platform that once you know WHY they're there, they make sense... but they're still very bad limitations that keep you from doing what you need to do. Here are some of mine:
You can't get record owner information "out of the box" on pretty much any kind of record; you have to write a trigger that links the owner to the record you're inserting, on create. Why? Short answer: because an owner can be either a "person" or a "queue", and the two are drastically different entities... Makes sense, but it can turn a project literally upside down.
Maddening security model. Example: the "Manage Public Reports" permission is vastly different from "Create and Customize Reports", and that basically goes for everything on the platform... especially folders of any kind.
As mentioned, support is basically non-existent. If you are an extremely self-sufficient individual, or have a lot of SFDC resources, or have a lot of time and/or a very forgiving manager, or are in charge of a SFDC system that's working fine, you're in pretty good shape. If you are not in any of these positions, you can find yourself in deep trouble.
SFDC is a very seductive business proposition... no equipment footprint, pretty good security, fixed price, no infrastructure, AND you get web-based CRM with batchable and schedulable processing... But as the other posters said, it is really quite a ramp-up in development learning, and if you go with consulting, I think the lowest price I've seen was $200/hour.
Salesforce tends to integrate with other technologies years after they become commonplace - JSON and jQuery come to mind... and if you have other common infrastructure that you want to integrate with, like JIRA, expect to pay a lot extra, and the connectors can be quite buggy.
And as one of the other posters mentioned, you are constantly fighting governor limits that can just drive you nuts... an attachment can NOT be > 5MB. Period. And sometimes < 3MB (if base64 encoded). Ten HTTP callouts in a class. Period. There are dozens of published governor limits, and many that are not which you will undoubtedly find and just want to run out of your office screaming.
I really, REALLY like the platform, but trust me - it can be one really cruel mistress.
But in fairness to SFDC, I'd say this: the biggest problem I find with the platform is not the platform itself, but the gargantuan expectations of almost anyone who has seen the platform but hasn't developed on it... and those people tend to be in positions of great authority in business organizations: marketing, sales, management, etc. Huge disconnects occur and heads roll, or are threatened with rolling, daily - all because there's this great platform out there with weird gotchas, and thousands of people struggling daily to get their heads around why things should just work when they just don't and won't.
EDIT:
Just to add to lomaxx's comments about MVC: in SFDC terminology this is closely related to what's known as the "viewstate" - and it can be really buggy, in that what is on the VF page is not what is in the controller class for the page. So you have to go through weird gyrations to sync what's on the page with what the controller is going to write to SF when you click your "save" button (or make your HTTP callout, or whatever)... man, it's annoying.
I think other people have covered the disadvantages in more depth but to me, it doesn't seem to use the MVC paradigm or support much in the way of code reuse at all. To do anything beyond simple applications is an exercise in frustration compared to developing an application using something like ASP.Net MVC.
Furthermore, the tools, the data layer and the frustration of trying to refactor code or rename fields during the development process doesn't help.
I think as a CMS it's pretty cool, but as a platform for non-CMS applications, it doesn't make sense to me.
The security model is also very very restrictive... but this isn't the worst part. You can't currently assert whether a user has the ability to perform a particular action.
You can check to see what their role is, but you can't check if that role has permissions to perform the current action.
Even worse is the response from tech support: "try the action and if there's an exception, catch it".
Considering Force.com is a "cloud" platform, its ability to act as a client to an external WSDL-defined service is pretty underwhelming. See http://force201.wordpress.com/2010/05/20/when-generate-from-wsdl-fails-hand-coding-web-service-calls/ for what you might end up having to do.
To all of the above: I am curious how the release of VMforce, which lets Java programmers write code for Force.com, changes the disadvantages listed here?
http://www.zdnet.com/blog/saas/vmforcecom-redefines-the-paas-landscape/1071
I guess they are trying to address these issues. At Dreamforce they mentioned they were trying to reduce the governor limits to only four. I'm not sure what the details are. They have a REST API in early access, and they bought Heroku, which is Ruby development in the cloud. They also split out the database with database.com, so you can do all your web development on one platform and make your db calls against database.com.
I guess they are trying to make it as agnostic as possible. But right now these are all announcements and early-access programs, so - as their Safe Harbor statements say - don't purchase based on what they say, only on what they currently have.
This is a problem we all have to consider at some point.
After many years and many approaches, I tend to agree in general with the statement:
"For any protected software used by more than a few hundred people, you can find a cracked version. So far, every protection scheme can be tampered with."
Further, every time I post about this subject, someone will remind me:
"First of all, no matter what kind of protection you'll employ,a truly dedicated cracker will, eventually, get through all of the protective barriers."
So, notwithstanding these two broadly true disclaimers, let's talk about "protection"!
I still feel that for smaller apps that are unlikely to warrant the time and attention of a skilled cracker, protection IS a worthwhile exercise.
It seems obvious that no matter what you do, if the cracker can switch the outcome of an IF statement (jmp) by patching the application, then all the passwords and dongles in the world are not going to help.
So my approach has been to obfuscate the code with virtualization, using products like:
http://www.oreans.com/codevirtualizer.php
I have been very happy with this product. To my knowledge it has never been defeated.
I can even compress the executable with PEcompact
Does anyone else have experience with it?
I had nothing but problems with EXEcryptor:
http://www.strongbit.com/news.asp
Even the site is a headache to use.
The compiled apps would crash when doing any WMI calls.
This approach allows you to surround smaller sections of code with the obfuscation and thus protect the security checking etc.
I use the online authorization approach, as the application needs data from the server regularly, so it makes no sense for the user to use it offline for extended periods. By definition, the app is worthless at that point, even if it is cracked.
So a simple encrypted handshake is plenty good. I just check it occasionally from within the obfuscation protection. If the user installs the app on a different machine, a new ID is uploaded upon launch, and the server disables the old ID and returns a new authorization.
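A minimal sketch of that kind of handshake, in Python for brevity: an HMAC challenge-response over the machine ID, assuming a per-install secret provisioned at registration (SECRET and machine_id are illustrative names, not from any real scheme):

```python
import hashlib
import hmac
import os

# Illustrative per-install secret, provisioned at registration time.
SECRET = b"provisioned-at-registration"

def client_hello(machine_id: str):
    # Client opens the handshake with its machine ID and a fresh nonce.
    return machine_id, os.urandom(16)

def server_response(machine_id: str, nonce: bytes) -> bytes:
    # Server proves it knows the secret bound to this machine and nonce.
    return hmac.new(SECRET, machine_id.encode() + nonce, hashlib.sha256).digest()

def client_verify(machine_id: str, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(SECRET, machine_id.encode() + nonce, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(expected, response)
```

Of course, the final IF on the client is still patchable, which is exactly why the check lives inside the virtualized/obfuscated code.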
I also take a hash of the compiled app and check it at launch to see if a single bit has changed. Then I open the app as a file (with a read LOCK) from within the app, to prevent anyone changing it once launched.
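A rough Python sketch of that launch-time integrity check; the expected digest would be embedded (obfuscated) at build time, and the names here are illustrative:

```python
import hashlib
import sys

def file_sha256(path: str) -> str:
    # Hash the executable in chunks so large files aren't loaded into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def self_check(expected_hex: str) -> bool:
    # sys.argv[0] stands in for the path of the running binary.
    return file_sha256(sys.argv[0]) == expected_hex
```

A real implementation would also keep the file handle open with a read lock after the check, as described above.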
Since all static strings are clearly visible in the .exe file, I try to be generic with error messages and so forth. You will not find the string "Authorization failed" anywhere.
To protect against memory dumps, I use a simple text obfuscation technique (like XORing every character). This makes plain-text data in memory harder to distinguish from variables and so forth.
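As a sketch, the XOR trick is symmetric - the same function masks and unmasks (the key byte here is arbitrary):

```python
def xor_mask(data: bytes, key: int = 0x5A) -> bytes:
    # XOR every byte with a key; applying it twice restores the original.
    return bytes(b ^ key for b in data)

# Store sensitive strings masked, and unmask only at the moment of use.
secret = xor_mask(b"license check failed")
assert xor_mask(secret) == b"license check failed"
```

This is obfuscation, not encryption - it just keeps recognizable strings out of casual memory dumps.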
Then of course there is AES for any data that is really sensitive. I like counter mode for text, as it produces no repeating sequences that would reveal underlying data, like a run of white spaces.
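The counter-mode property is easy to see in a sketch. Real code would use AES from a crypto library; here SHA-256 over key, nonce, and counter stands in for the block cipher purely to keep the example dependency-free:

```python
import hashlib

def ctr_keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Counter mode: derive a keystream block per counter value, then XOR.
    # SHA-256(key || nonce || counter) stands in for AES-encrypting the
    # counter block; the structure is the same.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))
```

Because the counter changes for every block, identical stretches of plaintext (like runs of spaces) produce different ciphertext, and decryption is the very same call.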
But with all these techniques, if the Key or Initialization vector can be dumped from memory, or the IF statement bypassed, everything is wasted.
I tend to use a switch statement rather than a conditional. Then I create a second function that is basically a dead end, instead of the function that actually performs the desired task.
Another idea is to offset code pointers by a variable, where the variable is the result of the authorization (usually zero). If the authorization fails, the offset is wrong, and this will inevitably lead to a GPF at some point.
I only use this as a last resort, after a few lower-level authorizations have failed; otherwise real users may encounter it, and the reputation of your software suffers.
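A rough Python analogue of the switch-plus-dead-end idea (in native code this would be a function-pointer table, and the bad offset is what eventually triggers the GPF; the names here are illustrative):

```python
def real_feature(x):
    # The function that actually performs the desired task.
    return x * 2

def dead_end(x):
    # Plausible-looking decoy that silently does the wrong thing.
    return x

# auth_result is 0 for a valid license and a small positive error code
# otherwise, so the valid path lands on index 0 without an obvious IF
# for a cracker to flip.
dispatch = [real_feature, dead_end]

def run_feature(x, auth_result):
    return dispatch[min(auth_result, 1)](x)
```

The cracker sees an indexed jump driven by data, not a single conditional branch to patch.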
What techniques do you use?
(this is NOT a thread debating the merits of implementing something. It is designed for those that have decided to do SOMETHING)
I disagree xsl.
We protect our code, not because we want to protect our revenue - we accept that those who would use it without a license probably would never pay for it anyway.
Instead, we do it to protect the investment our customers have made in our software. We believe that the use of our software makes them more competitive in their marketplace, and that if other companies had access to it without paying, they would have an unfair advantage - i.e., they would become as competitive without the overhead of the licensing cost.
We are very careful to ensure that the protection - which is home-grown - is as unobtrusive as possible for valid users, and to this end we would never consider buying in an off-the-shelf solution that might impact this.
You don't need a few hundred users to get your software cracked. I got annoyed at having my shareware cracked so many times, so as an experiment I created a program called Magic Textbox (which was just a form with a textbox on it) and released it to shareware sites (it had its own PAD file and everything). A day later a cracked version of Magic Textbox was available.
This experience made me pretty much give up trying to protect my software with anything more than rudimentary copy protection.
I personally use the code techniques discussed here. These tricks have the benefit of inconveniencing pirates without making life more difficult for your legitimate end-users.
But the more interesting question isn't "what", but "why". Before a software vendor embarks on this type of exercise, it's really important to build a threat model. For example, the threats for a low-priced B2C game are entirely different to those for a high-value B2B app.
Patrick Mackenzie has a good essay where he discusses some of the threats, including an analysis of 4 types of potential customer. I recommend doing this threat analysis for your own app before making choices about protecting your business model.
I've implemented hardware keying (dongles) before myself, so I'm not totally unfamiliar with the issues. In fact, I've given it a great deal of thought. I don't agree with anyone violating copyright law, as your crackers are doing. Anyone who doesn't want to legally acquire a copy of your software should do without. I don't ever violate software copyright myself. That being said...
I really, really dislike the word "protect" used here. The only thing you are trying to protect is your control. You are not protecting the software. The software is just fine either way, as are your users.
The reason that keeping people from copying and sharing your software is such an unholy PITA is that preventing such activities is unnatural. The whole concept of a computer revolves around copying data, and it is simple human nature to want to share useful things. You can fight these facts if you really insist, but it will be a lifelong fight. God isn't making humans any differently, and I'm not buying a computer that can't copy things. Perhaps it would be better to find some way to work with computers and people, rather than fighting against them all the time?
I, along with the majority of professional software developers, am employed full time by a company that needs software developed so that it can do its business, not so it can have a "software product" with artificial scarcity to "sell" to users. If I write something generally useful (that isn't considered a "competitive advantage" here), we can release it as Free Software. No "protection" is needed.
From some of the links:
The concept I tried to explain is what I call the “crack spread”. It doesn’t matter that a crack (or keygen, or pirated serial, or whatever) exists for your application. What matters is how many people have access to the crack.
Where/when to check the serial number: I check once on startup. A lot of people say “Check in all sorts of places”, to make it harder for someone to crack by stripping out the check. If you want to be particularly nasty to the cracker, check in all sorts of places using inlined code (i.e. DON’T externalize it all into SerialNumberVerifier.class) and if at all possible make it multi-threaded and hard to recognize when it fails, too. But this just makes it harder to make the crack, not impossible, and remember your goal is generally not to defeat the cracker. Defeating the cracker does not make you an appreciable amount of money. You just need to defeat the casual user in most instances, and the casual user does not have access to a debugger nor know how to use one.
If you’re going to phone home, you should be phoning home with their user information and accepting the serial number as the output of your server’s script, not phoning home with the serial number and accepting a boolean, etc, as the output. i.e. you should be doing key injection, not key verification. Key verification has to ultimately happen within the application, which is why public key crypto is the best way to do it. The reason is that the Internet connection is also in the hands of the adversary :) You’re a hosts file change away from a break-once, break-everywhere exploit if your software is just expecting to read a boolean off the Internet.
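The key-injection idea can be sketched like this: the server signs the user's information with a private key, and the application verifies the signature with an embedded public key, so no boolean ever comes off the wire. This is textbook RSA with toy parameters, purely to make the flow concrete (real code would use a crypto library and proper key sizes and padding):

```python
import hashlib

# Toy RSA parameters - never use sizes like this in production.
P, Q = 61, 53
N = P * Q              # modulus, shipped with the app
E = 17                 # public exponent, shipped with the app
D = 2753               # private exponent, known only to the server

def digest_int(user_info: str) -> int:
    # Reduce the hash modulo N so it fits the toy key size.
    return int(hashlib.sha256(user_info.encode()).hexdigest(), 16) % N

def server_issue_key(user_info: str) -> int:
    # The server "injects" a key: a signature over the user's info.
    return pow(digest_int(user_info), D, N)

def client_verify(user_info: str, license_key: int) -> bool:
    # The client holds only the public key; a hosts-file redirect can't
    # forge a key that verifies.
    return pow(license_key, E, N) == digest_int(user_info)
```

The point is that verification happens entirely inside the app against data the adversary cannot forge, rather than trusting whatever the network connection returns.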
Do not make an “interesting” or “challenging” protection. Many crackers crack for the intellectual challenge alone. Make your protection hard to crack but as boring as possible.
There are some cracks which search for byte patterns in search for the place to patch. They usually aren’t defeated by a recompile, but if your .EXE is packed (by ASProtect, Armadillo, etc) these kind of cracks must first unpack the .EXE.. and if you use a good packer such as ASProtect, the cracker will be able to unpack the EXE manually using an assembly level debugger such as SoftICE, but won’t be able to create a tool which unpacks the .EXE automatically (to apply the byte patches afterwards).
I have used .NET Reactor in the past with good results - http://www.eziriz.com/
What I liked about this product is that it did not require you to obfuscate the code in order to have pretty good protection.
xsl, that is a very narrow point of view with MANY built-in assumptions.
It seems obvious to me that any app that relies on delivering something from a server under your control should be able to do a fairly good job of figuring out who has a valid account!
I am also of the belief that regular updates (meaning a newly compiled app with code in different locations) will make cracked versions obsolete quickly. If your app communicates with a server, launching a secondary process to replace the main executable every week is a piece of cake.
So yes, nothing is uncrackable, but with some clever intrinsic design it becomes a moot point. The only significant factors are how much time the crackers are willing to spend on it, and how much effort your potential customers are willing to exert to track down the crackers' latest release on a weekly or even daily basis!
I suspect that if your app provides a useful, valuable function, then they will be willing to pay a fair price for it. If not, competitive products will enter the market and your problem just solved itself.