Does the MSCRM web service support database transactions?

One would assume that database transactions would be an integral part of any web-based data application's design. Looking around the CrmService, I can't find anything that suggests transactional CRUD operations are available. Is it the case that this is simply not supported/implemented in MSCRM?
If it is supported and I have missed it, could someone please point me in the right direction? I fear having to code a whole lot of 'repair code' to cater for errors/exceptions half way through a custom import/registration routine that I have written.

No, there is no database-like transaction support in CRM. About the closest thing would be registering a plugin/callout that runs on the pre-create stage of a record: if something in there fails, the record itself will not be created, but steps that ran before the failing one could still have succeeded.
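To make that concrete, here is a minimal sketch of such a pre-create plugin. It uses the later Microsoft.Xrm.Sdk plugin model (the "5.0" release mentioned in the next answer, which shipped as Dynamics CRM 2011) rather than the 4.0 callout API, purely for illustration; the entity name and the "name is required" rule are assumptions. Because the plugin is registered on the pre-operation stage of Create, throwing cancels the operation and the record is never written:

    using System;
    using Microsoft.Xrm.Sdk;

    // Register on the pre-operation stage of the Create message for the account entity.
    public class AccountPreCreateValidation : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));

            if (context.MessageName == "Create" &&
                context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity target)
            {
                // Hypothetical validation rule: refuse to create an account without a name.
                if (!target.Attributes.Contains("name"))
                {
                    throw new InvalidPluginExecutionException(
                        "Account name is required; the record was not created.");
                }
            }
        }
    }

Note that this only protects the single record being created; as the answer says, any earlier calls that already succeeded are not rolled back.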

Well, Dynamics CRM 4.0 does not include transaction support.
But, fortunately, the 5.0 version will ... see: http://blogs.msdn.com/ukcrm/archive/2008/11/10/what-s-new-in-crm5.aspx

I have also asked about this issue on the Dynamics CRM Forum.
Unfortunately there is no transaction support in the current Dynamics CRM web services. This is quite dangerous: our custom solution invokes several web-service calls to implement one holistic unit of work, and if one of those calls encounters an error during execution, it leaves a data-integrity problem behind (a manual compensation sketch follows this answer).
Regards
hadi teo
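Since there is no server-side transaction in CRM 4.0, the usual workaround is the manual compensation just mentioned: remember what each CrmService call created and delete it again if a later call fails. A rough sketch of that pattern is below; CrmService is the generated 4.0 web-service proxy, and the Create* helpers are hypothetical stand-ins for the individual calls in the import/registration routine:

    using System;

    public static class RegistrationImport
    {
        public static void ImportOne(CrmService service)
        {
            Guid accountId = Guid.Empty;
            Guid contactId = Guid.Empty;
            try
            {
                accountId = CreateAccount(service);            // web-service call 1
                contactId = CreateContact(service, accountId); // web-service call 2
            }
            catch
            {
                // No transaction on the server, so compensate in reverse order
                // for whatever already succeeded, then rethrow the original error.
                if (contactId != Guid.Empty) service.Delete("contact", contactId);
                if (accountId != Guid.Empty) service.Delete("account", accountId);
                throw;
            }
        }

        // Hypothetical helpers, each wrapping a single service.Create(...) call.
        private static Guid CreateAccount(CrmService service) { /* ... */ return Guid.NewGuid(); }
        private static Guid CreateContact(CrmService service, Guid accountId) { /* ... */ return Guid.NewGuid(); }
    }

Compensation is not a real transaction (a crash between the failure and the cleanup still leaves partial data), but it keeps the 'repair code' in one place instead of scattered through the routine.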

Related

spring distributed database transaction manager

This is quite a common question and yet the answers are unclear to me. I have two different databases on two different servers: one is a pure XML database and the other a traditional DBMS (SQL Server). Can anybody point me to recent articles on, or their experience with, transaction management across the two? I have put together a one-phase-commit (1PC) strategy which works fine for runtime exceptions, but I am not sure it is bullet-proof. Secondly, when using a Spring JUnit test, how do I specify a default rollback? It only rolls back the first transaction manager's transactions; the changes made through the other transaction manager remain committed in the second database.
Sounds like you want to use a ChainedTransactionManager.
Spring has implemented one of these for Neo4j, so you can take the code out of that project.
There was a good article on how to do this, but I can't find it anymore. Perhaps this is enough to get you started.

Replication vs Sync Framework vs Service Broker

I've asked about each of these technologies separately, and really haven't found a suitable answer.
We have a server in our central office running SQL Server 2005 Enterprise that has several large (large in the sense that DSL is the limiting factor) databases that we need local copies of at each of our locations. We currently have a few dozen locations, and are needing to bring even more online. The total number of locations we'll need to sync these databases to will be in the several hundreds in the next 2 years.
We are trying to overcome issues with the WAN connection at each location. These are DSL lines and the wiring at the locations isn't always the best. We currently have issues with some of the locations going down as often as every hour. While we are working to resolve these issues with rewiring and assistance from the local telcos, it mainly highlights the problem at hand: we need a two-way sync that can handle being occasionally-connected.
We tried transactional replication for a while, and while it worked some of the time, it was too high-maintenance for us, and it seemed to error out randomly and often with no apparent explanation, forcing us to reinitialize subscriptions (which could take upwards of 4 hours, assuming the location would stay connected long enough to get the entire snapshot in one go). We've looked at rolling our own solution from scratch, but I don't feel that would be the best idea given the scale and reliability we need.
So far we've also looked at Sync Framework, and as suggested by someone else, Service Broker. Sync Framework seems a better fit, but I was told that Service Broker scales better and is more reliable? I can't find any empirical data on the overhead involved with Sync Framework or Service Broker, so it's proving impossible to compare the two in this regard.
What we really need is a two-way sync between the central office server and a remote client that can run autonomously and can report to an admin in the event of a failure that requires our intervention.
There are so many possible solutions to this problem, all involving completely different technologies, that I need a fresh eye on this.
What do you think would be the optimal solution for our situation, and why?
EDIT: Obviously, upgrading to SQL Server 2008 would solve this problem easily. However, we would like to try less expensive options first.
I don't have any hard data to offer on this, but we used the Sync Framework on a project a while ago, and my experience with it was really bad. It's slow (even when synchronizing relatively small tables across a LAN), scales terribly, and requires a lot of work to handle error conditions manually (it'll happily produce larger packets than WCF can handle by default -- and it can only split updates into batches when syncing one way, not the other). And it only works with a few select databases (the client must use MS SQL Compact Edition, as I recall), unless you're willing to write your own SyncAdapter.
Overall, a lot of work just to get a fragile and inefficient solution to your problem. I wouldn't recommend it.
You can use the Sync Framework with SQL Server Express 2008 R1/R2 on one end and a multi-tenant SQL Server Enterprise database on the central end. Below is a sample application for n-tier sync over a secure WCF channel; you could write a Windows service to sync data from the back end (a minimal two-tier sketch is shown after the link):
http://www.rajneeshnoonia.com/blog/2012/03/n-tier-sync-framework/
It should be capable of handling a large number of clients (thousands).
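Here is that minimal two-tier sketch, using the Sync Framework relational providers and assuming a scope named "ProductsScope" has already been provisioned on both databases; the connection strings are illustrative, and the linked sample wraps the same idea in WCF for n-tier use:

    using System;
    using System.Data.SqlClient;
    using System.Data.SqlServerCe;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;
    using Microsoft.Synchronization.Data.SqlServerCe;

    class BranchSync
    {
        static void Main()
        {
            using (var localConn = new SqlCeConnection(@"Data Source=branch.sdf"))
            using (var serverConn = new SqlConnection(
                @"Data Source=HQ;Initial Catalog=Sales;Integrated Security=SSPI"))
            {
                var orchestrator = new SyncOrchestrator
                {
                    LocalProvider  = new SqlCeSyncProvider("ProductsScope", localConn),
                    RemoteProvider = new SqlSyncProvider("ProductsScope", serverConn),
                    Direction      = SyncDirectionOrder.UploadAndDownload // two-way sync
                };

                try
                {
                    SyncOperationStatistics stats = orchestrator.Synchronize();
                    Console.WriteLine("Uploaded {0}, downloaded {1} changes.",
                        stats.UploadChangesApplied, stats.DownloadChangesApplied);
                }
                catch (Exception ex)
                {
                    // An occasionally-connected DSL link will fail sometimes; log it
                    // and alert an administrator instead of crashing the service.
                    Console.Error.WriteLine("Sync failed: " + ex.Message);
                }
            }
        }
    }

A Windows service could run this on a schedule and notify an admin whenever the catch block fires, which matches the "runs autonomously and reports failures" requirement in the question.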
I think we'll look into the SQL Server 2008 upgrade route. It seems the native change tracking support will be the easiest way to accomplish this.

Django database scalability

We have a new Django-powered project that is expected to be heavy-traffic (meaning heavy DB interaction), so we need to consider database scalability in advance. After some research, the following questions are still not clear to us:
coarse-grained: how do we assign one DB table (a Django model) to a specific database (possibly on another server)?
fine-grained: how do we assign a group of table rows to a specific database (so-called sharding, also possibly on another DB server)?
how do we direct writes and reads to different databases (which will be helpful for future MySQL master/slave replication)?
We are looking for a solution that:
is transparent to the application code (we don't want additional code in views.py)
works at the ORM level (only needs to be specified in models.py)
is compatible with current (and future) Django releases (to keep changes minimal when upgrading Django)
I'm still researching, and will share in this thread later if I find anything.
I hope anyone with experience can answer. Thanks.
Don't forget about caching either. Using memcached to relieve your DB of load is key to building a high performance site.
As Alex said, Django core doesn't support your specific requests for those features, though they are definitely on the to-do list.
If you don't do this in the application layer, you're basically asking for performance trouble. There aren't any really good open source automation layers for this sort of task, since it tends to break SQL axioms. If you're really concerned about it, you should be coding the entire application for it, not simply hoping that your ORM will take care of it.
There is a GSoC project by Alex Gaynor that will eventually allow the use of multiple databases in one Django project, but for now there is no working cross-RDBMS solution.
There is no solution for this right now either.
And again, there is no cross-RDBMS solution. But if you are using MySQL you can try an excellent third-party Django application called mysql_replicated. It makes it easy to set up a master/slave replication scenario.
Here, for some reason, we are using Django with SQLAlchemy. Maybe the combination of Django and SQLAlchemy would also work for your needs.

Front-End for MS Access migration? [closed]

Background
I work for a large organization which has thousands of MS Access applications floating around. I didn't write any of these - in fact, most of the original authors have long since left the company - but from time to time another Access app lands on my desk for support. I would soooo love to replace Access with a different solution.
Requirement
I know that there are several good alternatives for the database part of MS Access (the Jet database), such as SQLite, MySQL, VistaDB, etc.
What I would like to know is: Is there anything that will replace the front end part of MS Access?
I.e. Something which can be used to build forms, write simple scripts and queries, etc?
Why?
#BracC asked "why replace Access?" - a fair question indeed.
I want to get rid of Access because:
it hides logic, leading to hard-to-support applications. Logic can be in lots of different places, none of which provide or encourage any structure:
macros
modules
queries
forms
its very nature encourages users to create "little" applications which become "not-so-little" applications. Then the user leaves and I have to support a bunch of spaghetti. I know that Access isn't the only culprit, but it's the leader in my organisation, and I would love to get rid of it completely.
For extra credit
What I would really love to find is something which can read in an MDB file and output something like C# that replicates the functionality (or any language - I'm not fussy).
I hope this is all clear. If not, please post a comment and I'll re-write/add detail.
Update
#GuinnessFan makes some points I find interesting. I have added my comments to discuss those points.
What we have done since I asked the question:
Got users to give us a definitive list of access applications they use and need. (The understanding is that any MDB files not on the list can be deleted - hooray!).
Analysed the MDBs on the list, coming to the following conclusions:
Most of the "applications" consist of a single hard-coded query or a single linked table.
Many are a small number of queries with, perhaps, a date parameter or similar.
very few (if any) have any truly complex logic.
We are now working through the list, converting most of the apps to SSRS (SQL Server Reporting Services) packages.
Anything which can't be replicated using SSRS will become a hand-crafted web application. However, there aren't many of these.
May I say many thanks, to everybody who has given me helpful answers.
I switched the back-end on one application from MS Access to MS SQL a few years ago. I kept the front-end, because it worked well and I didn't find anything as easy to use/modify.
I've never seen an MS Access -> C# translator. However, you might be able to find an MS Access to VB6 translator (their syntaxes are roughly similar), and from there there are VB6 -> VB.Net translators (and even VB.Net -> C# translators).
You could check out Oracle's Application Express. It's free and it's geared toward Access developers.
It also has a migration assistant that you run your Access database through: it processes the data and the forms, migrates everything to an Oracle database (this works with the free database, Oracle XE, where it comes installed by default) and builds web forms for your Access database.
So in the end you'll have your Access databases on the web, your data in Oracle, and a somewhat nice web front end for extending them.
As far as Oracle goes, the tool isn't half bad. You can sign up for a free instance to play around with here.
Here's the document that explains how you migrate Access Databases.
So, other than personal distaste, why replace the Access front-end? It may be easy to do for some (simple) databases, but most Access apps in the real world have a lot of complexity.
Lots of reasons for upgrading the back-end, of course (scalability, performance, db corruption, user-locking). Access even has a built-in "upgrade wizard" tool that allows you to split the forms and logic from the data, and upgrade the data to MS SQL server. If you want, use this wizard to upgrade the back-end to SQL Express, then manually migrate to another db platform.
Hope this is not too far off-topic, but sometimes all you need to do with Access is:
Upgrade the back-end (as we've already discussed)
Always make sure the front-ends are locked down (read-only)
If necessary, create different front-ends for different user roles (as a form of security).
If possible, have the front-ends copied locally on each workstation, for performance reasons. You may need to have a network script to check for new versions of the front-end.
I don't have any direct experience with it, but I did find an Access to ASP.Net converter tool called "Access Whiz" at http://www.microtools.us/
We used an internal app based on MS Access as a frontend to a MySQL database. We ran into lots of problems, and eventually rewrote the whole app in CodeGear Delphi 2007 for Win32. This has been a great success, although the migration did cost quite a lot of effort (training/hiring a couple of Delphi programmers, buying some third-party tools). I can wholeheartedly recommend Delphi, though. And AFAIK, integration with a MS Access back-end is certainly possible --- I once wrote a Delphi app that does just that, and it only cost me a couple of days to get a feature-complete version!
I realize this is a full programming solution, so you'd definitely lose some of the ease-of-use of MS Access for building front-ends. Then again, you can put together a database application in Delphi in 10 minutes without writing too much code --- no kidding! And since the 2009 release, the language is slowly becoming more mainstream again...
#BradC
I don't recommend MicroTools. I worked for a company a while back where we had the same problem. Unless MicroTools has made significant improvements to their product, it spits out garbage, last I checked.
What we found was that pretty much any upgrade path will require significant amounts of coding changes. All these tools are good for is to maintain a similar GUI from the original application. Their code had no object structure, just a bunch of utility functions that were dumped on each page to simulate the way Access provides record navigation. If you have a large number of forms, pulling out their solution and implementing your own takes some work and a ton of find-and-replace operations.
We were so disappointed with MicroTools performance that we started writing our own converter. We were pumping out better ASP.NET forms than they were after a week of coding.
You won't find a server-class engine that also has desktop interface-design tools attached. The big server engines all expect you to use something like C++, C#, Java, or PHP to build your interface.
I, too, would love to see an upgrade tool for access that would spit out some basic C# forms and talks to an equivalent SQL Server database. It seems that would be a big money-maker for Microsoft because they could use it as a way to up-sell customers to a full SQL Server.
IIRC, there might be a way to tell an Access front end to talk to a SQL Server, or change the tables used by an Access front end to really be linked tables into a SQL Server, or something like that, but I've never had to use the feature myself.
I have a different perspective for you to consider. Your main issue is that Access hides logic, and that data and applications are scattered throughout the organization.
Unfortunately I don't know of a RAD (rapid application development) tool that is as easy as Access to create functional forms.
However, I would recommend that you focus more on the possibility of centralizing your data and logic while still allowing Access as a front end. I support a database product called Advantage Database Server which supports RI (referential integrity) rules, stored procedures, triggers, etc. that can all be managed on a central server, thus bringing all of the logic to you. These Access front-ends could then link to the data back end using ODBC or OLE DB. If you switched to a solution like this then later down the road you would have the flexibility to write other applications such as .NET, PHP, JDBC, etc. that tie into the same data while phasing Access front-ends out.
A good start would be to stop new Access development unless they're using this sort of data backend.
Out of the thousands of Access files, how many have you been asked to support? I'm guessing fewer than 100. Why rebuild an application that (a) no one uses or (b) works fine just the way it is?
You need to establish a policy that custom applications for a large organization are developed in a robust, scalable, reliable (yadda yadda yadda) environment. Identify the Access applications you feel are critical or are being outgrown and just work on those.
Be prepared to handle the expectation of getting their quick and dirty little applications on a quick turnaround. You'll have to show them the benefits of your new apps.
I think you just need to be a resident expert and teach these users how to improve their application or get your input from the beginning to start them off right. The requirements to convert all of these files would otherwise be overwhelming.
Microtools offers Access Whiz, a set of Access conversion tools. It consists of Access to ASP .NET (VB/C#) converters, Access to VB6 converter, Access to WinForms (VB .NET/C#) converters and Access to Crystal Reports converter. More information and trial demos can be found at http://www.microtools.us.
You can also take a look at Firebird.
Here is the way to migrate (you need Delphi).
I also found this: MDB2FDB.
Is there anything that will replace the front end part of MS Access?
Maybe Kexi?

Mobile/PDA + SQL Server data synchronization

Need a little advice here. We do some Windows Mobile development using the .NET Compact Framework and SQL CE on the mobile devices, along with a central SQL Server 2005 database at the customers' offices. Currently we synchronize the data using merge replication technology.
Lately we've had some annoying problems with synchronization throwing errors and generally being a bit unreliable. This is compounded by the fact that there seems to be limited information out there on replication issues. This suggests to me that it isn't a commonly used technology.
So, I was just wondering if replication is the way to go for synchronizing data, or are there more reliable methods? I was thinking maybe web services or something like that. What do you guys use for implementing this kind of solution?
Dave
I haven't used replication a great deal, but I have used it and I haven't had problems with it. The thing is, you need to set things up carefully. No matter which method you use you need to decide on the rules governing all of the various possible situations - changes in both databases, etc.
If you are more specific about the "generally being a bit unreliable" then maybe you'll get more useful advice. As it is all I can say is, I haven't had issues with it.
EDIT: Given your response below I'll just say that you can certainly go with a custom replication that uses SSIS or some other method, but there are definitely shops out there using replication successfully in a production environment.
Well, we've had the error occur twice, which was a real pain to fix:
The insert failed. It conflicted with an identity range check constraint in database 'egScheduler', replicated table 'dbo.tblServiceEvent', column 'serviceEventID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.
When we tried running the stored procedure, it messed with the identities, so now when we try to synchronize it throws the following error in the replication monitor.
The row operation cannot be reapplied due to an integrity violation. Check the Publication filter. [,,,Table,Operation,RowGuid] (Source: MSSQLServer, Error number: 28549)
We've also had a few issues where snapshots became invalid, but these were relatively easy to fix. However, all this is making me wonder whether replication is the best method for what we're trying to do here or whether there's an easier method. This is what prompted my original question.
We're working on a similar situation, but ours involves programming a tool that works in a disconnected model and runs on the Windows desktop... We're using SQL Server Compact Edition for the clients and Microsoft SQL Server 2005 with a web service for the server solution.
To enable synchronization, we initially started by building our own synchronization framework, but after many issues keeping that framework in sync with the rest of the system, we opted to go with the Microsoft Sync Framework (http://msdn.microsoft.com/en-us/sync/default.aspx for reference). Our initial requirement was to make the application as easy to use as installing other packages like Intuit QuickBooks, and I think we have largely succeeded.
The Sync Framework from Microsoft has its ups and downs, but the worst thing I can say at this point is that the documentation is horrendous (a conflict-handling sketch follows below).
We're in discussions now to decide whether or not to continue using it or to go back to maintaining our own synchronization subsystem. YMMV on it, but for us, it was a quick fix to the issue.
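One of those under-documented corners is conflict handling. Here is a rough sketch (scope name and resolution policy are assumptions) of hooking the relational provider's ApplyChangeFailed event so conflicting rows get an explicit policy and are at least logged, rather than being left to the defaults:

    using System;
    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class ConflictPolicyExample
    {
        static void Main()
        {
            using (var serverConn = new SqlConnection(
                @"Data Source=HQ;Initial Catalog=Sales;Integrated Security=SSPI"))
            {
                var serverProvider = new SqlSyncProvider("OrdersScope", serverConn);

                // Raised once per row that conflicts or fails to apply on this side.
                serverProvider.ApplyChangeFailed += (sender, e) =>
                {
                    if (e.Conflict.Type == DbConflictType.LocalUpdateRemoteUpdate)
                    {
                        // Assumed policy: the incoming change wins on concurrent updates.
                        e.Action = ApplyAction.RetryWithForceWrite;
                    }
                    else
                    {
                        // Everything else: record it for an administrator and retry later.
                        Console.Error.WriteLine("Sync conflict {0}: {1}", e.Conflict.Type, e.Error);
                        e.Action = ApplyAction.RetryNextSync;
                    }
                };

                // ... plug serverProvider into a SyncOrchestrator and call Synchronize(),
                // as in the two-tier sketch earlier in this thread.
            }
        }
    }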
You're definitely pushing the stability envelope for CE, aren't you?
When I've done this, I've found it necessary to add a fair amount of conflict tolerance, by thinking of it not so much as synchronization but as simultaneous asynchronous data collection, with intermittent mutual updates and/or refreshes. In particular, I've always avoided using identity columns for anything. If you can strictly adhere to true primary keys based on real (not surrogate) data, it makes things easier. Sometimes a PK comprising SourceUnitNumber and a timestamp works well.
If you can, view the remotely collected data as a simple timestamped, source-ID'd, user-ID'd log of cumulative, chronologically ordered transactions (a rough shape is sketched below). Going the other way, the host provides static validation info which never needs to go back - send back the CRUD transactions instead.
Post back how this turns out. I'm interested in seeing any kind of reliable Microsoft technology that helps with this.
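To illustrate the shape mentioned above (all names here are hypothetical), the remotely collected data becomes an append-only log whose natural key is the source unit plus a timestamp, so there is no identity column to fight over during synchronization:

    using System;

    // Hypothetical row shape for the append-only transaction log described above.
    // The natural composite primary key is (SourceUnitNumber, CollectedAtUtc).
    public sealed class CollectedTransaction
    {
        public int SourceUnitNumber { get; set; }    // which device/site produced the row
        public DateTime CollectedAtUtc { get; set; } // timestamp half of the composite key
        public string UserId { get; set; }           // who entered it
        public string Operation { get; set; }        // the CRUD verb sent back ("C", "U", "D")
        public string Payload { get; set; }          // the collected row data, serialized
    }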
TomH & le dorfier - I think that part of our problem is that we're allowing the customer to insert a large number of rows into one of the replicated tables with an identity field. It's a scheduling application which can automatically generate multiple tasks up to a specified month/year. One of the times it failed was around the time they entered 15,000 rows into the table. We'll look into increasing the identity range.
The Synchronization Framework sounds interesting, but it sounds like it suffers from a similar problem to replication: poor documentation. Trying to find help on replication is a bit of a nightmare, and I'm not sure I want us to move to something with similar issues. I wish M'soft would stop releasing stuff that seems to have the support of beta s'ware!
