SSDT Circular reference: Complex project

I have a fairly complex setup with eight databases on one server, with almost every database referencing the others, giving rise to quite a complex web of dependencies. The design is far from ideal, but unfortunately it is something we have to work with.
We need to create an SSDT solution to facilitate CI/CD.
The whole project needs to be deployed from scratch on a new instance, and I am trying to get my head around this, as I have limited SSDT knowledge for a project of this scale.
The approaches I am considering are as follows:
1) Split common objects out into shared projects and reference those. This seems to be a nightmare to implement, as the complex web of references would force us into multiple layers (shared objects referencing other shared objects). Also, how would we deploy such a project to a blank server?
2) Create stubs for each object that is referenced by other projects, and add database references to the stub projects; see the sketch below. This seems to be the easiest option, although if an object a stub is based on changes, the stub also needs to be maintained or the project will break. Is that the right assumption?
3) Only create stubs for the objects required to compile (e.g. tables referenced by views in other databases), and ignore the remaining reference warnings. I am leaning towards this route, as the stubs will be much smaller and the project easier to maintain, but I hate to ignore reference warnings.
If we deploy using the stubs option, do we need to deploy the stubs first and then delete them after a successful deployment?
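To illustrate what I mean by a stub (hypothetical object): it would be just enough of the object's shape for cross-database references to compile, something like:

    -- Hypothetical stub of another database's Customer table, kept in the
    -- stub project that stands in for that database. Only the columns that
    -- referencing objects actually use need to match the real table.
    CREATE TABLE [dbo].[Customer]
    (
        [CustomerId] INT           NOT NULL PRIMARY KEY,
        [Name]       NVARCHAR(100) NOT NULL
    );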
Another, more straightforward, question: what is the best way to deploy logins, users and object permissions?
Thanks for replying.

The question is too broad, but here are a few suggestions:
You can't really do anything with a circular reference. There are ways to work around it, but all of them are hacky and will most probably introduce more problems than they solve, so try to move objects around until there are only one-way dependencies;
Use synonyms for ALL cross-database objects, so that there are no direct references outside the database;
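For example, a sketch with hypothetical names, assuming a database reference that defines an $(OtherDb) SQLCMD variable:

    -- All external access goes through a local synonym, so if the other
    -- database is moved or renamed, only the synonym definition changes.
    CREATE SYNONYM [dbo].[OtherDb_Customer]
        FOR [$(OtherDb)].[dbo].[Customer];
    GO

    -- Local code references the synonym, never the other database directly:
    SELECT [CustomerId], [Name] FROM [dbo].[OtherDb_Customer];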
I agree with Peter Schott that it is better to ignore logins and users for now, as handling them in SSDT is a bit of a pain and you need good SSDT expertise to make it work properly.
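If you do eventually have to script them, one approach (a sketch only, with hypothetical names) is an idempotent post-deployment script, since logins are server-scoped while users and permissions are database-scoped:

    -- Post-deployment script sketch: create the login and user only if
    -- missing, then (re)grant permissions. Replace the placeholder password.
    IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'app_login')
        CREATE LOGIN [app_login] WITH PASSWORD = N'<placeholder>';

    IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'app_user')
        CREATE USER [app_user] FOR LOGIN [app_login];

    GRANT SELECT, EXECUTE ON SCHEMA::[dbo] TO [app_user];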

Related

Organizing Apex Classes under Namespaces

Is there any way in Salesforce to group Apex classes under a package or namespace? Can we use a managed package for internal organization purposes?
This is a limitation in the force.com stack that makes medium-large size projects painful, if not impractical. Using managed packages in order to get a package prefix doesn't really solve any problems, so it's not really worth the trouble.
I usually try to organize a project into one flat level of namespaces. In lieu of actual namespaces, I'll give each would-be namespace a 3-5 character name to be used as a prefix. Any class that belongs in the "namespace" gets the prefix. E.g., if I need a payroll namespace, I'd use a PYRL prefix, and a class called PaycheckCalculator becomes PYRL_PaycheckCalculator.
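A minimal sketch of the convention (hypothetical class):

    // The PYRL_ prefix stands in for a real "payroll" namespace; every
    // class in that area carries the prefix so related classes sort together.
    public with sharing class PYRL_PaycheckCalculator {
        public static Decimal calculateNetPay(Decimal gross, Decimal taxRate) {
            return gross * (1 - taxRate);
        }
    }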
The practical advantage of this kind of convention is that it helps prevent name clashes, and classes are grouped by their "namespace" when viewed in a sorted list, such as in an IDE or under Setup > Develop > Apex Classes.
Unfortunately, several basic OO principles are still fundamentally broken. Probably the most important one is that every class forms an implicit dependency on every other class it has visibility to, which is all of them.
I'd love to hear how others have worked around this limitation.
Well, you can use managed packages, but as Jeremy mentioned it doesn't really buy you much. Of course managed packages are essential for developing publicly listed apps to sell on the AppExchange. But internally it's really an org-wide problem since once you create a managed package with a prefix, everything that touches any other part of it gets stamped with the same namespace prefix, including all custom objects. And worse, you can't access code in a managed package from outside the managed package (which is actually the whole point of them in the first place).
Although it's not the prettiest solution, what I personally do is maintain numerous named orgs with different purposes, applications and utility classes. When I need a utility class in one org, say I'm building a new app destined for the AppExchange, I'll do an Eclipse Export/Import from the utility org in question. It definitely seems strange but having a library of orgs is the best way I've managed to keep track of everything and to manage "internal" organization. But the end result is really just a glorified copy-paste operation between arbitrary code stores.
I faced similar challenges while working on big projects and wrote this blog post some time back to share the approach I am following now: http://www.tgerm.com/2011/11/apex-class-naming-convention-suggestion.html

Tool to aid/assist in refactoring force.com code base (renaming custom objects)

We need to rename about 15 custom objects in a force.com org.
In Java this would be a right-click and about 20 minutes' work, but given the number of SOQL queries, classes, pages, profiles etc. that use these objects, we're looking at a week, two weeks... or more.
So, ideally, we're looking for a refactoring tool which will help us rename these objects and resolve any interdependencies.
The Force.com IDE, naturally, doesn't support this. Any ideas/tools/approaches?
We did that with two objects and it was a royal pain; I can imagine 15 poses quite a challenge. As you have noticed by now, Salesforce constructs are highly interdependent, with cross references, and even circular ones, being legitimate. This, on the other hand, makes tear-down and core modification very difficult and virtually impossible to automate.
What you can do is the following:
1) Use sandbox for modifications; do an inventory of all constructs using the affected objects. You can use Ctrl-H to search the entire workspace in the IDE.
2) On sandbox, clone those 15 objects into their respective future names; they'll be empty, but who cares on sandbox.
3) Now that you have the objects in place, rename all mentions in all constructs from #1 to use the new objects.
4) Just to make sure, try to delete the old objects from sandbox; this will serve as a sanity check that you didn't miss any dependency.
5) Off work hours, delete the entire inventory of #1 from the production server, leaving just bare objects with their data.
6) Now that the dependencies are gone, rename all 15 objects.
7) In one session, deploy the entire modified inventory from sandbox to production; since the payload now uses the new object names, the tests should pass.
I don't think all of this should take you more than a day.

I'm unsure as to what is the set-in-stone way to access databases

I have a good deal of experience programming with VB6, VB.NET, C# and so on, and have used ADO, then SubSonic, and now I am learning NHibernate, since most of the prospective jobs I can go for use NHibernate.
The thing is, I have been programming based on what I have been taught, read or come to understand as best practice. Recently, someone threw a spanner in the works and had me thinking. Up until now, I have been accessing the database(s) from both the core application and attached DLLs that I write.
What this person said is as follows, hence my question:
I can tell you that you wouldn't normally want to do this - an external class library shouldn't have access to the database
What I was trying to do was have a shared/static class for NHibernate sessions that could be consumed both in the global scope of the app and from any DLL. This class was to live in a "core" DLL which all DLLs and the application reference. Like I said, I'm learning NHibernate, so this may not be the right way.
To say I'm questioning my database access methods is putting it lightly.
Can anyone put me straight on this?
Edit:
I suppose, looking at what has been commented already, that it depends on how the database is being accessed. I would never tend to hardcode username/password credentials etc. in any DLL, for any reason.
More specifically, my query is in relation to NHibernate's sessions. I have a static helper class which is called at application start; a new session is then created and attached to the current context (in the case of web applications), and whenever I need the session I call GetCurrentSession. This static class is in the "core" DLL and can be accessed by any DLL that references it. This behaviour is intended. My only question is: is this OK, or should I be doing it another way?
A couple of reasons would be:
1) Access to the database: how do you cover off the username/password?
2) Sharing the DLL: a "bad" application may get hold of your DLL and link with it to get access to your database.
That said, if you have proper security on files etc., then I would have thought using a DLL would probably be a reasonable way to go.
Assuming that the username and password are not stored directly in the DLL (but maybe passed as parameters, or passed as a complete connection object) this isn't so bad.
The possible bad practice here might be accessing the same database for the same purpose from different places - the core app and a DLL. This could quickly get confusing for a new developer unless the separation is clear and logical.
Myself, I might try to move ALL (or almost all) data access to a DLL just for that purpose, then have the serious application logic (or as much as possible) in the core app or yet another DLL.

Version Controlling a Database that is used by multiple projects

I'm currently working on a project that has several 'Visual Studio Solutions'. One is for the main application and the others are component-based projects which will be reused in other applications.
The problem is that all three solutions need to access data from the same database. Each component has its own set of views, functions and sprocs but the schema differs in places (one component may require a field that another component doesn't).
Basically, I don't want to have one solution break because of a change that I've made in another one.
The way I see it I have two options:
Create a new project that is referenced in all solutions that purely contains database scripts
Manage the Schema in the main application solution and the views, functions and sprocs in the other solutions (as appropriate) and be very, very careful when I do a build
Any suggestions would be greatly appreciated.
Thanks in advance,
Jason
Just thought I'd post my solution here just in case anyone else comes across the same problem...
I went with option one in the end. I created a new Database Project which is referenced in all 3 solutions. Not the prettiest solution in the world, but it works.
Thanks again to pranay for responding.

How to merge Drupal database changes

We currently use an SVN repository to keep everyone's local environments up to date. However, Drupal website development is somewhat trickier, in that any custom code you write (for instance, PHP code written for a node body) is stored in the DB, and those changes aren't recognized by the SVN working copy.
There are a couple of developers presently working on the same area of a Drupal site, but we're uncertain how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best, and is most likely inefficient and error-prone for this purpose.
Any suggestions on how to approach this issue are appreciated!
Unfortunately, database deployment/update is one of Drupal's weak spots. See this question and its answers, as well as this one, for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid doing this. However, if for some reason you absolutely have to, you can try to reduce the code to a simple function call. That way you'd have the function itself in a module, and that module would be tracked via SVN. But then you are only a small step from removing the need for the inline code anyway...
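A sketch of what I mean (hypothetical module and function names): the node body shrinks to a one-line call such as print mymodule_payroll_summary($node); while the logic itself lives in a module file that SVN tracks:

    <?php
    // mymodule.module (hypothetical): the code formerly pasted into the
    // node body lives here, where it is version-controlled and mergeable.
    function mymodule_payroll_summary($node) {
      // Whatever the inline snippet used to compute goes here instead.
      return t('Summary for @title', array('@title' => $node->title));
    }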
If you are putting PHP code into your database, then you are doing it wrong. Some things do live in the database, like Views and CCK fields, plus some settings. But if you put PHP code inside a node body, you are creating a big code maintenance problem. You should really use the API and hooks instead: create modules instead of ugly hacks with eval() etc.
All that has been said above is true and good advice. To answer your practical question, though, there are a number of recent modules that you could use to transport the changes made by the various developers.
The Features module is a cure for the described issue of Drupal providing nice features while storing lots of configuration and structure in the DB. It enables you to capture a feature and output it as a pseudo-module (it qualifies as a module, with .info and code files and all). Here is how it works:
1) Select the functionality/feature to export.
2) The module analyses the modules, files and DB content that are required to rebuild that feature elsewhere.
3) The module creates a pseudo-module that contains the instructions from #2 and outputs everything (even the SQL to rebuild the stuff in the DB) into a module package (it also sets dependencies on other required modules).
4) Install the pseudo-module on your new site and enable it.
5) The pseudo-module replicates the feature you exported, rebuilding the DB data and all.
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
By committing patches of database dumps, do you mean taking an entire extract of the DB and committing it after each change?
How about a master copy of the database? Extract all tables, views, SPs etc. into individual files, put them into SVN, and do your merge edits on the individual objects.
