I would like to know the industry standard, or how you handle the following situation at your end.
I am creating multiple Silverlight projects that get published on different dates. All of these projects use various shared code (common DLLs), and that shared code may be used on the client side or the server side. My question is: if the shared code changes, would you recompile and release all the affected projects, or recompile a project only when you are changing the code that actually uses the shared component?
For now, on the client side, we create an assembly reference folder in each Silverlight project and put the latest required DLLs in it. That way the XAP itself contains all the required files, it does not conflict with other projects, and it works fine. With this approach I do not rebuild any other client-side code just because a common DLL changed. If a common-DLL change is required for multiple projects, we drop the latest copy into all the affected projects, build them, and distribute them.
On the other hand, on the server side (Domain Services using EF), all the service code sits under the bin folder of the web site. So if I make a change to a common DLL, not only do I need to publish the latest common DLL for the current project to work, I also have to recompile all the other services to use the new DLL.
I would like to know your opinions and suggestions.
Thanks
There are two approaches possible:
Add Common Code to the solution and have a project reference
Get the build process to build to a folder and reference from there
I prefer the first option. I always build and debug against the latest code and do not have to worry about stale references. I have used the second approach in the past, and it is messy and can waste your team's time chasing bugs that do not exist (because an old version is being referenced). In fact, I remember Visual Studio sometimes failing to pick up a newer version even when it was available.
Another alternative for your Silverlight projects would be to use MEF to dynamically download a XAP file containing the common libraries. Then, if the common libraries change, you could publish an updated "CommonLibraries.xap", and your Silverlight clients can pick up the refreshed libraries independently of the rest of the Silverlight application.
You could follow the same approach with other projects that use these common libraries. The applications could dynamically load the common libraries so that the common libraries can be refreshed independently.
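A minimal sketch of that idea, using MEF's DeploymentCatalog in Silverlight (the XAP name, the error handling, and the initialization point are assumptions for illustration, not your actual code):

    // Sketch only: download a shared XAP with MEF's DeploymentCatalog and add its
    // parts to the composition container. "CommonLibraries.xap" is an assumed name.
    using System.ComponentModel.Composition.Hosting;

    public static class CommonLibrariesLoader
    {
        public static void Initialize()
        {
            var sharedCatalog = new DeploymentCatalog("CommonLibraries.xap");

            sharedCatalog.DownloadCompleted += (sender, e) =>
            {
                if (e.Error != null)
                {
                    // log or surface the download failure here
                }
            };

            // Combine the parts already in the main XAP with the parts from the shared XAP.
            CompositionHost.Initialize(new AggregateCatalog(
                new DeploymentCatalog(),   // current application XAP
                sharedCatalog));           // dynamically downloaded common libraries

            sharedCatalog.DownloadAsync();
        }
    }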
If possible, consider consuming the "common library" code via WCF services.
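A minimal sketch of what that could look like, assuming a hypothetical ICommonCalculations contract that simply delegates to the shared library on the server (all names here are illustrative assumptions):

    // Sketch only: expose shared "common library" logic behind a WCF service so
    // clients call the server instead of carrying their own copy of the DLL.
    // ICommonCalculations and CommonLibrary.Discounts are assumed names.
    using System.ServiceModel;

    [ServiceContract]
    public interface ICommonCalculations
    {
        [OperationContract]
        decimal CalculateDiscount(decimal orderTotal);
    }

    public class CommonCalculationsService : ICommonCalculations
    {
        public decimal CalculateDiscount(decimal orderTotal)
        {
            // delegate to the shared-library implementation that lives only on the server
            return CommonLibrary.Discounts.Calculate(orderTotal);
        }
    }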
I have started to take an interest in the monorepo approach, and Nx in particular. Almost every article says that a monorepo solves the problem of incompatible library versions, and I don't quite understand how. I have a few questions:
If I understood correctly, the idea of a monorepo (in terms of shared code) is that all shared code is always at the same version and all changes happen in one atomic commit (as the monorepo advertising states). So let's imagine a monorepo with 100 projects, all of which depend on libA in the same repo. If I change something in libA, then I have to check the change against every dependent project. Moreover, I have to wait for all the code owners to review my changes. So what are the pros?
Let's imagine I have a monorepo with the following projects: appA, libC, libD, and some third-party library, let's call it third-party-lib. appA depends on libC and libD. At some point appA needs third-party-lib-v3, BUT libC depends on third-party-lib-v1. https://monorepo.tools/#code-generation states: "One version of everything. No need to worry about incompatibilities because of projects depending on conflicting versions of third party libraries." But that is not the case here. In the JavaScript world it results in two different versions of third-party-lib in different node_modules folders. Again, what are the pros?
I may be very naive in my questions because I have never run into problems with libraries, and I have only just started learning about monorepos, so I would be glad if someone could help me sort this out.
Having worked with shared code in a non-monorepo environment, I can say that managing internal packages without a monorepo tool like NX requires discipline and can be more time consuming.
In your example of 100 projects using 1 library, all 100 projects should be tested and deployed with the new version of the code. The difference is when.
In separate repos, you would publish the new version of your package, with all the code reviews and unit testing that go along with it. Next you would update the package version in all your 100 apps, probably one by one. You would test them, get code reviews, and then deploy them.
Now, what if you found an issue with your new changes in one of the apps? Would you roll back to the previous version? If it was in the app then you could fix it in that one app, but if it was in the library, would you roll back the version number in all your apps? What if another change was needed in your library?
You could find yourself in a situation where your apps are using different versions of your library, and you can't push out new versions because you can't get some of your apps working with the previous version. Multiply that across many shared libraries and you have an administrative nightmare.
In a mono-repo, the pain is the same, but it requires less administrative work. With NX, you know what apps your change is affecting and can test all those apps before you deploy your changes, and deploy them all at once. You don't block other changes going into your library because the changes aren't committed until they are tested everywhere they are used.
It is the same with third party libraries. When you update the version of a library, you test it in all applications that use it before your change is committed. If it doesn't work in one application, you have a choice.
Fix the issue preventing that application from working OR
Don't update the package to the new version
It means that you don't have applications that are 'left behind', and you are forced to keep everything up to date. It does mean that updates can sometimes take so much time that they are difficult to prioritise, but that is the same for multi-repo development.
Finally, I would add that when starting to work with NX you may find yourself creating large, frequently changing libraries that are used by all apps, or putting large amounts of code in the apps themselves. This leads to pain where changes frequently result in deployments of the whole monorepo. I have found that it is better to create app-specific folders that contain libraries used only by that app, and to create shared libraries only when it makes business sense to do so. Examples are:
Services that call APIs and return business domain objects that should not really be changed (changes to these APIs and responses generally result in a V2 of the API and a new NX library could be created to serve that V2 API, leaving the V1 unchanged).
Core, stable atomic UI libraries for each component (again, try not to change the component itself, but create a V2 if it needs to change)
More information on this can be found here: NX applications and libraries.
I am from a Microsoft background where I always used to keep server and client applications in separate projects.
Now I am writing a client-server application with Express as the back end and React as the front end. Since I am a total newbie to these two tools, I would like to know:
What is the general practice?
Keeping the Express (server) code base and the React (client) code base as separate projects, or keeping the server and client code bases together in the same project? I could not think of the pros and cons of either approach.
Your recommendations are welcome!
PS: Please do not mark this question as opinionated; I believe I have a valid reason to ask for recommendations.
I would prefer keeping the server and client as separate projects because that way we can easily manage their dependencies, dev dependencies, and unit test files.
Also, if we need to move to a different front-end framework at a later point, we can do that without disturbing the server.
In my opinion, it's probably best to have separate projects here. But you made me think a little about the "why" for something that seems obvious at first glance, but maybe is not.
My expectation is that a project should be organized mostly one-to-one around building a single type of target, whether that be a website, a mobile app, or a backend service. Projects are usually an expression of all the dependencies needed to build or otherwise output one functioning, standalone software component. Build and testing tools in the software development ecosystem are organized around this convention, as are industry expectations.
Even if you could make the argument that there are advantages to monolithic projects that generate multiple software components, you are going against people's expectations and that creates the need for more learning and communication. So all things being equal, it's better to go with a more popular choice.
Other common disadvantages of monolithic projects:
greater tendency for design to become tightly coupled and brittle
longer build times (if using one "build everything" script)
takes longer to figure out what the heck all this code in the project is!
It's also quite possible to make macro-projects that work with multiple sub-projects, and in a way have the benefits of both approaches. This is basically just some kind of build script that grabs the output of sub-project builds and does something useful with them in a combination, e.g. deploy to a server environment, run automated tests.
Finally, all devs should be equipped with tools that let them hop between discrete projects easily. If there are pains in doing this, it's best to solve them without resorting to a monolithic project structure.
Some examples of practices that help with developing React/Node-based software that relies on multiple projects:
The IDE easily supports editing multiple projects. And not in some cumbersome "one project loaded at a time" way.
Projects are deployed to a repository that can be easily used by npm or yarn to load in software components as dependencies.
Use "npm link" to work with editable local versions of sub-projects all at once. More generally, don't require a full publish and deploy action to have access to sub-projects you are developing along with your main React-based project.
Use automated build systems like Jenkins to handle macro tasks like building projects together, deploying, or running automated tests.
Use versioning scrupulously in package.json. Let each software component have its own version number and follow the semver convention, which indicates when changes may break compatibility.
If you have a single team (developer) working on front and back end software, then set the dependency versions in package.json to always get the latest versions of sub-projects (packages).
If you have separate teams working on the front-end and back-end software, you may want to relax the dependency versions to major version numbers only, using a semver range in package.json. (Basically, you want some protection from breaking changes.)
Suppose you have a basic GUI that must be extensible by plugins that are not known at the time the generate run of the main GUI is done. Contributable plugins may consist of a manifest, resources, localization, some code that is executable in the GUI environment, and can provide custom widgets.
From what I can see at the moment, it could be done by:
Letting a plugin developer build against the ordinary source, generating a part for the plugin, and then manually registering a qx.io.part.Part with the generated parts in the GUI running on the non-developer side.
Just loading a combined source JS for that plugin, containing the resources, and loading them manually via eval.
I'd personally prefer the first option, as it already includes everything that might be used by a plugin. But it uses a method that is marked as internal.
Are there any experiences with that? Are there other, more elegant ways to achieve that?
I'm trying to divide my solution into three configurations:
Development
Testing
Release
Each of the above will have a different publishing location, so users can work with the release version, do their tests in the testing version, and see what is new in the development version. All three versions will be built with different name postfixes and icons and installed on each user's workstation.
For now I get:
"Unable to install this application because an application with the same identity is already installed. To install this application, either modify the manifest version for this application or uninstall the preexisting application."
I can't even install this more than once on a single workstation.
So what can I do to achieve this?
You cannot install the same application multiple times unless you change the deployment identity. The easiest way to do this is by changing the assembly name. This article explains how.
As time has passed, I can now see that the solution was quite close; it just required me to specify my requirements properly first.
So, now I can say that it mostly depends on the number of such configurations:
If the number is limited and low, i.e. live/test/dev, you can have each as a separate project in the solution, like AppLive, AppTest, AppDev. This requires refactoring to move everything common into separate projects, but it makes the code and the releases clearer and easier to manage.
If those configurations are unlimited, or the number is high, then the way to go is to load configurations from a file and pick one from the pool based on custom logic.
Currently I'm using a mix of both, as I want to be able to release test versions earlier than live versions, but my application is also used by multiple branches, and each of them has some unique styling, logos and so on, so this is applied from an embedded XML file, and the proper set is identified based on Active Directory entries.
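As a rough sketch of that embedded-XML approach (the resource name, the element names, and the branch lookup are all hypothetical, not the actual implementation):

    // Sketch only: read branch-specific settings from an XML file embedded in the
    // assembly. "MyApp.Branches.xml" and the <branch> structure are assumed names.
    using System.Linq;
    using System.Reflection;
    using System.Xml.Linq;

    public static class BranchConfig
    {
        public static XElement Load(string branchName)
        {
            var assembly = Assembly.GetExecutingAssembly();
            using (var stream = assembly.GetManifestResourceStream("MyApp.Branches.xml"))
            {
                var doc = XDocument.Load(stream);
                // pick the <branch> element whose name matches, e.g., the user's AD branch
                return doc.Root
                          .Elements("branch")
                          .FirstOrDefault(b => (string)b.Attribute("name") == branchName);
            }
        }
    }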
Currently I have a working Silverlight application that uses .Net RIA Services.
Its structure:
Client-side
Application.Client.UI.dll (XAMLs and basic UI stuff)
Application.Client.BL.dll (contains the link to RIA and most of the business logic)
Server-side
Application.Server.Data.dll (server-side DLL that holds the entity model and its generated domain service)
Application.Server.Web.dll (only the ASP.NET hosting container, which references Application.Server.Data.dll)
I placed most of the business logic on the client side (Application.Client.BL.dll) for a better user experience (fast reactions) and to free up server resources. My challenge now is to reuse this client-side DLL, including its RIA data-access capabilities, in a server-side Windows service. I'm wondering, is that possible at all? Is Application.Client.BL.dll still able to consume the existing RIA service, or does that DLL require the Silverlight runtime to identify/locate its service target, and will it therefore not work anywhere else?
Curious for your answers
You really shouldn't put any business logic on the client; the guys in security and/or architecture will hate you for it ;-). Furthermore, you can't use Silverlight assemblies in ASP.NET or desktop projects and vice versa. If memory serves correctly, Silverlight uses an entirely different CLR altogether.
I encountered similar needs when working with compact framework assemblies I also wanted to compile for the full framework. I'll describe how I would work around this scenario.
If there are any issues referencing the Silverlight assembly, consider building two projects as follows:
Project #1 would be your Silverlight library, and should contain all the source files you want to use on the client.
Project #2 would be your Windows Service. Instead of including source files directly, use "Add Existing Item", find the original source file in project #1, and then (and this is the magic) drop down the Add button and choose "Add as Link" instead.
By including the source file as a link, you retain the ability to maintain your source code in one location, but gain the ability to compile your code for multiple frameworks. As long as the code relies only on assemblies available in both the Silverlight framework and the full .NET Framework, you're money.
Now, regardless of whether you choose a multi-project approach, know that domain context classes have additional constructors that allow you to specify contextual information, such as the URL of the corresponding domain service. I use the following code in one application to construct a domain context for a domain service that provides personnel data:
    var context = new PersonnelDomainContext(
        new Uri(ConfigurationManager.AppSettings["PersonnelServiceUrl"]));
In this case, the URL looks something like:
http://website-url/Services/Hyphenated-Namespace-PersonnelDomainService.svc
Of course, when writing a Windows Service, nothing stops you from referencing the server-side domain service (not context) assembly directly. With the domain service in hand, you can instantiate a service instance without all the additional configuration and without the additional XML payload on the network. There are trade-offs to this approach, such as forfeiting centralized configuration management (for example, connection strings), but depending on your circumstances, you may find them worth it.
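For example, here is a minimal sketch of the direct approach, assuming the service behind PersonnelDomainContext is called PersonnelDomainService and exposes a GetPersonnel query method (both names are assumptions):

    // Sketch only: call the server-side domain service directly from a Windows
    // service, bypassing the client DomainContext and the network hop.
    // PersonnelDomainService and GetPersonnel() are assumed names; substitute your own.
    using System.Linq;

    public class PersonnelExport
    {
        public void Run()
        {
            using (var service = new PersonnelDomainService())
            {
                var people = service.GetPersonnel().ToList();   // query method defined on the service
                foreach (var person in people)
                {
                    // work with the entities directly on the server
                }
            }
        }
    }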
Happy coding!
Have you considered using fork-reuse? Take a look at:
http://sharednow.blogspot.com/2011/05/its-not-just-reuse.html