What is the best VS solution setup for DotNetNuke 4.8 inter-module communication development?
I currently have a solution with multiple Web Application projects in it for my DotNetNuke modules - and each one of those has pages with the controls on them as a test harness. That all worked fine up until the point where I needed the modules to start talking to each other using IModuleCommunicator and IModuleListener - but now that I'm doing inter-module communication, debugging won't work that way anymore.
I'm curious as to how other people handle this - is there a way to have your test pages mock a Nuke environment? Do you test right in a Nuke website? My solution is in source control using VSS, so I don't want to add the full Nuke website as a project in my solution since that would force me to add it to source control - and I'd rather not have a full Nuke site in source control.
I've been able to debug by attaching to the local IIS worker process, but that's kind of a pain. Does anyone have any suggestions as to how to ease the pain of debugging inter-module communication?
Any suggestions would be greatly appreciated.
We tend to test in a development DotNetNuke site, usually just attaching to the IIS worker process for debugging (just because it's quicker than rebuilding with F5).
I think, in general, the more you're making use of what DNN provides, the less you'll be able to test outside of a DNN environment. Since inter-module communication is a DNN-specific process, you can't have complete testing until you let DNN be the one to perform it.
After lots of trial & error, here's what I ended up with - and it seems to work well.
Created a post-build event on the module project to copy the build output to the local Nuke site for debugging. Found under "Properties / Build Events / Post-build event command line": copy "$(TargetDir)$(TargetName).*" "C:\Inetpub\wwwroot\bin" /y
Changed the web settings to start the localhost website by default. Found under "Properties / Web / Servers / Use Custom Web Server" - changed to "http://localhost/"
Created post-build events on the supporting class library projects to copy their files to the local web server as well. I could also have just changed the post-build event on the module project to include the other files, as in the sketch below.
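For illustration, the combined version of that post-build event might look roughly like this (MySupportingLibrary.dll is a placeholder name, not a real project of mine):
rem Copy the module assembly and the supporting library into the site's bin folder
copy "$(TargetDir)$(TargetName).*" "C:\Inetpub\wwwroot\bin" /y
copy "$(TargetDir)MySupportingLibrary.dll" "C:\Inetpub\wwwroot\bin" /y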
Once those settings were in place, pressing F5 to run the project will start the browser and automatically attach to the IIS worker process.
Also, keep in mind that if you are running this on a machine with UAC (Vista, Windows Server 2008, Windows 7), you'll have to run VS as an administrator, since both the copy to wwwroot and attaching to the worker process require elevated privileges.
I'm currently working on a live project. The frontend part of the system is in ReactJS. We are using create-react-app as the starter kit.
We are facing some issues in deploying the application on the live server. Earlier we followed the strategy of pushing the code to the server and then creating the build on it. But we noticed that while the build was being generated, our site became unavailable, which does not seem right. Hence we decided to create the build folder on a developer's local machine and push the build to the server. But now we are receiving a lot of change requests and feature requests, so I'm planning to move to a robust Git branching model. I believe this will create problems with the way we are currently handling our deployment strategy (which is to move the build to production).
It will be really helpful if someone can show us the right direction for handling deployment of ReactJS apps.
You can use Jenkins, which can be configured to trigger a build as soon as code is checked in to a branch in Git. I have not worked with Jenkins myself, but I have seen people use it for exactly this kind of thing.
Jenkins will trigger the build in its own environment (or in a temporary folder while the build is being generated, if Jenkins operates on the server directly), which will produce the output bundle. That way the live code is never removed from the server while the build runs, and you can then patch the new files into the actual folder (which can also be automated using Jenkins).
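A minimal sketch of that build-then-swap idea, assuming a Windows build machine and illustrative paths (on a Linux server the same idea works as a shell script):
rem Build in a working folder that is separate from the live site
cd /d C:\builds\my-react-app
call npm ci || exit /b 1
call npm run build || exit /b 1
rem Only after a successful build, mirror the output into the live folder
robocopy C:\builds\my-react-app\build C:\inetpub\wwwroot\my-react-app /MIR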
While our team is starting work on our first DNN 7 sites, we're running into a small impediment. It would seem that the development cycle for a skin or module is that for every single little change you make, you need to create a new package and upload it to DNN. Our engineers are worried that they'll get caught in a loop of:
Tweak CSS
Create zip for skin
Upload zip to DNN
Go to step 1 until skin is complete
Consider this to also be a metaphor for module development. Is there a better process to develop modules and skins? Should we create the initial skin package, tweak the installed version, and then update the original files?
Edit: It's our intention to keep the installable skin and module files under source control in TFS, and to deploy packages as changes are made.
If you're developing those skins locally, running at a URL like http://dnndev.me/, you can make all the changes you want without having to package/install the skins.
That is also the recommended approach for doing module development.
Here's a tutorial on setting up your local development environment:
http://www.christoc.com/Tutorials/All-Tutorials/aid/1
If you aren't doing local development, then you have to go through the hoops of packaging/deploying or uploading to the web server via FTP/file system.
For modules, you can install the module just one time and then simply recopy the DLLs and the DesktopModules controls after each build. You can write batch files to automate the whole copy/paste process, as in the sketch below.
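A minimal sketch of such a batch file, assuming a module named MyModule and a local site at C:\inetpub\wwwroot\dnndev (both placeholders):
rem Copy the compiled module assembly into the site's bin folder
copy /y bin\MyModule.dll C:\inetpub\wwwroot\dnndev\bin\
rem Copy the user controls and stylesheets into the module's DesktopModules folder
xcopy /y /s /i *.ascx C:\inetpub\wwwroot\dnndev\DesktopModules\MyModule\
xcopy /y /s /i *.css C:\inetpub\wwwroot\dnndev\DesktopModules\MyModule\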
We use UI Automation and NUnit to create UI tests for a WPF application.
We've created tests that work fine when you run them from a local machine. Those tests never run successfully on our build server (using TeamCity); the build always hangs after the application window opens. But if I am logged in to the build server (via Remote Desktop), all UI Automation tests run successfully there as well.
So I am guessing that it probably has something to do with having an active Windows session. Any ideas how to convince our build server to create an active Windows session, or any other solutions for making those tests run on the build server?
You don't have many options. I will list the two I know, the most preferred option first:
Set up a virtual machine on your build server and have your builds execute in the virtual machine. You can then lock the host (i.e. your build server), keeping things secure.
Keep someone logged on all the time. This of course creates a security problem. You can alleviate this problem a little by removing the mouse, keyboard and screen, and only accessing the build server through RDP or something similar.
Edit
Take a look at this TestComplete FAQ item: Can TestComplete execute scripts when the computer is locked?
OK, I'm just guessing here.
Try running the TeamCity service with a local build server user instead of the system account.
Maybe you have to log in with that account once before starting a new build.
It definitely sounds like you need to run your tests in an interactive session as opposed to a service. Adding the "Allow service to interact with desktop" option might help, but apparently this is not supported in Vista any more.
If you can run your builds interactively from a command line, not as a service, that should work too.
We used to run our UI Automation tests using the Visual Studio 2008 load agent to distribute them, running as a command line tool on VMs with no problem.
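For example, with TeamCity you can stop the agent's Windows service and run the agent from an interactive console instead (the service name and install path below are the defaults and may differ on your setup):
rem Stop the build agent's Windows service so it doesn't pick up the builds
net stop TCBuildAgent
rem Launch the agent from an interactive desktop session so tests can show UI
cd /d C:\TeamCity\buildAgent\bin
agent.bat start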
I also agree that you probably shouldn't be running UI tests on a build server as part of your daily build.
"Build always hangs after opening the application window."
Tests that instantiate the UI? That's not going to work, e.g. if you get a modal dialog the build will hang. This is the reason the MVP pattern was invented, to isolate the active presentation code from a concrete view.
Are you using a mock view in your automated tests?
When creating an auto updating feature for a .NET WinForms application, how does it update the DLLs and not affect the currently running application?
Since the application is running during the update process, won't there be a lock on the DLLs (because those DLLs will have to be overwritten during the update)?
Usually you would download the new files into a separate area. Then shut down and restart, and at startup look for and use the new files if found. Always keep a last known working version on the side so that the user can revert to something that definitely works if the download causes problems.
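A minimal sketch of that pattern as an external update script, launched as the application exits (every path and name here is illustrative, and a real updater needs proper error handling):
set APPDIR=C:\Program Files\MyApp
set STAGED=%LOCALAPPDATA%\MyApp\staged
rem Wait until the application has exited and released its file locks
:wait
tasklist | find /i "MyApp.exe" >nul && (timeout /t 1 >nul & goto wait)
rem Keep the current version on the side as the last known working copy
xcopy /y /e /i "%APPDIR%\*" "%APPDIR%.backup"
rem Swap in the files that were downloaded earlier, then relaunch the app
xcopy /y /e /i "%STAGED%\*" "%APPDIR%"
start "" "%APPDIR%\MyApp.exe"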
ClickOnce is a good technology from Microsoft that does this for you and you can use it directly from Visual Studio 2008.
You'll have to shutdown your application and restart it, as other people have already commented.
I wrote an open-source library to do just that transparently - including an external update application to do the actual cold update. See http://www.code972.com/blog/2010/08/nappupdate-application-auto-update-framework-for-dotnet/
The code is at http://github.com/synhershko/NAppUpdate (Licensed under the Apache 2.0 license)
I have a separate 'launcher' application that checks for updates via a web service. If there are updates, it downloads them and then executes my application, which is in a separate assembly.
The other alternatives are using things like ClickOnce, or downloading the files to a separate area and restarting the app, as someone else mentioned.
Be warned about ClickOnce, though - it's not as flexible as it sounds. And if you deploy to a system that requires elevating your program to a higher security level to run, you might run into problems if you don't have a certificate for your app installed. I found it very difficult to get straight answers on the Internet about things like certificate management when it comes to ClickOnce. If you have a complex app, you may want to just roll your own updater, which is what I ended up having to do.
If you publish via ClickOnce, all of that tends to be handled for you. It has its own pros and cons, but it's usually easier than trying to code it all yourself.
Both Wikipedia and 15seconds have decent info on using ClickOnce, how it works, etc.
As others have stated, ClickOnce isn't as flexible as rolling your own solution but it is a LOT less complicated. It has a small learning curve at first, but with pretty much everything bundled into Visual Studio and the use of Wizards, it usually doesn't take long to stumble onto a working solution.
As deployments get more complex (i.e. beyond just having prerequisites or application code that needs updating) and you need to do a lot of post-install or pre-install tasks, there are things like WiX which give you somewhat of a hybrid solution between Windows Installer and ClickOnce, with the cost of that flexibility being a much steeper learning curve.
The only reason I try to avoid custom installers is that you end up spending way too much time trying to get it just right to handle a bunch of different "What If" scenarios...
These days Windows can do such updates automatically for you with App Installer, if your app is packaged as an MSIX package.
It downloads the new version of the app into another folder inside ProgramFiles\WindowsApps, then when the user runs the app via the Start menu, the system knows which folder it should use. The previous version gets deleted when no longer in use.
If you want to know how to package your app this way I collected my findings in this answer.
Our team develops distributed WinForms apps. We use ClickOnce for deployment and are very pleased with it.
However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separate from one another. Also, we want control over what assemblies get deployed. Just because an assembly was compiled doesn't mean we want it deployed.
The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK but creating deployments is rather tedious and we don't want every developer having our code signing certificate and password.
What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files.
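For illustration, the kind of Mage calls involved look roughly like this (the app name, version, certificate, and URL are placeholders, not our real values):
rem Generate and sign the application manifest from the build output
mage -New Application -ToFile MyApp.exe.manifest -Name "MyApp" -Version 1.0.0.0 -FromDirectory bin\Release
mage -Sign MyApp.exe.manifest -CertFile cert.pfx -Password %CERT_PASSWORD%
rem Generate and sign the deployment manifest that points at it
mage -New Deployment -ToFile MyApp.application -Name "MyApp" -Version 1.0.0.0 -AppManifest MyApp.exe.manifest -ProviderUrl "http://deploy.example.com/MyApp.application"
mage -Sign MyApp.application -CertFile cert.pfx -Password %CERT_PASSWORD%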
I'm satisfied with our current solution, but it seems like there would be an industry-wide, accepted approach to this problem. Is there?
I would look at using MSBuild. It has built-in tasks for handling ClickOnce deployments. I've included some references which will help you get started, if you want to go down this path. It is what I use and I have found it to fit my needs. With a good build process using MSBuild, you should be able to squash the pains you have felt.
Here is a detailed post on how ClickOnce manifest generation works with MSBuild.
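As a rough illustration, a ClickOnce publish can be driven entirely from the command line like this (project name, version, and URL are placeholders):
rem Build and publish the ClickOnce package for one environment
msbuild MyApp.csproj /target:publish /p:Configuration=Release /p:ApplicationVersion=1.0.0.5 /p:InstallUrl=http://test.example.com/MyApp/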
I've used NAnt to run the overall build strategy, but pass parameters into MSBuild to compile and create the deployment package.
Basically, NAnt calls into MSBuild for each environment you need to deploy to, and generates a separate deployment output for each. You end up with a folder containing all the ClickOnce files you need for every environment, which you can just copy out to the server.
This is how we handled multiple production environments as well -- we had separate instances of our application for the US, Canada, and Europe, so each build would end up creating nine deployments, three each for dev, qa, and prod.
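A sketch of that per-environment fan-out as a plain batch script, in case you'd rather not bring in NAnt (the environment names, project, and properties are illustrative):
rem Publish one ClickOnce deployment per environment into its own output folder
for %%E in (dev qa prod) do (
msbuild MyApp.csproj /target:publish /p:Configuration=Release /p:OutDir=..\deployments\%%E\ /p:InstallUrl=http://deploy.example.com/%%E/
)
rem Each environment's ClickOnce files end up under deployments\<env>\app.publish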