I think there must be an easy solution to my problem, but I can't figure it out. We're using NServiceBus in a Windows service and have configured it to use log4net for logging. In code we have this:
SetLoggingLibrary.Log4Net(log4net.Config.XmlConfigurator.Configure);
Configure.With().Log4Net().....
So far so good. The problem is that NServiceBus still creates its own log file, named "logfile", placed in the same folder the application runs in, i.e. among the binaries. In our development and test environments, where we reinstall and restart the service frequently, this soon pollutes the binaries folder with a lot of log files, since a new one is created each day (the old one from the previous date is renamed to, for instance, logfile2012-02-28).
In the service's config file we have these lines:
...
<section name="Logging" type="NServiceBus.Config.Logging, NServiceBus.Core" />
<Logging Threshold="OFF" />
so all the log files are empty, but how do we stop them from being created, or at least have them created in a separate log folder?
Thanks
Christian
Your calls to SetLoggingLibrary and .Log4Net() are conflicting with each other, and probably also with the profiles (if you're using NServiceBus.Host.exe).
Have you looked through the docs?
http://docs.particular.net/nservicebus/logging/
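For illustration, a minimal sketch of keeping only one of the two configuration paths. This is my assumption about the fix, based on the NServiceBus 3.x self-hosting API; verify the ordering against your version:

// Configure log4net yourself and hand it to NServiceBus once,
// then drop .Log4Net() from the fluent chain so the default
// rolling "logfile" appender is never applied.
SetLoggingLibrary.Log4Net(log4net.Config.XmlConfigurator.Configure);

Configure.With()
    .DefaultBuilder()
    // ... the rest of your configuration, without .Log4Net()
    ;

If you are running under NServiceBus.Host.exe, also check which profile is active, since profiles apply their own logging defaults.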
I'm creating an app using Grails that generates multiple files as process output and puts them in folders that will later be accessed over FTP by other people.
Everything works great, except that in production the newly created files are readable only by the user that runs Tomcat, meaning that when somebody connects to the folder using FTP, the files can't be opened because they don't have permission.
Is there any way to set permissions from Grails, or to configure Tomcat so that every output file can be accessed by other users?
This might help. You can also look into executing shell commands but that may not be the best option.
I found out that there is actually a method on the File class that changes the permissions of a file instance. I was trying to use it but noticed it only changed the permissions for the owner; with a slight change to the parameters you can tell it to apply to other users too:
File.setReadable(boolean readable)
File.setReadable(boolean readable, boolean ownerOnly)
So in my case file.setReadable(true, false) did the trick (ownerOnly = false grants read to everyone, not just the owner).
Check out the java.io.File class methods for more info.
I have a TFS build process that drops outputs on a sandbox, which is another server on the same network. In other words, the build agent and sandbox are separate machines. After the outputs are created, a batch script defined within the build template does the following:
Rename existing deployment folder to some prefix + timestamp (IIS can now no longer find the app when users attempt to access it)
Move newly-created outputs to deployment location
The reason why I wanted to rename and move files instead of copying/deleting/overwriting is that the latter takes a lot of time, because we have so many files (over 5500). I'm trying to find a way to complete builds in the shortest amount of time possible to increase developer productivity. I hope to create a Windows service to delete dump folders and drop folder artifacts periodically so the sandbox doesn't fill up.
The problem I'm facing is IIS maintains a handle to the original deployment folder so the batch script cannot rename it. I used Process Explorer to see what process is using the folder. It's w3wp.exe which is a worker process for the application pool my app sits in. I tried killing all w3wp.exe instances before renaming the folder, but this did not work. I then decided to stop the application pool, rename the folder, and start it again. This did not work either.
In either case, Process Explorer showed that there were still open handles to my outputs, except this time the owner wasn't w3wp.exe but something along the lines of an unidentified process. At one point I saw that the owner was System, but killing System's process tree shuts down the server.
Is there any way to properly remove all handles to my deployment folder so the batch script can safely rename it?
Use the Windows Sysinternals tool called Handle v4.0:
https://technet.microsoft.com/en-us/sysinternals/bb896655.aspx
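To sketch the flow (the path, PID, and handle value below are placeholders, and note the caveat further down about force-closing handles):

handle.exe "D:\Sites\MyApp"       -- list open handles under the folder, with owning PID and handle value
handle.exe -p 5312 -c 1A4 -y      -- force-close handle 1A4 in process 5312 without prompting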
There are tools, like Process Explorer, that can find and forcibly close file handles; however, the state and behaviour of the application (both yours and, in this case, IIS) after doing this is undefined. Some won't care, some will error, and others will crash hard.
The correct solution is to allow IIS to cleanly release its locks and clean up after itself, to preserve server stability. If this is not possible, you can either create another site on the same box, or set up a new box with the new content, and move the domain name/IP across to "promote" the new content to production.
I'm trying to run an unattended install of SQL Server Express 2014 in a burn package chain and I keep running into problems so I'm looking for advice.
Right now I'm installing it by running the self-extracting SQLEXPR_x64_ENU.exe with switches, but there are two problems with this method: first, the extract window doesn't appear in front of my custom bootstrapper UI, and second, I have no way to specify the default extraction directory. There is the /X:"C:/Temp" switch, but if I use it then the main Setup.exe isn't run upon extract completion.
I tried to resolve this problem by extracting it and including all the required files as a payload group. This works but the compile time and install time are unacceptably slow due to all the small files it has to extract and verify.
I also tried simply referencing the Setup.exe in the extracted folder, leaving it uncompressed so that the files sit in a subdirectory of the installer's root directory, but this too was giving me some startup problems.
I've contemplated installing it with scripts, but that feels like an ugly approach to the problem and I'm avoiding it like the plague, though I realize it's possible.
I'd be interested to hear how others have handled this and any advice would be greatly appreciated.
We have worked around this issue using our managed bootstrapper application as follows.
SQL Server 2014 Express SP1 resolved the issue the product had with the /qs switch. We can use /qs together with /x to specify the extract folder, and extraction proceeds with no user input.
However, as you noted, this just extracts the files, and doesn't start setup.exe. The good news is that the extracted files are still in the folder specified with the /x switch.
In our managed bootstrapper application, we handle the ExecutePackageComplete event. When the Sql Server package has completed (all it did was extract files), we use System.Diagnostics.Process.Start to run the Sql Server setup.exe.
When Setup completes we delete the extract folder.
This isn't what we thought we would do when we started, but at least it's working.
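For illustration, a minimal sketch of that handler under the WiX 3.x managed bootstrapper API; the class name, package id, extract path, and setup switches are placeholders, not our exact values:

using System.Diagnostics;
using Microsoft.Tools.WindowsInstallerXml.Bootstrapper;

public class MyBA : BootstrapperApplication // placeholder class name
{
    protected override void Run()
    {
        // The Burn "package" only extracts files; chain the real setup after it.
        ExecutePackageComplete += (sender, e) =>
        {
            if (e.PackageId == "SqlExpressExtract" && e.Status >= 0)
            {
                var setup = Process.Start(@"C:\Temp\SETUP.EXE",
                    "/qs /ACTION=Install"); // plus your remaining setup switches
                setup.WaitForExit();
                System.IO.Directory.Delete(@"C:\Temp", true); // remove the extract folder
            }
        };
        Engine.Detect(); // ... rest of the BA startup omitted
    }
}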
More Information:
As you also mentioned, the progress window for the extracting process opens behind the bootstrapper application window.
I'm not sure what in the bootstrapper is bringing its window back on top. Our UI has a progress bar, so perhaps a progress event is firing after the extractor has started.
We used a timer to give the bootstrapper time to process any events, then we enumerate Process.GetProcesses and look for ProcessName containing "extracting sql". When we find it we use SetWindowPos to bring it to the front.
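Roughly, that part looked like the sketch below; the title match and the window-raising flags are assumptions you would adjust for your extractor:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class ExtractorWindow
{
    [DllImport("user32.dll")]
    static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
        int x, int y, int cx, int cy, uint flags);

    const uint SWP_NOMOVE = 0x0002, SWP_NOSIZE = 0x0001;

    // Call this from a timer tick after the extract package starts.
    public static void BringToFront()
    {
        foreach (var p in Process.GetProcesses())
        {
            // Match however the extractor identifies itself; we looked
            // for a process/window mentioning "extracting sql".
            if (p.MainWindowTitle.IndexOf("extracting", StringComparison.OrdinalIgnoreCase) >= 0)
                SetWindowPos(p.MainWindowHandle, IntPtr.Zero, 0, 0, 0, 0,
                    SWP_NOMOVE | SWP_NOSIZE); // IntPtr.Zero = HWND_TOP
        }
    }
}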
OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to the production accounts. For our .NET applications, those are stored in the Machine.config file. In non-.NET applications this is easily duplicated by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config, and the server alias in the hosts file points you to the correct server. This also removes the headache of updating all our config files when we switch DB instances (change hardware, etc.).
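For example, an entry like this in the hosts file on a dev box (the IP and alias are placeholders) lets every environment share the same "server" name:

# C:\Windows\System32\drivers\etc\hosts
10.0.10.5    sqlserver-alias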
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application), and then reference that in the code.
If you absolutely can't do this because it would be a lot of work (the last place I worked had 2000 call center people, so a push like this was much more difficult, but still possible), you can always have an automated build server modify the app.config file for you as the last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make changing something like this in the app.config a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (I've done this one too), which I hated but which worked, is to look up the value off of a mapped drive. Essentially, everyone in the company maps a drive to, say, R:. This is where you keep your production configuration files etc. The prod account people map to one drive location with the production values, and the devs etc. map to another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics, or setting up a build server).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
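A sketch of the startup guard the question describes, built on that query; the connection handling and the "PRODSQL01" name are placeholders:

using System;
using System.Data.SqlClient;

static void AssertNotProduction(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT @@SERVERNAME", conn))
    {
        conn.Open();
        var serverName = (string)cmd.ExecuteScalar();
        // Placeholder: substitute your real production server name.
        if (serverName.Equals("PRODSQL01", StringComparison.OrdinalIgnoreCase))
            throw new InvalidOperationException(
                "This build is pointed at production: " + serverName);
    }
}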
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times where some app behavior is environment dependent, as you alluded to. In order to be able to check between development and production environments, I added the following line to the global /etc/profile/profile.d/custom.sh config (CentOS):
SERVICE_ENV=dev
And in code I have a wrapper method which grabs an environment variable by name and localizes its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs:
    // only display debug messages if an override told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }

    // (assumed) otherwise forward the call to the wrapped logger
    return call_user_func_array(array($this->logger, $method), $params);
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases like the one demonstrated in the snippet. Rather, you should control access to your production databases using DNS. For example, within your development environment the db hostname mydatabase-db would resolve to a local server instead of your actual production server. When you push your code to the production environment, DNS will resolve the hostname correctly, so your code should "just work" without any environment checks.
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms (http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5) that did what I needed it to do less than an hour after I first found it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package is created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
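For a flavor of what the transform files look like, here is a sketch of an App.Debug.config that rewrites a WCF client endpoint address via XDT; the endpoint name and address are placeholders:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.serviceModel>
    <client>
      <!-- placeholder endpoint name and address -->
      <endpoint name="MyServiceEndpoint"
                address="http://dev-server/MyService.svc"
                xdt:Transform="SetAttributes(address)"
                xdt:Locator="Match(name)" />
    </client>
  </system.serviceModel>
</configuration>

When you build the Debug configuration, SlowCheetah applies this transform to App.config, so the deployed config points at the right environment automatically.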
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address
I am in the process of writing an offline-capable smartclient that will have syncing capability back to the main backend when a connection can be made. As a side note, I considered the Microsoft Sync Framework but since I'm really only going one-way I didn't feel it would buy me enough to justify it.
The question I have is related to SQLite vs. SQLCE and ClickOnce deployments. I've dealt with SQLite before (impressive little tool) and I've dealt with ClickOnce, but never together. If I set up an installer for my app via ClickOnce, how do I ensure the local database doesn't get wiped out during upgrades? Is it possible to upgrade the database (table structure, etc. if necessary) as part of the installer? Or is it better to use SQLCE for something like this? I definitely don't want to go the route of installing SQL Express or anything, as the overhead would be far too high for what I am doing.
I can't speak about SQLite, having never deployed it, but I do have some info about SQLCE.
First, you don't have to deploy it as a prerequisite. You can just include the DLLs in your project; you can check this article, which explains how. This gives you fine-grained control over what version is being used, and you don't have to deal with installing it per se.
Second, I don't recommend that you deploy the database as a data file and let ClickOnce manage it. When you change that file, ClickOnce will publish it again and put it in the data directory. Then it will take the previous one and put it in the \pre subfolder, and if you have no code to handle that, your user will lose his data. Even just opening the database file to look at the table structure can change it and trigger a republish, so you might be unpleasantly surprised to get a phone call from your user about the data being gone.
If you need to retain the data between updates, I recommend you move the database to the [LocalApplicationData] folder the first time the application runs, and reference it there. Then if you need to do any updates to the structure, you can do them programmatically and control when they happen. This article explains how to do this and why.
The other advantage to putting the data in LocalApplicationData is that if the user has a problem and has to uninstall and reinstall the application, his data is retained.
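For illustration, a minimal first-run sketch of that move; the folder and file names are placeholders:

using System;
using System.IO;

static string EnsureLocalDatabase()
{
    // Keep the working copy of the database outside the ClickOnce
    // application/data folders so updates never touch it.
    var dataDir = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "MyApp"); // placeholder application folder name
    var dbPath = Path.Combine(dataDir, "data.sdf"); // placeholder file name
    if (!File.Exists(dbPath))
    {
        Directory.CreateDirectory(dataDir);
        // First run: copy the pristine database deployed with the app.
        File.Copy(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "data.sdf"), dbPath);
    }
    return dbPath; // point your connection string here
}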
Regardless of the embedded database you choose, your database file (.sqlite or .sdf) will be part of your project, so you can use the "Build Action" and "Copy to Output Directory" properties of that file to control what happens with it during an install/update.
If you choose "Do not copy", the database file is not copied; if you choose "Copy if newer", it is copied only when you ship a new version of the database file.
You will need to experiment a little, but by using these two properties you can take full control of how and when your database file is deployed or updated...