I have an x86 application running on Windows 10 (a 64-bit environment).
One of the app's features is generating a lot of reports, so there is a lot of printing involved.
However, I noticed that every time I call DefaultPrintTicket on a print queue, the dllhost.exe process (COM Surrogate) grows in memory.
I managed to isolate the code responsible and moved it to a test WPF app. When a button is clicked, this code is fired:
var localPrintServer = new LocalPrintServer();
var oneNotePrintQueue = localPrintServer.GetPrintQueues().FirstOrDefault(p => p.Description.Contains("OneNote"));
var printTicket = oneNotePrintQueue?.DefaultPrintTicket;
The particular print queue is irrelevant; I tried them all and the problem remains.
I am aware that this might be a duplicate of: PrintTicket DllHost.exe Memory Climbs
However, the solution provided there does not work, as PrintTicket is not an IDisposable object.
I also tried some tweaks in the registry (e.g. finding AppId AA0B85DA-FDDF-4272-8D1D-FF9B966D75B0 and removing "AccessPermission", "LaunchPermission" and "RunAs") with no result.
I cannot rebuild the app as AnyCPU, and I would like to avoid creating a separate 64-bit process just for printing, as it would be difficult to send a report generated in one app to the other.
Any suggestions are greatly appreciated.
It seems that the topic is difficult.
Just want to share the solution I went with in case anyone else has the same problem.
In the end I created a separate x64 app which handles the printing.
Initially I wanted to go with a WCF service, but I ran into problems serializing FixedDocuments and PrintQueue objects. Hence the separate app.
The solution is far from perfect and in my opinion not nice at all, but it solved the memory leak issue.
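The gist, as a rough sketch (the file names, arguments, and the XPS hand-off here are illustrative, not my exact code):

// x86 app side (needs System.IO, System.Diagnostics, System.Linq):
// write the report to a temp file and hand it to the x64 helper.
var reportPath = Path.Combine(Path.GetTempPath(), "report.xps");
// ... save the FixedDocument to reportPath as an XPS file ...
using (var helper = Process.Start(new ProcessStartInfo
{
    FileName = "PrintHelper64.exe",
    Arguments = $"\"{reportPath}\" \"My Printer\"",
    UseShellExecute = false
}))
{
    helper.WaitForExit(); // block until the helper has queued the print job
}

// x64 helper side (needs System.Printing): all PrintQueue/PrintTicket work
// stays here, so any COM Surrogate memory growth dies with this short-lived process.
static void Main(string[] args)
{
    var queue = new LocalPrintServer().GetPrintQueues()
        .FirstOrDefault(q => q.FullName == args[1]);
    queue?.AddJob("Report", args[0], false); // queue the XPS file for printing
}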
I am trying to run a simple test on multiple browsers; here is a mock-up of the code I've got:
String url = "http://www.anyURL.com";
WebDriver[] drivers = { new FirefoxDriver(), new InternetExplorerDriver(),
        new ChromeDriver() };

@Test
public void testTitle() {
    for (int i = 0; i < drivers.length; i++) {
        // navigate to the desired url
        drivers[i].get(url);
        // assert that the page title starts with foo
        assertTrue(drivers[i].getTitle().startsWith("foo"));
        // close current browser session
        drivers[i].quit();
    } // end for
} // end test
For some reason this code opens multiple browsers seemingly before the first iteration of the loop has completed.
What is actually happening here? And what is a good/better way to do this?
Please understand that I am by no means a professional programmer, and I am also brand new to using Selenium, so if what I am attempting is generally bad practice please let me know, but please don't be rude about it. I will respect your opinion much more if you are respectful in your answers.
No, it's not bad practice. As for what is actually happening: drivers is a field, so all three driver constructors run as soon as the test class is instantiated, which is why every browser opens before the loop body ever executes.
In fact, most test frameworks have convenient ways to handle sequential/parallel execution of tests. You can parameterize a test class to run the same tests on multiple browsers. TestNG has a Parameters annotation which can be used with testng.xml for cross-browser testing without duplicating the code. An example is shown here.
I would not do that.
Most of the time it is pointless to immediately run your test against multiple browsers. Most of the problems you run into as you develop new code or change old code are not due to browser incompatibilities. Sure, those happen, but most of the time a test will fail because, well, your logic is wrong, and it will fail not just on one browser but on all of them. What do you gain from being told X times rather than just once that your code is buggy? You've just wasted your time. I typically get the code working on Chrome and then run it against the other browsers.
(By the way, I run my tests against about 10 different combinations of OS, browser and browser version. 3 combinations is definitely not good enough for good coverage. IE 11 does not behave the same as IE 10, for instance. I know from experience.)
Moreover, the interleaving of tests from multiple browsers just seems generally confusing to me. I like one test report to cover only one configuration (OS, browser, browser version), so that if there are any problems I know exactly which configuration is problematic without having to untangle what failed on which browser.
I am working on a project where we were asked to "patch" (they don't want a lot of time spent on development as they soon will replace the system) a system implemented under ExtJS 4.1.0.
That system is used over a very slow and unstable network connection, so sometimes the stores don't get the expected data.
The first two things that come to my mind as patches are:
1. Every time a store is loaded for the first time, wait 5 seconds and try again. Most times, a page refresh fixes the problem of stores not loading.
2. Somehow detect that no data was received after loading a store and try to get it again.
These patches should be executed only once, to avoid infinite loops or unnecessary recursion, given that it's OK for a store to sometimes get no data back.
I don't like this kind of solution, but it was requested by the client.
This link should help with your question.
One of the posters suggests adding the code below in an overrides.js file which is loaded between the ExtJS source code and your application's code.
Ext.util.Observable.observe(Ext.data.Connection); // make the Connection class observable
Ext.data.Connection.on('requestexception', function (dataconn, response, options) {
    // fires for every failed AJAX request in the application
    if (response.responseText != null) {
        window.document.body.innerHTML = response.responseText;
    }
});
Using this example, instead of echoing the error you could log the error details for debugging later and try the load again. I would suggest adding some additional logic so that it only retries a certain number of times; otherwise it could run indefinitely while the browser window is open, which would more than likely crash the browser and put additional load on your server.
Obviously the root cause of the issue is not the code itself, rather your slow connection. I'd try to address this issue rather than any other.
I've recently been asked by my supervisor to prepare a solution in which multiple pieces of logic throughout our application can be reverted to an earlier version of the code while the application is live. In effect, I need to prepare something like a flag or an indicator that can be dynamically activated to switch all instances of code in our application from the new version back to the old one.
The new logic was prepared by a new member of our team and we are concerned about memory leaks that will emerge once the code goes to production, and we want a solution in place that will allow us to turn those changes off and return to the original code if necessary.
if (new_code == ON)
{
    // new logic
}
else
{
    // old logic
}
This project was originally meant to help get rid of build and compile warnings during our build process, so it affects code ranging from function arguments to variable declarations; there's no single type of code that will be affected. We are running on a Tuxedo stack, but implementing a Tuxedo config file to effect this change isn't recommended according to one of our senior developers. I'm not aware of a similar solution, though.
Any ideas? Thanks!
Would it work? Sure. Is it a good idea? No. You now have the risk of the new code, plus the risk of bugs in your switch code, plus the risk of what happens if you switch from one to the other mid-run. You shouldn't be doing this; it's far more likely to cause trouble than just deploying the changes directly.
What you should do: if you're really concerned about it, don't deploy it. Put it through additional testing until you're comfortable with it. Then, when you do deploy it, have a plan to roll back to a previous version without these changes if something slips through testing.
Call the functions through function pointers, and make an API that switches each pointer between the old and new implementation depending on your need.
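As a rough illustration of that idea (sketched here in C#, where a delegate plays the role of the function pointer; all names are invented):

public static class LogicSwitch
{
    // every call site invokes the logic through this delegate
    public static Func<int, int> Compute = ComputeNew;

    static int ComputeNew(int x) { /* new logic */ return x * 2; }
    static int ComputeOld(int x) { /* old logic */ return x + x; }

    // the "API": flip every call site back in one place, while live
    public static void UseOldLogic() { Compute = ComputeOld; }
    public static void UseNewLogic() { Compute = ComputeNew; }
}

Call sites then use LogicSwitch.Compute(value) instead of calling either implementation directly.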
I have a WPF application with a login window shown before the main window is displayed.
I use MEF to load all modules/parts. Before the main window starts, I check the user's login data against the parts which I then display. The parts are Shared and NonShared.
[ImportMany]
private IEnumerable<Lazy<IComponent, IComponentMetadata>> _components;
[ImportMany("Resourcen", typeof(ResourceDictionary))]
private IEnumerable<ResourceDictionary> _importResourcen;
var catalog = new AggregateCatalog();
catalog.Catalogs.Add(new AssemblyCatalog(Assembly.GetExecutingAssembly()));
catalog.Catalogs.Add(new DirectoryCatalog(AppDomain.CurrentDomain.BaseDirectory));
_mefcontainer = new CompositionContainer(catalog);
_mefcontainer.ComposeParts(somepartwithaSharedExport, this);
This all works fine, but now I have tried a "relogin":
_mefcontainer.Dispose();
_mefcontainer = null;
//here the stuff that works from above
At first I thought it worked, but it seems that the parts I created the first time still exist in memory and I have no chance to "kill" them, so I get an OutOfMemoryException when I relogin enough times.
That is why I use this approach now:
System.Diagnostics.Process.Start(Application.ResourceAssembly.Location);
Application.Current.Shutdown();
I don't feel happy with this.
Is there a way to clean up the CompositionContainer and create a new one?
You could try to call _mefcontainer.RemovePart(somepartwithaSharedExport). More details here: http://mef.codeplex.com/wikipage?title=Parts%20Lifetime
For the non-shared part you can call CompositionContainer.ReleaseExport:
_mefcontainer.ReleaseExport(nonSharedExport);
For more info, try the sample code from this answer.
As far as I know, the shared parts cannot be released without disposing the container. If you go with that path, then you will also have to make sure that no references to these objects are kept to allow for the GC to collect them. The documentation reference from mrtig's answer provides a lot of useful details concerning the lifetime of parts and you should probably study it along with the answer by weshaggard to a similar question. It also explains what happens to disposable parts.
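Putting those pieces together, a minimal sketch of the cleanup using the fields from the question (the loop and ordering are illustrative, not a verified recipe):

// release each non-shared export that was actually created
foreach (var component in _components)
{
    if (component.IsValueCreated)
        _mefcontainer.ReleaseExport(component); // disposes NonShared disposable parts
}

// drop our own references so the GC can collect the parts
_components = null;
_importResourcen = null;

// shared parts live as long as the container, so dispose it last
_mefcontainer.Dispose();
_mefcontainer = null;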
For our senior design project my group is making a Silverlight application that utilizes graph theory concepts and stores the data in a database on the back end. We have a situation where we add a link between two nodes in the graph and upon doing so we run analysis to re-categorize our clusters of nodes. The problem is that this re-categorization is quite complex and involves multiple queries and updates to the database so if multiple instances of it run at once it quickly garbles data and breaks (by trying to re-insert already used primary keys). Essentially it's not thread safe, and we're trying to make it safe, and that's where we're failing and need help :).
The create link function looks like this:
private Semaphore dblock = new Semaphore(1, 1);
// This function is on our service reference and gets called
// by the client code.
public int addNeed(int nodeOne, int nodeTwo)
{
dblock.WaitOne();
submitNewNeed(createNewNeed(nodeOne, nodeTwo));
verifyClusters(nodeOne, nodeTwo);
dblock.Release();
return 0;
}
private void verifyClusters(int nodeOne, int nodeTwo)
{
// Run analysis of nodeOne and nodeTwo in graph
}
All calls to addNeed should wait for the first one that comes in to finish before another can execute, but instead they all seem to run at once and conflict with each other in the verifyClusters method. One solution would be to force our front-end calls to be made synchronously, and in fact when we do that everything works fine, so the code logic isn't broken. But once launched, our application will be deployed within a business setting and used by internal IT staff (or at least that's the plan), so we'll have the same problem. We can't force all clients to submit data at different times, so we really need to get it synchronized on the back end. Thanks for any help you can give; I'd be glad to supply any additional information you might need!
I wrote a series to specifically address this situation - let me know if this works for you (sequential asynchronous workflows):
Part 2 (it has a link back to Part 1):
http://csharperimage.jeremylikness.com/2010/03/sequential-asynchronous-workflows-part.html
Jeremy
Wrap your database updates in a transaction, and escalate to a table lock if necessary.
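As a rough sketch of that suggestion (assuming SQL Server behind the service; System.Transactions is a standard way to do it, but the isolation level and structure here are illustrative):

// requires a reference to System.Transactions
public int addNeed(int nodeOne, int nodeTwo)
{
    // Serializable stops concurrent re-categorizations from interleaving;
    // the database, rather than an in-process Semaphore, enforces the ordering.
    var options = new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.Serializable };
    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    {
        submitNewNeed(createNewNeed(nodeOne, nodeTwo));
        verifyClusters(nodeOne, nodeTwo); // all queries/updates inside join this transaction
        scope.Complete(); // commit; an exception before this line rolls everything back
    }
    return 0;
}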