I'm a beginner with VTune, but I have some experience with AQTime 8. Now I'm using Intel VTune Amplifier XE 2013, and in my opinion it has many advantages over AQTime. There is an interesting question, though.
In AQTime I can choose which modules I want to profile. This is very useful, because I have to profile only one DLL from a big project. Is there such a possibility in Intel VTune Amplifier XE 2013?
I have tried to find an answer, but found only this (Is it possible to use vtune on certain code snippets in a binary and not an entire binary?).
Please advise me.
VTune Amplifier can limit data collection to a specific process and, optionally, any child processes, but it collects data in all modules of that process.
Then, you use the display options to limit data to the "module of interest".
You can either select a module from the filter bar at the bottom of the Bottom Up view or select the "Module / Function / Call stack" grouping in the same view.
You can use the Pause/Resume APIs to limit data collection to a "region."
It will not limit profiling to a module - any code executed in the process between the calls to __itt_resume() and __itt_pause() will be sampled.
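For illustration, here is a minimal sketch of that region idea. __itt_pause()/__itt_resume() are the real ITT entry-point names (declared in ittnotify.h for C/C++); calling them from C# requires a native wrapper library, and the "IttWrapper.dll" name below is purely an assumption, not anything VTune ships.

    using System;
    using System.Runtime.InteropServices;

    static class VTuneControl
    {
        // Real ITT entry-point names; the wrapper DLL is hypothetical.
        [DllImport("IttWrapper.dll", EntryPoint = "__itt_pause")]
        public static extern void Pause();

        [DllImport("IttWrapper.dll", EntryPoint = "__itt_resume")]
        public static extern void Resume();
    }

    class Program
    {
        static void Main()
        {
            VTuneControl.Pause();   // nothing is sampled during startup
            // ... uninteresting startup work ...
            VTuneControl.Resume();  // begin sampling here
            DoInterestingWork();
            VTuneControl.Pause();   // stop sampling again
        }

        static void DoInterestingWork()
        {
            double sum = 0;
            for (int i = 0; i < 1000000; i++) sum += Math.Sqrt(i);
            Console.WriteLine(sum);
        }
    }

In the GUI you would typically start the analysis with collection paused, so that nothing is recorded before the first resume call.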
Even though your query is resolved, I am posting this to help others with similar issues. :)
I am currently in the process of trying to launch a database application that has a VB6 front end connected to an Access 2000 database. On certain computers we are experiencing a problem where the data being pulled from the database does not show up, or does not show up correctly.
The computers that work seem to have dao360.dll with the same modified date in both System32 and Common Files\Microsoft Shared\DAO, while the ones that are not working have different modified dates.
Is this what's causing the error? How can I correct it? Or is something else going on?
There shouldn't be two copies of the DLL on the system. It sounds like a poorly designed install of some application had been previously done on these systems. There is no telling what the full extent of this has been.
Packaging as an isolated application can insulate your programs from these kinds of bad installs that create DLL Hell. Sadly MDAC/DAC and related components are very difficult to isolate.
This is another reason to have moved to ADO back in 1998, if not in the time since then. While you can't isolate the ADO-related parts of MDAC/DAC any more than you can DAO, those libraries are now shipped as part of Windows. You don't need to deploy them and they are protected from bad installers by the increasingly better system file protection mechanisms in Windows.
However, providing specific assistance will probably require a more specific and detailed description of what is going on than "does not show up or show up correctly."
I'd create a minimal test case using DAO to begin exploring where (and what) the problems really are. To begin with, perhaps just a simple query displaying the returned rowset without data binding.
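For example, here is a rough C# equivalent of such a smoke test. Note this deliberately goes through the Jet OLE DB provider rather than DAO proper, just so it runs from a plain console project; the database path and table name are placeholders for your own.

    using System;
    using System.Data.OleDb;

    class JetSmokeTest
    {
        static void Main()
        {
            // Placeholder path and table - point these at your own .mdb file.
            string connect = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\mydb.mdb";
            using (var conn = new OleDbConnection(connect))
            using (var cmd = new OleDbCommand("SELECT TOP 10 * FROM Customers", conn))
            {
                conn.Open();
                using (OleDbDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        for (int i = 0; i < reader.FieldCount; i++)
                            Console.Write(reader[i] + "\t");
                        Console.WriteLine();
                    }
                }
            }
        }
    }

If this dumps the expected rows on both the good and the bad machines, the problem is more likely in the VB6/DAO layer than in the data itself.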
I suggest installing the latest version of MDAC and Jet. While Jet used to be part of MDAC, I'm pretty sure they dropped it into its own install/update/service pack at this point. Perhaps start here: http://support.microsoft.com/kb/239114
Stumped here. Posted a similar question before. We have a pretty large WPF app that runs great on some machines, but on others, all of a sudden, one of the CPU cores gets pinned at 100% (just one core) and the app freezes. It usually seems to happen when showing a context menu or a ComboBox drop-down (i.e. Popup controls), which is why we can't debug this, since no user code is executing at that time. It's driving us crazy because, again, on most machines it runs fine, but on a few, it freezes.
The odd thing is when we run it in a VM, it runs great there too! Crazy! Not sure what's causing this, or more importantly, where to even begin to look because as I said, no user code is running.
This happens on only about 10% of our machines, but it consistently happens on those machines. All are clean (i.e. relatively fresh OS installs, no crazy apps, etc.) and mostly identical machines spec-wise: similar CPUs, similar RAM, same video drivers and service packs.
So as I stated in the title, can anyone suggest possible reasons why a WPF app would pin the CPU and lock the app on some computers but not others? We're just stumped!
Found it!! Turns out there's a bug in .NET 4.0 regarding UI Automation and the changes MS introduced. Here's the info, and the fix! (Note: Even if you call MS, they will send you a link, but it's always a broken link. I managed to track this down manually.)
Note: Their article talks about a specific case that causes this behavior, but if you google around, you'll see tons of issues around hangs related to those DLLs. The latest is that they're promising a fix in the .NET 4.5 runtime (from an MS post on this issue).
Here's the KB article...
http://support.microsoft.com/kb/2484841/en-us
...and here is the actual hotfix.
http://archive.msdn.microsoft.com/KB2484841/Release/ProjectReleases.aspx?ReleaseId=5583
Crappy video driver? Pull two machines - one where it happens, one where it doesn't - and start analyzing differences. Could be hardware defects, bad video drivers, anything in that area. WPF uses the GPU to render if one is there.
Since you seem quite short on options, I would advise making a new project with just the most basic ComboBox in the Window, doing almost nothing. This should work (check :-) ). Then add features to the ComboBox one by one and test; for instance, when you add a command, start with an empty one. Do this until it 'breaks', so you know which feature is the culprit.
You didn't say whether everything was working with software rendering.
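For what it's worth, here is one way to force software rendering process-wide so the video driver can be ruled in or out (a sketch assuming .NET 4.0 or later, where RenderOptions.ProcessRenderMode was introduced, and an App.xaml.cs you control):

    using System.Windows;
    using System.Windows.Interop;
    using System.Windows.Media;

    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            // Bypass the GPU and driver entirely; if the hang disappears,
            // the graphics stack is the prime suspect.
            RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
            base.OnStartup(e);
        }
    }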
During localhost testing of modular Prism-based Silverlight applications, the XAP modules download too fast to get a feel for the final result. This makes it difficult to see where progress indicators, splash screens, or other visual states need to be shown.
What is the best (or most standard) method for intentionally slowing down the loading of XAP modules and other content in a local development set-up?
I've been adding the occasional timer delay (via a code-based storyboard), but I would prefer something I can place under the hood (in, say, the Unity loader?) to add a substantial delay to all module loads, and in debug builds only.
Suggestions welcomed*
*Note: I have investigated the "large file" option and it is unworkable for large projects (it fails to create the XAP with really large files, giving an out-of-memory error). The solution needs to be code based and should preferably integrate behind the scenes to slow down module loading in a localhost environment.
**Note: To clarify, we are specifically seeking an answer compatible with the Microsoft PRISM pattern & PRISM/CAL Libraries.
Do not add any files to your module projects. That adds unnecessary regression testing to your module, since you are changing the layout of the module by extending its non-executable portion. Chances are you won't do this regression testing, and who knows if it will cause a problem. Best to be paranoid.
Instead, come up with a Delay(int milliseconds) procedure that you compose into the callback used to retrieve the remote assembly.
In other words, decouple assembly resource acquisition from assembly resource usage. Between these two phases insert arbitrarily random amounts of wait time. I would also recommend logging the actual time it took remote users to get the assembly, and use that for future test points so that your UI Designers & QA Team have valuable information on how long users are waiting. This will allow you to cheaply mock-up the end-user's experience in your QA environment. Just make sure your log includes relevant details like the size of the assembly requested.
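Here is a minimal sketch of that idea in C#. The names (DebugDelay, onLoaded, and whatever loader you plug it into) are hypothetical, not actual Prism/CAL API; the point is only that the artificial wait sits between acquisition and usage, and compiles away outside debug builds.

    using System;
    using System.Windows.Threading;

    public static class DebugDelay
    {
        private static readonly Random Rng = new Random();

        // Wraps a module-loaded callback so that, in DEBUG builds only,
        // it fires after a random delay instead of immediately.
        public static Action<T> Wrap<T>(Action<T> onLoaded, int minMs = 500, int maxMs = 3000)
        {
    #if DEBUG
            return payload =>
            {
                var timer = new DispatcherTimer
                {
                    Interval = TimeSpan.FromMilliseconds(Rng.Next(minMs, maxMs))
                };
                timer.Tick += (s, e) =>
                {
                    timer.Stop();
                    onLoaded(payload);  // deliver the module after the artificial wait
                };
                timer.Start();
            };
    #else
            return onLoaded;            // release builds: no delay at all
    #endif
        }
    }

You would wrap whatever completion callback your loading pipeline uses, and ideally drive minMs/maxMs from the download times you log for real remote users.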
I posed a question on StackOverflow a few weeks ago about something related to this, and had to deal with the question you posed, so I am confident this is the right answer, born from experience, not cleverness.
You could simply add huge files (such as videos) to your module projects. It'll take longer to build such projects, but they'll also be bigger and therefore take longer to download locally. When you move to production, simply remove the huge files.
There is quite a big LOB Silverlight application, and we wrote a lot of custom controls which are rather heavy in drawing.
All data is loaded by a RIA service, processed, and bound (using the INotifyPropertyChanged interface) to the view.
The problem is that the first drawing takes a long time. Subsequent calls to the service (server) and redrawing are quite fast.
I used the EQATEC profiler to track the problem. I saw that processing takes only a couple of milliseconds, so my idea is that the drawing by the SL engine is slow.
I'm wondering if it is possible to somehow profile processes inside SL to check which drawing operations are taking too much time. Are there any guidelines on how to implement faster drawing of complex custom controls?
Short answer: no, there's no super easy way of figuring out why your application is slow.
Long Answer:
I have never used the EQATEC profiler for Silverlight, but it seems similar to dotTrace. Either way, they both end up showing the same information as xPerf.
Basically, the information you should have in front of you says which methods and classes took up the most time to execute.
If that information points back to Silverlight framework graphics engine (agcore.dll and npctrl.dll), you'll have to start a slow process of figuring out what you did wrong.
At this point I strongly recommend that you watch every single talk Seema Ramchandani gave about Silverlight performance. Specifically PDC08, Mix09 and Mix10.
Step #1 of perf optimization: Measure. Measure. Measure.
Have a clear baseline of what you're trying to improve, and set a numeric expectation to when performance is good enough.
That way you can verify that your changes are having a positive impact on performance.
Step #2 of perf optimization: Start removing stuff.
In your case, I'd start commenting controls out of the form. When perf massively improves, you've found your culprit.
Step #3 of perf optimization: Try to fix the weak link.
That's how I would go about solving this issue.
Sincerely,
-- Justin Angel
Try profiling with the Visual Studio profiler in order to get a good measure of your managed code and the native code executing within Silverlight. The profiler will help point you to where you're spending most of your time (what the hot paths are) and whether you're spending it in framework (SL) code or your own code.
The basics of profiling are:
Open a Visual Studio Command Prompt (as admin), 'cd' to the directory where your DLL and PDB files are (typically your "Debug" folder)
VSPerfClrEnv /sampleon
VSPerfCmd -start:sample -output:somefile.vsp
VSPerfCmd -globalon
VSPerfCmd -launch:"c:\Program Files (x86)\Internet Explorer\iexplore.exe" -args:""
VSPerfCmd -shutdown
VSPerfClrEnv /off
You can find detailed instructions on using the profiler on my blog: http://www.nachmore.com/2010/profiling-silverlight-4-with-visual-studio-2010/
If you find that you are spending time within Silverlight, track down the code path to see where your code is triggering the expensive calls, so that you can then investigate specific solutions based on the cause of the slowdown.
Hope that helps,
Oren
TiddlyWiki is a great idea, brilliantly implemented. I'm using it as a portable personal "knowledge manager," and these are the prize virtues:
It travels on my USB flash memory stick and runs on any computer, regardless of operating system
No software installation is needed on the computer (TiddlyWiki merely uses the Internet browser)
No Internet connection is needed
In terms of data retrieval functionality, it mimics a relational database (use of tags and internal links)
Setup and configuration are so simple as to be almost zero. This would also mean dependencies are so minimal as to be transparent, or nearly so.
Let's say I've got a million words of prose in 4,000 tiddlers (posts). I'm still testing, but it looks like TiddlyWiki gets very slow.
Is there an app like TiddlyWiki that keeps all the virtues I listed above, and allows more storage? (or rather, retrieval!)
NOTE: Separation of content and presentation would be ideal. It's nifty that TiddlyWiki has everything in a single HTML document, but it's unhelpful in many ways. I don't care if a directory of assorted docs is needed (SQLite, XML?), as long as it's functionally self-contained.
After some time and serious consideration, I will post my own answer.
There is nothing that matches TiddlyWiki.
As for voluminous information, TW can pretty much handle it. (My early discouragements were due to malformed code.) Difficulty accessing information through the interface becomes an issue before any speed problems. This isn't to fault the interface -- it could be more powerful, but that would sacrifice lightness.
Indeed TiddlyWiki can work with VERY large tiddler stores; they don't need to be in the current TiddlyWiki document, either.
See "import tiddler" and friends over at http://tiddlytools.com
Before creating Rails, David Heinemeier Hansson wrote a wiki app called Instiki. Like TiddlyWiki, you don't run it from a separately running server*, so it's easy to run locally and move around on a USB drive (exporting the entire content to a zip file with all the HTML files, or all the files in Textile markup). The entire Instiki tgz download is less than 5 MB, and the app has only one external dependency: Ruby.
So you can run Instiki anywhere you can run Ruby (for instance, on a Nokia N900 phone).
I never built any Instiki sites as large as you describe, but it ought to handle 1 million words in 4,000 pages a lot more easily than TiddlyWiki handles 4,000 tiddlers.
Roger_S
* Oh, not to confuse anyone: Instiki uses the embedded webserver WEBrick
You could try installing Portable Apps on your USB drive and adding the XAMPP package, which has Apache, PHP, and MySQL all installed, and running MediaWiki or other wiki software on top of it.
http://tiddlyweb.peermore.com/wiki/ may be exactly what you are looking for.
You can use any TiddlyWiki variant and the data can be delivered via a server and on-demand.
I have recently discovered DokuWikiStick, which runs a version of MicroApache. Recommended by Lifehacker... Starting size is about 10MB.
You probably already know this, but there's a new version of TiddlyWiki out that is still in beta but has been rewritten to provide a more robust environment for the future.
http://tiddlywiki.com/
2020 answer, from 2017
Check out liddly, a local TiddlyWiki server written in Go that fits all your requirements and can run off a USB stick. It stores tiddlers in a SQLite database, albeit without relational links, making the TiddlyWiki interface (presentation) separate from your data (content). It was last updated in 2017, but it still works with the latest TiddlyWiki5; you will just have to compile it yourself.