I want to change a particular entry in a trace file. How can I do it?
I have received a set of trace files which were run on the prod server. From these I was trying to get a set of RML files to replay the load in a different environment. To convert the .trc files to RML files, I ran readtrace.exe.
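(For context, the conversion was a plain readtrace invocation along these lines; the -I and -o flags are from memory of the RML Utilities documentation and the paths are illustrative, so verify against readtrace's own help output:)

    readtrace -I"D:\traces\prod_trace.trc" -o"D:\RML\output"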
However, readtrace did not output RML files. Looking at the logs, I see the following error:
06/06/12 15:43:20.914 [0X0000060C] SPID: 118 Seq: 50736293 [Error: 110003][State: 0][Abs Char: 233][Seq: 0] SYNTAX ERROR: String is missing proper closing quote near (Char Pos: 0x139 Byte Pos: 0x272)
It then clearly shows the entry which is causing this error. While I have reported the bug to the dev team and the fix will go out in the next release, I need to use the current trace file to generate and replay the load now. Thus I want to fix the particular entry in the trace file that is causing this error.
Is it possible? I tried opening the trace file in WordPad, but WordPad crashed, which is not surprising given that the trace file is 250 MB. I am trying to install Vim to see if I can open and change the trace file there, but I was wondering if anyone knew of an easier way to do this.
I could not find a way to do this, so in the end I went the expected route of requesting a new server-side trace, taken after the fix was deployed.
Before that, I did try loading the trace files into a table, then loading that into Profiler, changing the values I needed, and generating a new trace file, but critical events needed for RML generation are lost when you load into Profiler. I also opened a smaller trace file in WordPad and found that I could not edit the contents: there was binary content as well, and it wasn't clear what text editing would achieve.
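(For anyone curious, that binary content is why a text editor gets you nowhere. If you just want to inspect the bytes around the offending statement, a few lines of Python will do it. The assumption that statement text is stored as UTF-16LE is mine and worth verifying, and the file name and search fragment are illustrative:)

    # Look for a fragment of the offending statement in the raw .trc bytes.
    # Assumption: the trace stores statement text as UTF-16LE (verify this).
    needle = "SELECT * FROM Orders".encode("utf-16-le")  # illustrative fragment

    with open("prod.trc", "rb") as f:  # illustrative file name
        data = f.read()

    pos = data.find(needle)
    if pos == -1:
        print("fragment not found")
    else:
        print(f"found at file offset {pos:#x}")
        # dump the surrounding bytes for inspection only; editing in place
        # risks corrupting the trace's internal structure
        print(data[max(0, pos - 16) : pos + 64].hex(" "))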
Thus, in the end, the only way to get a trace that could generate RML files was to fix the bad query in the code, push the fix, and then take the trace again.
Related
We have an OmniMark script that takes a 2 GB SGML file as input and outputs a file of around 2.2 GB. The script is called from a Unix shell script, and we are facing an issue where sometimes the script runs successfully and sometimes it just aborts with no error. Any ideas or suggestions on how to debug this?
I have seen this type of issue before when running OmniMark v5.3, where the script bombs due to lack of server resources/memory.
If you've specified writing to a log file, e.g. using -log logfilename.txt, then you would see something like an error code #3000 "insufficient memory error".
http://developers.omnimark.com/docs/html/error/3000.htm
If there is no log file, then an initial step would be to run the script in a console session so that any such abort message is visible.
Stilo has a page listing fixes in various versions of OmniMark:
http://developers.omnimark.com/docs/html/concept/806.htm
This mentions a variety of memory-related issues in various versions of the software (e.g. the use of certain translate rules) which may help with the investigation.
Alternatively, you could add debug logging to a file in the script, with a global switch to turn debugging on or off so that you don't waste further I/O resources when you don't need it. The debug log file should be unbuffered. At certain breakpoints in the script, add a message; the more verbose the better for narrowing down where/when the error occurs, but given the size of the file I suggest it's an I/O or memory error.
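To make the debug-switch idea concrete, here is the pattern sketched in Python rather than OmniMark (I don't have your script; the checkpoint messages and file name are made up, the point is the global switch and flushing every message so the last line survives an abort):

    # Sketch of the debug-log pattern: a global switch plus an effectively
    # unbuffered log, so the last checkpoint survives a crash.
    DEBUG = True  # global switch: set False to avoid the extra I/O

    def checkpoint(msg):
        if not DEBUG:
            return
        # open/append/close per message: every line is flushed to disk
        with open("debug.log", "a") as log:  # file name illustrative
            log.write(msg + "\n")

    checkpoint("start: opening input SGML")
    # ... processing stages ...
    checkpoint("after translate rules")
    checkpoint("output written")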
It also depends on what version of OmniMark you're using.
I want to load data into a Greenplum database with gpload.py (on Windows Server), but I only get this error output:
|ERROR|A gload control file processing error occured. The gpload:input:source(1):file entry must be a YAML sequence
I tried using gpload on Linux and it worked fine, so my YAML file and my input data should be correct.
Does anyone know how to fix that problem?
You should post the YAML config file you are using to make sure there are no other problems. But since you say it works on Linux but not on Windows, my guess is that you have a line-ending problem.
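For reference, the error's complaint that gpload:input:source:file must be a YAML sequence means the FILE entry has to be a list even for a single file. A minimal control-file fragment (database, user, table, and path are illustrative) looks like this:

    VERSION: 1.0.0.1
    DATABASE: mydb
    USER: gpadmin
    GPLOAD:
       INPUT:
        - SOURCE:
             FILE:
               - /data/input.txt   # the leading dash makes FILE a sequence
        - FORMAT: text
       OUTPUT:
        - TABLE: public.target_table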
YAML files are line and whitespace sensitive. Try editing the file with a local editor on the Windows machine.
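If line endings are the culprit, a quick way to check and normalize them without trusting the editor is a few lines of Python (the file name is illustrative; back up the file first):

    # Detect Windows CRLF line endings in the gpload control file and
    # rewrite them as plain LF.
    path = "load_control.yml"  # illustrative name
    with open(path, "rb") as f:
        raw = f.read()
    if b"\r\n" in raw:
        print("CRLF line endings found; rewriting as LF")
        with open(path, "wb") as f:
            f.write(raw.replace(b"\r\n", b"\n"))
    else:
        print("no CRLF line endings found")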
I am running scripts in SQL Server Management Studio and something is causing SSMS to reset. After some investigation and reading the message that pops up, I ran the application from the command line with the /log switch. Reviewing the log, I found this error:
PkgDef encountered data collision in section 'HKEY_CURRENT_USER\Software\Microsoft\SQL Server Management Studio\11.0_Config\CLSID\{00a2c8fe-3844-41be-9637-167454a7f1a7}' for value 'Assembly'
This is just one example; there are many. So I looked online for pkgdef troubleshooting tips and found this page: http://blogs.msdn.com/b/visualstudio/archive/2010/03/22/troubleshooting-pkgdef-files.aspx
It told me the following:
Issue: Registry Value Collisions. Sometimes the same registry value is being set by more than one pkgdef file. In other cases, a registry value is being set, but it's not clear which pkgdef file is doing it. You can either scan all of the pkgdef files and discover where the value is being set, or use /log again.
Remedy: Use /log to discover which pkgdef(s) set a value. To discover all pkgdef files that are setting a value, use a simple trick: explicitly set that same value by temporarily changing the master pkgdef (C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.pkgdef) to explicitly set the value before all of the other pkgdef files are processed. This will require Administrator rights to edit the file, but is otherwise straightforward. Make a backup copy of devenv.pkgdef, then bring it up in an editor. Below the [$Initialization$] section, add a new section for the parent key of the value. Then add the key value below it, save it, and run devenv.exe with /log. The pkgdef loader will log all of the additional writes to that value, along with the path to the offending pkgdef.
I found the devenv.pkgdef file, but I really don't understand how to modify the file as the post suggests.
I would like to know how to modify the file (with an example) to find the problem, and then how to fix it.
Thanks for your patience
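Based on the quoted instructions and the colliding key from your log, the temporary addition to devenv.pkgdef would look roughly like this. The "collision-probe" value is just a dummy marker, and the exact section syntax is my assumption (Visual Studio pkgdef files usually express the per-user config hive as $RootKey$, which here would map to ...\11.0_Config), so verify it against the rest of the file:

    [$Initialization$]
    ; ...existing content stays as-is...

    ; temporary probe: set the colliding value explicitly so /log reports
    ; every pkgdef file that later overwrites it
    [$RootKey$\CLSID\{00a2c8fe-3844-41be-9637-167454a7f1a7}]
    "Assembly"="collision-probe"

Then run the application with /log again and search the log for writes to that value; each hit should name the offending pkgdef file. Remove the probe section afterwards.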
Themes.res file is not opening.
What do I do? The app itself is still working, however.
The exception that I am getting on the console while opening the file is:
java.lang.NullPointerException
at com.codename1.ui.util.Resources.createImage(Resources.java:936)
at com.codename1.ui.util.EditableResources.createImage(EditableResources.java:2332)
at com.codename1.ui.util.Resources.loadFont(Resources.java:1119)
at com.codename1.ui.util.EditableResources.loadFont(EditableResources.java:1932)
at com.codename1.ui.util.EditableResources.openFileWithXMLSupport(EditableResources.java:426)
at com.codename1.designer.ResourceEditorView$LoadResourceFileAction.exectute(ResourceEditorView.java:4112)
at com.codename1.ui.resource.util.BlockingAction.run(BlockingAction.java:88)
at java.lang.Thread.run(Unknown Source)
Is there any way I can recover this file / data ?
First, verify that the file isn't a 0-size file; if it got corrupted to that level, you will need to restore from backup. This hasn't happened for us in years as far as I know, but it's always a risk.
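(A quick way to check, sketched in Python; the file path is illustrative:)

    import os

    path = "Themes.res"  # illustrative path to your resource file
    print(os.path.getsize(path), "bytes")  # 0 bytes means restore from backup
    with open(path, "rb") as f:
        print(f.read(16).hex(" "))  # eyeball the first bytes; all zeros would be suspicious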
Next, make sure you didn't remove/rename any TTF fonts that might be used by the theme; this is a common cause of failures in the designer.
Next, we need to see the actual error, and to do that we need to run the designer from the command line using:
java -jar ~/.codenameone/designer_1.jar
(That command is for Mac/Linux; on Windows, replace ~ with your home directory and reverse the slashes.)
Now try to open the resource file and see if you get an exception in the console. Assuming you do we will know more about it and might be able to help you recover your data.
We are currently migrating to a new XML-based format, which should become the default soon.
I am using the SSIS Foreach Loop Container to iterate through files with a certain pattern on a network share.
I am encountering a hard-to-reproduce malfunction of the Loop Container:
Sometimes the loop is executed twice: after all files have been processed, it starts over with the first file.
Has anyone encountered a similar bug?
Maybe not directly in SSIS, but when accessing files on a Windows share with some other technology?
Could this error be related to some network issue?
Thanks.
I found this to be the case whilst working with Excel files and using the *.xlsx wildcard to drive the foreach.
Once I put logging in place, I noticed that when the Excel file was opened it produced a file prefixed with ~$. This was picked up by the foreach loop.
So I used a trick similar to http://geekswithblogs.net/Compudicted/archive/2012/01/11/the-ssis-expression-wayndashskipping-an-unwanted-file.aspx to exclude files with a ~$ in the filename.
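The gist of that trick is putting the skip condition in an SSIS expression, e.g. on a precedence constraint or a task's Disable property. A sketch, assuming the foreach loop maps the current file name into a variable I'm calling User::FileName:

    FINDSTRING(@[User::FileName], "~$", 1) == 0

FINDSTRING returns the 1-based position of the search string, or 0 if it is absent, so this expression is true only for files whose names don't contain ~$.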
What error message (SSIS log / Eventvwr messages) do you get?
Similar to @Siva, I've not come across this, but here are some ideas you could use to try to diagnose it. You may be doing some of these already; I've just written them all down for completeness:
Log all files processed: write a line to a log file/table before processing each file and another after, keeping the full path of each file. This is actually something we do as standard with our ETL implementations, as users often come back to us with questions about when/what was loaded. This will let you see whether files really are being processed twice.
Perhaps try moving each file to a different directory after it is processed. That makes it harder for a file to be processed a second time, and the problem may disappear. (If you are processing them from a "master" area and so can't move them, consider copying the files to a "waiting" folder, then processing them and moving them to a "processed" folder.)
@Siva's comment is interesting: look at the "traverse subfolders" check box.
Check your eventvwr for odd network events, or application events (SQL Server restarting?).
Use Perfmon to see if there is anything odd happening in terms of network load on your server (a bit of a random idea!).
Try running your whole process with files on a local disk instead of a network disk. If your mean time between failures is around 10 runs, you could run the load locally 20-30 times; if you don't get an error, it may well be a network issue.
Nothing helped, so I implemented the following workaround: a script task in the foreach iterator that tracks all files. If a file was already loaded, a warning is fired and the file is not processed again. Anyway, it seems to be some network-related problem...
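For reference, the tracking logic in that workaround is simple. SSIS script tasks are written in C# or VB, so this Python version is only a sketch of the idea (the names and paths are made up):

    # Sketch: remember every file seen in this run and skip repeats.
    processed = set()

    def should_process(path):
        """Return True the first time a path is seen, False afterwards."""
        if path in processed:
            print(f"WARNING: {path} was already loaded, skipping")
            return False
        processed.add(path)
        return True

    # simulate the loop starting over with the first file
    for f in [r"\\share\a.csv", r"\\share\b.csv", r"\\share\a.csv"]:
        if should_process(f):
            print(f"processing {f}")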