Get a dump using nanodump and parse it in mimikatz

I am using nanodump to dump lsass.exe.
Everything is OK, but when I feed the dump to mimikatz with the following command, I get an error:
mimikatz.exe "sekurlsa::minidump <path/to/dumpfile>" "sekurlsa::logonPasswords full" exit
mimikatz error:
ERROR kuhl_m_sekurlsa_acquireLSA ; Memory opening
i use "x64 nanodump ssp dll", and AddSecurityPackage winapi for attaching to lsass
when i was testing all way's, detect that nanodump specified dump file size(default=>report.docx),is different from procmon.exe Full and Mini dump output.
my test:
procmon full = 71 MB
procmon mini = 1.6 MB
nanodump = 11 MB
What can I do to make the nanodump output compatible with mimikatz sekurlsa::logonPasswords?

This was caused by the invalid file signature written by the nanodump SSP module: nanodump corrupts the minidump signature on purpose so the dump on disk is less likely to be flagged. The problem is solved by running the bundled tool on the dump file:
[nano git source]/scripts/restore_signature.exe
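The size difference is expected, by the way: nanodump writes only the few minidump streams that mimikatz and pypykatz need, so its file is much smaller than a full dump. For illustration, here is a minimal C sketch of what the signature restore does. It assumes only the 4-byte "MDMP" magic and the 16-bit version word at the start of the file were invalidated; use the bundled restore_signature for real dumps.

/* restore_sig.c - illustrative sketch only; prefer nanodump's own
 * scripts/restore_signature for real dumps.
 * Assumption: only the 4-byte "MDMP" magic and the 16-bit minidump
 * version word at the start of the file were invalidated. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <dumpfile>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r+b");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    const uint8_t magic[4]   = { 'M', 'D', 'M', 'P' }; /* MINIDUMP_HEADER.Signature */
    const uint8_t version[2] = { 0x93, 0xA7 };         /* MINIDUMP_VERSION (0xA793), little-endian */
    fwrite(magic, 1, sizeof magic, f);     /* offset 0: Signature */
    fwrite(version, 1, sizeof version, f); /* offset 4: low word of Version */
    fclose(f);
    return 0;
}

After the signature is restored, the same mimikatz command line shown above works.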

Related

Memory failure when running gem5 SE RISCV code

When I try to run a simulation in SE mode in gem5 I get the following output:
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
build/RISCV/mem/mem_interface.cc:791: warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7000
build/RISCV/sim/simulate.cc:194: info: Entering event queue @ 0. Starting simulation...
build/RISCV/sim/mem_state.cc:443: info: Increasing stack size by one page.
build/RISCV/sim/mem_state.cc:99: panic: Someone allocated physical memory at VA 0x4000000000000000 without creating a VMA!
Memory Usage: 619616 KBytes
Program aborted at tick 2222000
I'm using the ELF-Linux cross compiler. Compiling with the Newlib-ELF cross compiler simulates just fine, but the thing is that I need pthreads (OpenMP), and the Newlib build doesn't support it. To get a grip on things I tried to simulate in x86 and found out it won't work either with a simple gnu/gcc compilation. Then I compiled replicating what the test-progs folder did with Docker, and it worked fine. Is this the way to go? Since the error says there are problems with physical memory, would compiling with Docker help out, or am I missing an obvious fix? And how would I go about compiling RISCV with Docker (I couldn't find examples of Docker + RISCV)?
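For the Docker part, a minimal sketch using Ubuntu's cross toolchain (the image tag and package name are assumptions; gem5 also keeps reference Dockerfiles under util/dockerfiles in its source tree). Note that gem5's SE mode is generally happiest with statically linked binaries, hence -static:

docker run --rm -v "$PWD":/src -w /src ubuntu:22.04 \
    sh -c 'apt-get update && apt-get install -y gcc-riscv64-linux-gnu && \
           riscv64-linux-gnu-gcc -static -fopenmp -O2 -o omp_test omp_test.c'

If the distro toolchain lacks a static libgomp, building a cross compiler yourself (e.g. with crosstool-ng) is the fallback.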

How to track down sql dump error?

How can I track down this dump error?
And most important, which process is causing it?
What are the consequences?
It happens almost every weekend:
See sql dump output below:
*Current time is 23:26:40 11/05/17.
=====================================================================
BugCheck Dump
=====================================================================
This file is generated by Microsoft SQL Server
version 13.0.4446.0
upon detection of fatal unexpected error. Please return this file,
the query or program that produced the bugcheck, the database and
the error log, and any other pertinent information with a Service Request.
Computer type is Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz.
Bios Version is VRTUAL - 5001223
Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz
4 X64 level 8664, 10 Mhz processor (s).
Windows NT 6.2 Build 9200 CSD .
Memory
MemoryLoad = 96%
Total Physical = 32767 MB
Available Physical = 994 MB
Total Page File = 39679 MB
Available Page File = 5602 MB
Total Virtual = 134217727 MB
Available Virtual = 134132460 MB
**Dump thread - spid = 0, EC = 0x000001DE6E277240
***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\LOG\SQLDump0006.txt
* *******************************************************************************
*
* BEGIN STACK DUMP:
* 11/05/17 23:26:40 spid 38
*
* Latch timeout
*
*
* *******************************************************************************
* -------------------------------------------------------------------------------
* Short Stack Dump*
You can use SQL Server Diagnostics (Preview), available as an SSMS extension from version 17.1 onwards, to check for potential causes and available resolutions.
After installing it and uploading the dump, it can suggest potential solutions or patches that may help you. Ensure you upload the dump to a location near you.
You can also load the dump in WinDbg and dig into it if you have the right symbols, and the event logs can show you more info.
Most of the time these stack dumps are caused by bugs, so the best way to proceed is to raise a ticket with Microsoft.
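A minimal WinDbg walk-through, assuming you use Microsoft's public symbol server (the local cache path C:\symbols is an assumption):

.sympath srv*C:\symbols*https://msdl.microsoft.com/download/symbols
.reload
!analyze -v
~*kb

!analyze -v usually names the faulting module, and ~*kb dumps the stacks of all threads so you can see what each one was doing when the latch timed out.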
Elapsed time: 0 hours 0 minutes 6 seconds. Internal database snapshot has split point LSN = 00014377:000000a5:0001 and first LSN = 00014377:000000a3:0001.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (master.
**Dump thread - spid = 0, EC = 0x0000022824F95600
You need to check the dump file where the stack dump details are provided; the lines above are the small piece of it we are sharing for analysis.
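Note that the excerpt shows DBCC CHECKDB reporting errors in master that need at least repair_allow_data_loss, which points at corruption worth investigating first. A natural step is to rerun the integrity check during a quiet window (plain T-SQL; the database name comes straight from the dump):

DBCC CHECKDB (master) WITH NO_INFOMSGS, ALL_ERRORMSGS;

If it reports the same errors, that corruption, not the latch timeout itself, is the thing to chase first.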

Error uploading files CKAN

I am trying to upload data to CKAN, and I can do that for smaller files (I have uploaded a 4 kB file successfully), however for bigger files (already with a file of 18 MB) I get Error 500: An internal server error occurred.
In the command prompt where I am running CKAN I get
Error - <type 'exceptions.WindowsError'>: [Error 32] The file is already being used by another process: u'C:\\src\\ckan\\ckan\\resources\\a3d\\19a\\ba-7f3f-42fc-
a02e-09f50aae0924~'
URL: http://localhost:5000/dataset/new_resource/test1
I don't know what that file is, but I am pretty sure this error is the reason why I can't upload larger files, as it is the only error I get.
Important to say that I can successfully add resources from URL and from small files, but when trying with larger files, I get this error.
Does anyone have any idea on what could be wrong here?
Many thanks!
I can't explain that WindowsError, but generally CKAN has an upload size limit of 10 MB for resources by default. You can raise it in your ini with ckan.max_resource_size = XX, for example ckan.max_resource_size = 100 (which means 100 MB).
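A minimal sketch of the change (the ini path is an assumption; edit whichever config file your CKAN instance actually loads, e.g. development.ini when running the dev server):

[app:main]
# maximum resource upload size in MB
ckan.max_resource_size = 100

Restart CKAN afterwards so the new limit takes effect.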

Core Dump - non-existing file

I've made a program with a segmentation fault and I want to get the core dump file, but it seems the file is not in the current directory. I've read and followed these instructions:
core dumped - but core file is not in current directory?
but I'm still unable to get the core file.
I've tried this:
ulimit -c unlimited
ulimit -S -c unlimited
I've also edited /etc/security/limits.conf the line:
* soft core 10000
(the default value was 0)
And since my system runs apport, I've searched /var/crash, but the file I wanted (which should have been generated) wasn't there.
More useful information:
$ cat /proc/sys/kernel/core_pattern
|/usr/share/apport/apport %p %s %c
So what did I miss? I still don't get the core file after the segmentation fault, or if I do, I don't know where it is going.
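A sketch of the usual fix: the core_pattern above pipes every crash to apport, so the kernel never writes a plain core file. Temporarily pointing core_pattern at a filename template makes the kernel write the core into the crashing process's working directory (%e and %p expand to the executable name and PID):

sudo sh -c 'echo core.%e.%p > /proc/sys/kernel/core_pattern'
ulimit -c unlimited

And a minimal C program to verify the setup (crash.c is a hypothetical name):

/* crash.c - dereference NULL on purpose to trigger SIGSEGV */
int main(void)
{
    int *p = 0;
    *p = 42;   /* segfault here; the kernel should now write core.crash.<pid> */
    return 0;
}

Compile with gcc -g crash.c -o crash, run ./crash, and a core.crash.<pid> file should appear next to the binary.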

libVLC fails to create video output

When I compile the example from http://wiki.videolan.org/LibVLC_Tutorial on OS X 10.8 with
gcc -I /Applications/VLC.app/Contents/MacOS/include/ -L /Applications/VLC.app/Contents/MacOS/lib/ -lvlc vlc.c
it compiles fine, but when I try to execute it I get the following error message:
[0x7ff389e094e0] main window error: corrupt module: /Applications/VLC.app/Contents/MacOS/plugins/libmacosx_plugin.dylib
[0x7ff38d002af0] main video output error: video output creation failed
[0x7ff38b0010f0] main decoder error: failed to create video output
It fails to create a video output but manages to create the audio output.
Is this specific to the OS X build of VLC, or what is the reason for this behavior?
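One way to narrow it down is to run libVLC with verbose logging and a plugin cache reset; a minimal sketch against the tutorial code ("-vvv" and "--reset-plugins-cache" are standard VLC core options, and the VLC_PLUGIN_PATH environment variable is worth checking too):

/* create the libVLC instance with verbose logging to see why the
 * macosx plugin is reported corrupt */
#include <vlc/vlc.h>
#include <stdio.h>

int main(void)
{
    const char *args[] = { "-vvv", "--reset-plugins-cache" };
    libvlc_instance_t *inst = libvlc_new(2, args);
    if (inst == NULL) {
        fprintf(stderr, "libvlc_new failed\n");
        return 1;
    }
    /* ... media and player setup as in the tutorial ... */
    libvlc_release(inst);
    return 0;
}

The verbose log should say whether the dylib is genuinely unloadable (e.g. a 32/64-bit mismatch between your binary and the plugins) or simply not found.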
