Corrupted Processing file?

I have a project I've been working on for a long time, and a few hours ago, when I tried to pick it up where I left off last night, I realized that one of the Processing files may be corrupted.
When I last opened this file yesterday it had over 500 lines of code; now it's completely blank. The weird part is that Properties still shows its size as around 17 KB. I tried opening it with Notepad, but all I got was a text file full of spaces.
Does anyone have any idea how I can recover my work? I can't imagine having to rewrite everything.

I'm not really sure how we can help you.
Processing doesn't have auto-backup or anything like that. I'd say make sure you don't have the file open in multiple editors. You could also try copying the file to another computer to see whether it opens there. Other than that, the best thing you can do is check whether your computer has backups of some kind.
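One thing worth checking: a file that looks blank but still reports about 17 KB is often full of NUL bytes (Notepad renders them as spaces), which can happen when a machine or editor crashes mid-write. A quick way to confirm, assuming PowerShell is available (the path is a placeholder):

    # Show the first few 16-byte rows of the suspect file.
    # Rows that are all 00 mean this copy was zeroed out and the
    # content has to come from a backup or a previous version.
    Format-Hex -Path "C:\path\to\MySketch.pde" | Select-Object -First 4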

Related

Snowflake - snowsql PUT command upload very slow

I've been having issues with the PUT command in Snowflake's SnowSQL client. My network's upload speed is 100 Mbps, but the PUT command takes almost a minute and a half for a ~30 MB file. I have already gzipped the file, so it isn't PUT's own compression step that is taking this long. Does anyone know how to solve this problem?
There is always encryption occurring on the local machine, so whether the file is compressed or not, the machine the PUT runs on will still affect its overall performance. If that machine is under-powered or busy, performance suffers. If you watch the PUT happen, you can tell how long the local work takes by noting when the bytes-transferred counter actually starts moving.
Try increasing the number of threads by setting PARALLEL to a value >1.
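For example, a minimal sketch (the stage name and file path are placeholders):

    -- Upload with 8 parallel threads; the file is already gzipped,
    -- so skip client-side compression.
    PUT file:///tmp/data/export.csv.gz @my_stage PARALLEL=8 AUTO_COMPRESS=FALSE;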
It looks like whatever the issue was has been resolved. I've been getting about 10-12 MB/s upload again. Not sure what the issue was, but I submitted a ticket to support a while back and it looks like they were able to fix it.
Thanks for the suggestions, but it looks like it was just something on their (or Azure's) end.

Lost code lines when Notepad++ crashed

I was working on a .js file this morning in Notepad++, as usual, when the program just crashed. So I ended the process and re-opened the file, only to see that all the code lines in my .js file had disappeared; all I have left is a file with a size of 0 KB, because there's nothing left in it. How is that even possible? It erased everything I typed and saved the file as if there were nothing in it.
Do you know a way to get my code back? Has something like this ever happened to anyone else? :/ I'm kind of worried, because there was a lot of work in there and I don't feel like re-typing it all...
When the backup option is enabled (and it is by default), Notepad++ keeps a backup copy of files you edit.
You can find the backups in the directory %APPDATA%\Notepad++\backup, named in the format filename#datetime.
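For example, from a Command Prompt (this just lists that folder with the newest backups first):

    dir /o-d "%APPDATA%\Notepad++\backup"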
I lost four unsaved files when my Notepad++ crashed. I searched the net and found another way to retrieve unsaved files besides the backup folder (C:\Users\{username}\AppData\Roaming\Notepad++\backup):
Try locating the dump files at
C:\Users\{username}\AppData\Local\Temp\N++RECOV
There will be .dump files inside; this is where I found my unsaved files. You can open a dump file with Notepad++ and see your unsaved work.
Hope it helps others who face the same problem as me.
This has happened to me a few times lately and I've found a few solutions that make it possible to recover the lost code.
For Windows 7 and probably other modern Windows versions:
Find the file in File Explorer; the size will be 0 KB. Right-click it and choose Properties from the context menu.
Choose the Previous Versions tab. There's a good chance you will find a fairly recent version saved during the latest restore point. Even if it's a bit old, it's probably still better than the current 0 KB.
Click the Restore button.
My personal optimal solution:
Since this happened to me a few times and the Windows Previous Versions copy was not always up to date, I looked for a different solution that could always give me the latest version from before the 0 KB crash.
I discovered that I already had the solution installed on my computer: a SugarSync account that backs up my work files to the cloud. The great part of the service is that SugarSync always keeps the last five versions in the cloud, so while the current version will be 0 KB, you can download the next-to-last version and restore your file on your computer.
If you have some other backup program, you can check if that service also keeps different versions that you can recover.
I've used Notepad++ without any backup for years. One day this happened to me, too.
Here is what I've found as possible solutions:
http://buffernow.com/notepad-plus-crash-recover-your-lost-file/
(similar to Indrajit's answer): not much help, I didn't find my file there.
I recovered my file a week ago following Hvck's answer.
Same problem here. Same answers:
https://superuser.com/questions/390204/how-to-restore-a-previous-version-of-file-in-notepad
One lesson learned: Use the backup!
Use a plugin:
http://www.ilovefreesoftware.com/12/windows/two-plugins-auto-save-files-notepad-auto-save-autosave2.html
Use Notepad++ backup:
http://allinworld99.blogspot.ca/2015/01/notepad-backup-files.html
** UPDATE **
It happened to me again!!! Another way to recover my file: view source in my browser and re-save the file to another location, if you are lucky enough to still have the file loaded in your browser :) It worked for me for a CSS file and for a JS file.
Comparing the files, what N++ backed up and the file from the browser match exactly. OMG!
Open the Run dialog, enter %APPDATA%, then open the Notepad++ folder and the backup folder inside it.
Your files are saved as filename.extension#year-month-date_time.
I know it's too late to answer this, but maybe my answer will help others.
I encountered the same problem recently, and then it became regular. I did not find a solution to the problem itself; besides, it may be caused by many different things, so probably no universal solution exists.
However, there is a way to save your files while Notepad++ is still open, even if the backup folder is empty.
First of all, do not close Notepad++. Open Task Manager, find the Notepad++ process, and click the Create dump file option. This creates a whole memory dump of the process, and that dump will contain the documents you had open. However, you may need to dig for this data, and it may be in a different encoding; I guess it is UTF-8 most of the time, but I am not completely sure.
The dump file can be examined with simple programs like Notepad++ itself or with a hex editor.
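If scrolling through megabytes of dump by hand is too painful, a small script can pull out the readable runs for you. A minimal sketch in Python 3 (the dump path and search phrase are supplied as arguments; it looks for both plain ASCII text and the NUL-interleaved UTF-16 text that Windows often keeps in memory):

    # Hypothetical helper: pull readable text runs out of a process memory dump.
    # It works on raw bytes, so it does not depend on the dump file format.
    import re
    import sys

    MIN_LEN = 20  # ignore fragments shorter than this many characters

    def extract_runs(path):
        with open(path, "rb") as f:
            data = f.read()
        # Runs of printable ASCII (covers plain UTF-8 source code).
        for m in re.finditer(rb"[\x20-\x7e\r\n\t]{%d,}" % MIN_LEN, data):
            yield m.group().decode("ascii", "replace")
        # UTF-16LE runs (ASCII characters interleaved with NUL bytes).
        for m in re.finditer(rb"(?:[\x20-\x7e]\x00){%d,}" % MIN_LEN, data):
            yield m.group().decode("utf-16-le", "replace")

    if __name__ == "__main__":
        needle = sys.argv[2] if len(sys.argv) > 2 else ""
        for run in extract_runs(sys.argv[1]):
            if needle in run:
                print(run)

Run it as, say, python find_text.py notepad-dump.dmp "a phrase you remember from the lost file".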
Notepad doesn't auto-save its open file, so unfortunately you have lost your work.
Next time you can use Notepad++, which retrieves files automatically.
This link may help: Notepad++ recovery
I tried all the above solutions but nothing worked for me.
Luckily, I had opened the files in Eclipse a day earlier as well. Eclipse and most other IDEs maintain a local cache of all files, and you can get a copy of the file from that cache. To get the copy from Eclipse:
Right-click the file name.
Go to Replace With > Previous from Local History.
This answer applies to more recent versions of Notepad++:
Go to the folder containing the file and see whether there is a subfolder called nppBackup. Recently I've found that the backup sometimes wasn't created in %AppData%/Notepad++/backup, but it always seems to get created here, with the file name format [original file name][date stamp]_[time stamp].bak

Trying to find information on how to build a simple file version control system

I want to build a file system for non-techies (they don't care about old versions of the file, so no merging or svn/git). The thought is that a user should be able to download a file, and at the same instant the file should be locked for other users. When the first user is done editing the file, it should automatically upload to the server, and when he closes the file, the lock should be released.
Is this even possible? I'm thinking of a sort of browser plugin, but I can't find anyone who has done the same thing (besides Microsoft, but who wants to go down that road?).
That would be: SharePoint, Alfresco, (almost every wiki), ...
Actually, that is a basic feature of most document management systems. Even SVN has it already, and IIRC you can set it up with mod_dav_svn without writing a line of code (considering configuration is not code).
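For reference, a minimal httpd.conf sketch of that mod_dav_svn setup (the location, repository path and auth file are placeholders): with SVNAutoversioning enabled, plain WebDAV clients such as a mapped Windows drive can save straight into the repository, and DAV-aware applications like Office lock the file while it is open.

    # Minimal mod_dav_svn setup; all paths and names are placeholders.
    # SVNAutoversioning turns plain WebDAV saves into automatic commits.
    <Location /docs>
        DAV svn
        SVNPath /var/svn/docs
        SVNAutoversioning on
        AuthType Basic
        AuthName "Documents"
        AuthUserFile /etc/apache2/dav_svn.passwd
        Require valid-user
    </Location>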
Also, the interesting question, IMHO, is not the happy case where the described unit of work goes well, but rather this*:
1. I check out 50 random documents that you need
2. (get some popcorn and wait for your stress level to go up)
3. ?????
4. I get bored and forget about it (everything still being checked out)
*: points (1) and (2) may change order

extract .bin files

So I have an old dictionary on my PC, old enough that I cannot find any trace of its developer or website (I guess it was never even released as official software). I have a personal project of mine for which I need some of its words translated (about 200-300), and I can see the data folder that contains the database/list files, but I'm unable to extract or read those files.
Is there any way to extract or convert these .bin files to a text format or something readable? I've tried tools like Alcohol 120%, IsoBuster, MagicISO and IZArc, but with no luck; I keep getting an error message saying it is not a valid CD image file. So I'm thinking these .bin files are not like the .bin/.iso CD images you can mount and read, and something else is going on in this case.
If you have any information, kindly reply with your suggestions. Thank you a lot.
You can try using the strings utility to extract the strings out of the file. It comes with any Linux distribution and if you are on Windows, you can get it from Windows Sysinternals.
If you are lucky and the words are not encoded, you may be able to get at the data you are looking for.
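For example (the file name is a placeholder, and -n raises the minimum string length to cut down on binary noise):

    strings -n 4 dictionary.bin > words.txt
    grep -i "hello" words.txt

If a word you know is in the dictionary shows up in the output, the text is stored unencoded and you can harvest the rest the same way.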
.bin is one of those extensions that has been way overused and could be anything. Where did the file come from originally? Do you need to convert these words and store them back in the original file (in their transformed form), and then expect the original app to keep working correctly?

Foreach Loop Container with Foreach File Enumerator option iterates all files twice

I am using the SSIS Foreach Loop Container to iterate through files with a certain pattern on a network share.
I am encountering a hard-to-reproduce malfunction of the loop container: sometimes the loop is executed twice, i.e. after all files have been processed it starts over with the first file.
Has anyone encountered a similar bug? Maybe not directly in SSIS, but when accessing files on a Windows share with some other technology? Could this error be related to network issues?
Thanks.
I found this to be the case while working with Excel files, using the *.xlsx wildcard to drive the foreach.
Once I put logging in place, I noticed that opening an Excel file produces a companion file prefixed with ~$, and this lock file was being picked up by the foreach loop.
So I used a trick similar to http://geekswithblogs.net/Compudicted/archive/2012/01/11/the-ssis-expression-wayndashskipping-an-unwanted-file.aspx to exclude files with ~$ in the filename.
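A hedged sketch of that exclusion, assuming the enumerator maps the current file name to a variable named User::FileName (the variable name is a placeholder): inside the loop, route the real work through a precedence constraint whose expression only lets through files without the Excel lock prefix.

    FINDSTRING(@[User::FileName], "~$", 1) == 0

FINDSTRING returns 0 when the search string is absent, so the constraint evaluates to true only for real workbooks.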
What error messages (SSIS log / Event Viewer) do you get?
Similar to @Siva, I've not come across this, but here are some ideas you could use to try to diagnose it. You may be doing some of these already; I've just written them down for completeness from my thought processes...
Log all files processed: write a line to a log file/table before processing each file and another one after it, keeping the full path of each file (see the table sketch after this list). This is actually something we do as standard with our ETL implementations, as users are often coming back to us with questions about when/what has been loaded. It will let you see whether files are actually being processed twice.
Perhaps try moving each file to a different directory after it is processed. That will make it harder for a file to be processed a second time, and the problem may disappear. (If you are processing them from an area that is a "master" area, and so can't move them, consider copying the files to a "waiting" folder, then processing them and moving them to a "processed" folder.)
@Siva's comment is interesting: look at the "traverse subfolders" checkbox.
Check your Event Viewer for odd network events, or application events (SQL Server restarting?).
Use Perfmon to see whether anything odd is happening in terms of network load on your server (a bit of a random idea!).
Try running your whole process with the files on a local disk instead of a network disk. If your mean time between failures is roughly one in ten runs, you could run the load locally 20-30 times; if you don't get an error, it may be a network problem.
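For the logging idea above, a minimal sketch of such an audit table (table and column names are hypothetical placeholders): the package inserts a row before loading each file and stamps FinishedAt afterwards, so a file that shows up twice becomes immediately visible.

    -- Hypothetical per-file audit table; names are placeholders.
    CREATE TABLE dbo.FileLoadLog (
        LogId      INT IDENTITY(1,1) PRIMARY KEY,
        FilePath   NVARCHAR(500) NOT NULL,
        StartedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
        FinishedAt DATETIME2     NULL
    );

Querying for FilePath values with more than one row per run then shows directly whether the loop really enumerated a file twice.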
Nothing helped, so I implemented the following workaround: a script task in the foreach iterator that tracks all files. If a file was already loaded, a warning is fired and the file is not processed again. Anyway, it does seem to be some network-related problem...
