Windows 2008 R2 - Kernel (System Process PID=4) is locking files and folders

Windows 2008 R2 - Kernel (System Process PID=4) is locking files and folders for a long time.
For example, when deleting a file, the file may remain locked for 1 minute or more and only after that be deleted.
On other occasions I encountered files or folders I could not delete. ProcMon showed that the System process held a handle to those resources for a couple of minutes and then released them.
None of the resources I mentioned were system resources, only files and folders installed by me and handled by my applications.

As Dani has already mentioned in the comment:
It's a bug in Windows 7 and likely in Windows Server 2008 (possibly 64-bit versions only). It surfaces when you disable the Application Experience service.
Re-enabling this service has fixed this problem for me.
A bit more info here as to why it's causing a problem.
List of other SO questions which seem to be related:
Visual Studio output file permissions?
Under which circumstances does the System process (PID 4) retain an open file handle?

Files accessed through a share will be locked by the system process (PID 4).
Try opening compmgmt.msc -> System Tools -> Shared Folders -> Open Files to see if the locked file is listed there.
See also the sysinternals forum for a way to replicate this.
Not all applications lock files when they open them; Excel, however, does...
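If you prefer the command line to compmgmt.msc, the built-in net file and openfiles commands can list and force-close handles that were opened through a share. A rough sketch only; the ID 42 is an example taken from the listing, and closing a handle discards any unsaved changes:
rem list files currently open over SMB shares, with their IDs and owning users
net file
rem force-close one of them by ID
net file 42 /close
rem openfiles shows the same information and can also disconnect by ID
openfiles /query /fo list
openfiles /disconnect /id 42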

In my case, it was fixed by a simple command in the command line:
net session /delete
I hope that helps.

Hope this helps others.
Open Windows Run and launch mmc.exe.
File -> Add/Remove Snap-in -> Shared Folders -> Local computer.
Select Open Files, scroll down to the directory or file, and right-click to close it.
You can also get the username that holds the lock, go to Sessions, and right-click -> Close Session.
In my case it was macOS 10.13 holding file locks open...
https://support.apple.com/en-us/HT208209

I had this issue when trying to rename a folder. I had to stop the server service while performing the rename. Just restarting didn't help, as the system process re-locked the folder as soon as the server service restarted.

Do this to resolve the problem:
Go to Services and enable the Application Experience service.
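The same thing can be done from an elevated command prompt. A sketch, assuming the service's short name is AeLookupSvc (which it should be on Windows 7 / Server 2008 R2; verify with sc query if unsure):
rem re-enable and start the Application Experience service
sc config AeLookupSvc start= auto
sc start AeLookupSvc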

Tried all these...
Even copying the file, deleting the original, renaming copy to original name (all on server) would immediately tell me the user had it locked.
In the end, I used Unlocker to clear the file locks, then:
Copied the file OFF THE SERVER to a desktop.
Deleted the original file off the server.
Changed the filename of the copy on the desktop.
Renamed it back to the original name on the desktop.
Put the file back into the original location ON THE SERVER.
HTH, YMMV... :)

Had this issue just now whilst trying to replicate data to a new file server (both source & destination servers running Windows 2008 R2).
PID 4 was found locking the file (using procexp as above), but Application Experience has never been installed on either server & the file was not shown in the list of open files.
Fortunately we use scheduled shadow copies on this server (to enable users to self-serve most file recoveries). I just used the Previous Versions option (available through Properties of the containing folder), selected the most recent copy of the file & copied it to somewhere else, then deleted and replaced the problem file.
You might need to delete the containing folder to delete the file - which could obviously be a problem if lots of files are in use (this wasn't an issue for me given this was the only file in the folder).
For a one-off issue like I had (single locked file for the whole server drive), this worked without any disruption to the server or users.
Given you are talking about a server & that Shadow Copies are using VSS - you should be able to recover the locked file from your backups (presumably you have these) if you don't use Shadow Copies. Otherwise there are some useful utils like ShadowSpawn (https://github.com/candera/shadowspawn) around that might help.
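If you'd rather pull the file out of a shadow copy from the command line than through the Previous Versions tab, the usual trick is to expose the shadow copy via a directory symbolic link. A sketch only; the shadow copy number and paths are examples taken from the vssadmin output, and the trailing backslash on the mklink target is required:
rem list the available shadow copies and note the device path of the one you want
vssadmin list shadows
rem expose that shadow copy as a folder, copy the file out, then remove the link
mklink /d C:\shadowlink \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12\
copy C:\shadowlink\Shares\Data\lockedfile.xlsx C:\Temp\
rmdir C:\shadowlink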

Related

How do i restart PostgreSQL service after putting back the original Data folder?

I'm writing some batch scripts for doing incremental backups of a PostgreSQL cluster on a Windows Server.
I copied the Data folder to a different folder, ran my backup scripts, stopped the service, deleted the Data folder, and tried recovering the database from the WAL files and such.
This didn't work, because I had copied the wrong log files, and I couldn't get the service started again, so I tried copying back in the original Data folder, but I still can't start the service.
The first script I ran called:
pg_basebackup -Fp -D %BACKUPDIR%\full_%CURRENTDATE%
This was the only line which actually ended up interacting with the data, but not with the original Data folder, which I had copied beforehand.
When trying to start the service again i get the following error message:
The postgresql-x64-10 - PostgreSQL Server 10 service on Local Computer started and then stopped.
Some services stop automatically if they are not in use by other services or programs.
I have gotten this before when making a typo in the conf file, so I'm guessing that's just the standard error message for when something is missing.
Found out that I had to redo the folder permissions.
This is done the following way (quoting step 5 of the tutorial linked below):
Change permissions for the new data directory
For the new data-directory folder: Right-click on it and click Properties. Under the Security tab click “Edit...” and then “Add...”. Type “Network Service” and then click “Check Names”; make sure it has Modify and Full Control permissions and then click OK. Equally important, PostgreSQL needs to be able to “see” the data directory (see my ServerFault.StackEx question), i.e. it needs to have read access to the parent directories above it. So right-click on the pg_db folder and, under the Security permissions, add Network Service again, but this time it only needs Read & Execute as well as List folder contents permissions.
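The same permissions can be granted from an elevated command prompt with icacls; a sketch assuming the data directory is D:\pg_db\data (the paths are examples, adjust them and the rights to match the step above):
rem give Network Service full control of the data directory and everything below it
icacls "D:\pg_db\data" /grant "NETWORK SERVICE:(OI)(CI)F" /T
rem the parent folder only needs read & execute so the data directory is reachable
icacls "D:\pg_db" /grant "NETWORK SERVICE:RX"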
The full post is a nice checklist to go through, for anyone else facing similar issues:
https://radumas.info/blog/tutorial/2016/08/08/Migrating-PostgreSQL-Data-Directory-Windows.html

How to Correctly Setup the Backup and Restore App?

Each time I run the Microsoft Windows Backup and Restore App that is left over from the Microsoft Windows 7 Operating System, I get an error that some Files are missing and the Backup Process fails.
The Files are actually Folders. I have uninstalled some Apps in the meantime and now there is only one missing Folder that the Backup App does not find.
I have tried to run a Batch File within the CMD.EXE Command-Line Processor App with System Administrator Rights:
@ECHO OFF
REM Recreate the folder that the Backup App reports as missing
SET DIR1="C:\Windows\System32\config\systemprofile\OneDrive\Pictures\Saved Pictures"
MKDIR %DIR1%
PAUSE
The Folder does get created just fine, but the Backup App is still failing.
Could it be a Rights Dead-Lock?
I am creating the Folder using System Administrator Privileges because it is not possible otherwise.
I suspect that the Backup App is run with Normal Rights. However, the User Account that I am using is also part of the Administrators Group.
Please advise.
I could not reproduce this Issue.
The reason why I guess that this is happening is the following one:
The Microsoft Windows Insider Program is constantly rewriting the whole C:\Windows Folder on each Update; therefore, the Folders that are missing have to be constantly recreated.
Earlier, I might have manually started the Microsoft Windows Backup and Restore Application and forgot to run the Batch File. The Application might have started to work on the Files and Folders to back up. Then, I might have manually run the Batch File that correctly created the Files and Folders, but that might have been too late - that is, after the Application already considered them as missing. Therefore, the error was happening.
I do not know for sure whether this is the cause for this error since I have encountered it a number of times, not only once, and I do not feel that it was possible to have manually run the Batch File later than needed each time.
Anyway, a possible workaround for this Issue might be the following one:
Create a Scheduled Task that first runs the Batch File and then runs the Microsoft Windows Backup and Restore Application. I do not know yet how to tell the Scheduler to automatically run the Application, but I can imagine that it might not be difficult to achieve this goal.
Then, whenever the manual Backup is needed to be performed, one can simply manually run the Scheduled Task. This way, this Issue might not reoccur, at least because the previously suspected behavior should be avoided.
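A sketch of one way to wire this up, assuming the batch file from the question is saved as C:\Scripts\PreBackup.bat and extended to also launch the Backup application via sdclt.exe /KICKOFFJOB (the switch used by the built-in AutomaticBackup task; check the task under Microsoft\Windows\WindowsBackup in Task Scheduler to confirm the exact command line on your build). The task name and paths are made up:
rem contents of C:\Scripts\PreBackup.bat
@ECHO OFF
MKDIR "C:\Windows\System32\config\systemprofile\OneDrive\Pictures\Saved Pictures"
%windir%\System32\sdclt.exe /KICKOFFJOB
rem register the batch file as an elevated, on-demand scheduled task, then run it whenever a manual backup is wanted
schtasks /Create /TN "ManualBackup" /TR "C:\Scripts\PreBackup.bat" /SC ONCE /ST 00:00 /RL HIGHEST
schtasks /Run /TN "ManualBackup"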
I need to perform the backup manually because I am using Removable Disks as a Third Backup Solution. The First One is the ASUS Web Storage Cloud Provider and Synchronizer Application and the Second One is the File History Application run on an External Winchester Hard Disk Drive.
If anybody has a better solution for this Issue, then please let me know.

Temporarily attach/connect to SQL Server LocalDB file

I am trying to programmatically create and connect to an application specific LocalDB database. I would like to do this by specifying the file name of a .MDF file only, ideally without specifying an instance name or a name for the database that gets registered anywhere.
The database is to be accessed from some unit tests so it will only be used for a brief time before being deleted. My current approach creates the .MDF file correctly but also registers the name with the default instance which I would like to avoid given the temporary and 'non-singleton' nature of the database instances.
Is it possible to do what I am trying to do, or have I misunderstood how LocalDB works?
LocalDB automatic instance with specific data file
Server=(localdb)\v11.0;Integrated Security=true;
AttachDbFileName=C:\MyFolder\MyData.mdf;
Update
This can be used with the Deployment area in your .testsettings file. You just need to check 'Enable deployment' and add both the .mdf and .ldf files to 'Additional files and directories to deploy'.
You can then simply use the connection string above, and the test runner will take care of moving your data files to an appropriate temp folder for you.
Chrisb's answer got me on the right lines to solve this, but I noticed that the database remained attached to the default instance in LocalDB even after the connection had been closed. I read that this might eventually be purged after a few minutes but in my case this was too long as the file was located in a temporary directory used by MSTest and had to be closed in time for the cleanup at the end of the test run.
The solution was to use a connection string similar to https://stackoverflow.com/a/26712648 and a detach process similar to https://stackoverflow.com/a/6646319 immediately after I had finished using the connection.
Creating the MDF file in the first place could be accomplished by connecting to the automatic LocalDB instance, executing CREATE DATABASE and then using the same detach method. By using the file name for the database name, which is allowed in LocalDB due to the much longer names permitted, I ensured beyond reasonable doubt that the database name will not clash with anything else on the computer even during the short period it stays attached.
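For reference, the create-then-detach dance can also be scripted from the command line with sqlcmd against the automatic instance. A sketch only; the file path and logical name are made up, and on newer installs the instance is (localdb)\MSSQLLocalDB rather than v11.0:
rem create the MDF, using the file path as the database name to avoid clashes
sqlcmd -S "(localdb)\v11.0" -Q "CREATE DATABASE [C:\Temp\MyTests.mdf] ON (NAME = N'MyTests', FILENAME = N'C:\Temp\MyTests.mdf')"
rem detach it again so the automatic instance releases the files straight away
sqlcmd -S "(localdb)\v11.0" -Q "EXEC sp_detach_db N'C:\Temp\MyTests.mdf'"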

ZIP files not working in SSIS (server level issue?)

I'm posting this rather odd issue here in the remote chance that someone has come across this before, or possibly just has an idea or two about what I could try or check next because I'm stumped.
Summary: SQL 2008 SSIS package tasks that attempt to create files with .zip extension fail with
"Access to the path is denied"
Detail: This first occurred in a test environment with a package that works fine in Dev (and Prod). The part that makes this problem odd is that it is all about the File Extension, not security. I mention this now to curb replies about checking the security (SSIS Account, Directory Level permissions etc.); it's not that, 100%.
So, I've built an SSIS package as a proof of behavior, that takes 3 files (a.txt, b.txt, c.txt) and respectively for
(a) uses CozyRoc Zip to Create a Zip,
(b) uses a script task to create a .zip (using GZipStream - I know this creates a GZIP not a ZIP but bear with me...) and
(c) native SSIS File System Task copies the file from c.txt to c.zip (yes, creating a .zip file that is not really a zip file).
All three fail with the above message; the .ZIP files are created for (a) and (b), but remain at 0 length (for (c), just the error message).
Now, I edit the SSIS package and change the extensions of the destinations (to .ZOP or .ZIP2 or .GZ or .ANYTHING), and all 3 work perfectly. And this is obviously how I know that it's the .ZIP extension not a "normal" security issue.
So I initially assumed this was a one-off on this test server because it was the only place it happened, but I've found another box (build rehearsal) on which exactly the same problem exists. I've tried associating .ZIP with various different programs (Windows Explorer, WinZip, 7Zip, WinRar & "no program") and nothing works, and I've googled the problem to death with no luck yet.
I've tried creating .ZIP files with the various installed archive programs using their GUIs and they all work fine. Existing .ZIP files can be unzipped using CozyRoc. Existing .GZ (GZIP) files renamed to .ZIP can be unzipped using the script GZipStream decompress. And I can rename files to and from .ZIP using SSIS or Explorer/CMD. It's just SSIS (specifically SSIS) creating a file with extension .ZIP (specifically .ZIP) throws this error.
I'm starting to suspect it might have something to do with SSIS thinking that .ZIP is an archive "folder" not a ".ZIP File" but I don't know where to go with this idea, proving it or fixing it.
Any ideas at all? - at my wits end!!
Thanks in advance
P.S. The "obvious" answer of using .ZIP2 and renaming is not an option, there are (literally) hundreds of packages running in production that create .ZIP files and packages need to move from Test to Prod without modification. I really need a solution, not a workaround, in this instance if there is one.
This turned out to be a RedGate tool (HyperBac) having a file association with .ZIP extension files (amongst others). Hyperbac's monitoring of .ZIP files appears to have clashed with SSIS's attempt to write to the .ZIP file, as procmon reported shared file access violations, causing a spurious ACCESS DENIED error to be reported by the package.
Since use of the tool is necessary on our environments, I was able to solve the problem by deleting the .ZIP association using the GUI ("Hyperbac Configuration Manager" > "Extensions" > Ext=.ZIP, Delete)

Recover postgreSQL databases from raw physical files

I have the following problem and I need to know if there's a way to fix it.
I have a client who was cheap enough to decline buying a backup plan for his PostgreSQL databases on the main system that runs his company, and, as I thought would happen some day, some OS files got corrupted during a blackout and the OS needs to be reinstalled.
This client didn't have any backups of the databases but I managed to save the PostgreSQL main directory. I read that the databases are stored somehow inside the data directory of the postgres main folder.
My question is: Is there any way to recover the databases from the data folder only? I am working in a windows environment (XP service pack 2) with PostgreSQL 8.2 and I need to reinstall PostgreSQL in a new server. I would need to recreate the databases in the new environment and somehow attach the old files to the new database instances. I know that's possible in SQL Server because of the way that engine stores the databases but I have no clue in postgres.
Any ideas? They would be much appreciated.
If you have the whole data folder, you have everything you need (as long as the architecture is the same). Just try restoring it on another machine before wiping this one out, in case you didn't copy something.
Just save the data directory to disk. When launching Postgres, set the parameter telling it where the data directory is (see: wiki.postgresql.org). Or remove the original data directory of the fresh installation and put the copy in its place.
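A minimal sketch of both variants, assuming the saved copy lives at D:\old_pgdata, the PostgreSQL bin directory is on PATH, and the service name is just an example (run as the postgres OS user):
rem one-off start against the copied data directory
pg_ctl -D "D:\old_pgdata" start
rem or register the Windows service so it always points at that directory
pg_ctl register -N postgresql-8.2 -D "D:\old_pgdata"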
This is possible, you just need to copy the "data" folder (inside the Postgres installation folder) from the old computer to the new one, but there are a few things to keep in mind.
First, before you copy the files, you must stop the Postgres server service. So, Control Panel->Administrative tools->Services, find Postgres service and stop it. When you're done copying the files and setting permissions, start it again.
Second, you need to set the permissions on the data files. Because the postgres server actually runs under another user account, it will not be able to access the files if you just copy them into the data folder, because it will not have permission to do so. So you need to change the ownership of the files to the "postgres" user. I had to use subinacl for this; install it first, and then use it from the command prompt like this (first navigate to the folder where you installed it):
subinacl /subdirectories "C:\Program Files\PostgreSQL\8.2\data\*" /setowner=postgres
(Changing ownership should also be possible to do from the explorer: first you must disable "Use simple file sharing" in Folder options, then a "Security" tab will appear in the folder Properties dialog, and there are options there to set permissions and change ownership, but I wasn't able to do it that way.)
Now, if the server service can't start after you start it manually again, you can usually see the reason in the Event viewer (Administrative tools->Event viewer). Postgres will throw an error event, and inspecting it will give you a clue about what the problem is (sometimes it will complain about a postmaster.pid file, just remove it, etc.).
The question is very old, but I want to share an effective method that I found.
If you do not have a backup made with "pg_dump" and all you have is the old data folder, try the following steps.
In the Postgres database, add a record to the "pg_database" table, either with a manager program or with "insert into".
Check and adjust the following insert query for your own system, then run it.
The query will return an OID once it has worked. Create a folder with that number as its name; once you have copied your old data into this folder, the database is ready to use.
/*
------------------------------------------
*** Recover From Folder ***
------------------------------------------
Check this table on your own system.
Change the differences below.
*/
INSERT INTO
pg_catalog.pg_database(
datname, datdba, encoding, datcollate, datctype, datistemplate, datallowconn,
datconnlimit, datlastsysoid, datfrozenxid, datminmxid, dattablespace, datacl)
VALUES(
-- Write Your collation
'NewDBname', 10, 6, 'Turkish_Turkey.1254', 'Turkish_Turkey.1254',
False, True, -1, 12400, '536', '1', 1663, Null);
/*
Create a folder in the data\base directory named after the new OID (returned by the query below).
Copy all the old files from the data\base\<old OID> directory into that new folder.
The database is then ready for use.
*/
select oid from pg_database a where a.datname = 'NewDBname';
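The copy itself can then be done with xcopy (stop the PostgreSQL service first). The OIDs and paths below are made up; use your old database's OID and the OID returned by the query above:
rem 16384 = OID of the old database, 24576 = OID returned by the query above
xcopy "D:\old_data\base\16384" "C:\Program Files\PostgreSQL\9.6\data\base\24576" /E /I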
As shown in "move database to another hard drive", all we need to do is modify the service's registry entry and the file permissions. By modifying the registry entry, the PostgreSQL server knows the new location of the data.
(screenshot in the original answer: modify registry)
If you have issues with permissions, or with stuff like icacls when pointing the installation at the old data folder, then try my solution on the sister website:
https://superuser.com/a/1611934/1254226
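If you prefer to inspect and change that entry from the command line rather than regedit, something along these lines works; the service name and paths are examples, and the exact ImagePath string differs between PostgreSQL versions, so query it first and change only the -D argument:
rem show the current service command line
reg query "HKLM\SYSTEM\CurrentControlSet\Services\postgresql-x64-10" /v ImagePath
rem write it back with the new data directory (keep everything else exactly as queried)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\postgresql-x64-10" /v ImagePath /t REG_EXPAND_SZ /d "\"C:\Program Files\PostgreSQL\10\bin\pg_ctl.exe\" runservice -N \"postgresql-x64-10\" -D \"D:\pgdata\" -w" /f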
I did so, but the trickiest part was to change the owner permissions:
go to Services from Administrative Tools
find the postgres service and double-click it
on the Log On tab, change it to Local System
then restart it
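The same Log On change can be scripted; a sketch with an example service name (list yours with sc query state= all | findstr /i postgres):
rem run the service as LocalSystem, then restart it
sc config postgresql-x64-10 obj= LocalSystem
net stop postgresql-x64-10
net start postgresql-x64-10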
