TreeSize Free (https://www.jam-software.com/treesize_free) is a file and folder analysis tool that scans a PC and sorts folders and files by size, to quickly show what is using up disk space.
It used to work just fine, but sometime in the last few months it has stopped working for OneDrive folders (I've got a new PC, so it might just be since having this one). We use OneDrive for Business at work, and all my docs / downloads / desktop etc. are backed up to OneDrive. These folders are all marked to keep offline ("Always keep on this device"), so they are saved locally.
However, TreeSize doesn't show these files; according to it, I only have 4 GB in OneDrive.
If I right-click the OneDrive folder and go to Properties, I can see that it is about 60 GB.
Any ideas what's going wrong, or how I could analyse disk space that is used by OneDrive?
I have the latest version of TreeSize and have also tried an older version.
I've tried starting TreeSize as admin and as a standard user.
Weirdly, if I select an individual folder within OneDrive, it will scan fine; it's just the whole thing that fails. So I tried scanning the full C: drive, then going to the OneDrive folder and choosing "Update this branch", and it finished scanning and updates correctly after that.
A bit annoying, though, and it doesn't explain why it's doing this in the first place.
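As a cross-check independent of TreeSize, a few lines of Python can total the sizes themselves. A minimal sketch, assuming the OneDrive folder sits at the default %USERPROFILE% location (for OneDrive for Business the folder is usually named "OneDrive - <Company>", so adjust the path):

import os

# Hypothetical path; point this at your actual OneDrive folder.
root = os.path.expandvars(r"%USERPROFILE%\OneDrive")
total = 0
for dirpath, _dirs, filenames in os.walk(root):
    for name in filenames:
        try:
            total += os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            pass  # skip anything we can't stat (locked files, broken placeholders)
print(f"{total / 2**30:.1f} GiB")

One caveat: for cloud-only placeholder files this counts the full logical size rather than what is actually on disk, but since everything here is marked "Always keep on this device", the two should match.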
I made a React project using npx create-react-app inside my OneDrive folder on Windows 10.
OneDrive complained about one of the folders being named '~' (it was a folder with Node config files made automatically by create-react-app).
This was honestly a huge nightmare:
I could not rename, move, or delete the folder myself. Windows Explorer didn't allow it because there was a 'sync error'.
OneDrive would just completely stop syncing until the issue was fixed. I could not do anything manually, so my only option was to use the 'rename' button in the error message (which is supposed to automatically rename the file and fix the error). This did not work. I tried again and again over the span of a few days and it would do nothing; the error persisted.
Ultimately I copied my project outside OneDrive, but then I wasn't able to delete my old folders. I tried everything I could think of: pausing sync, trying to delete them with Windows in Safe Mode, and in the end uninstalling OneDrive. I managed to delete most of the contents (with a lot of effort), but there were still some Node directories that would not be deleted. I was getting a 'reparse point buffer' error, which I solved by running chkdsk /f /x
Partly, I'm posting this hoping that my experience will help someone who has similar issues with OneDrive, but I also want to know how to keep React projects in my OneDrive.
I like having everything on my laptop in my OneDrive folder so it is synced: I want to be able to continue my work when I move to my other computer.
I'm having the exact same problem. I figured out that if I moved the React folder into another folder and then deleted that other folder, it worked; try it. My OneDrive still kept trying to sync something for a while, but it finally quit and now everything is okay. That is, until I do another React project, and then OneDrive will be messed up again for sure.
After deleting the folder, press Windows + R and run this command to reset OneDrive:
%localappdata%\Microsoft\OneDrive\onedrive.exe /reset
This question has been asked several times. Many programs like Dropbox make use of some form of file system API to instantaneously keep track of changes that take place within a monitored folder.
As far as my understanding goes, this requires some daemon to be online at all times, waiting for callbacks from the file system API. However, I can shut Dropbox down, update files and folders, and when I launch it again it still knows what changes I made to my folder. How is this possible? Does it exhaustively search the whole tree for updates?
Short answer is YES.
Let's use Google Drive as an example, since its local database is not encrypted, and it's easy to see what's going on.
Basically it keeps a snapshot of the Google Drive folder.
You can browse the snapshot.db (typically under %LOCALAPPDATA%\Google\Drive\user_default) using DB Browser for SQLite.
Here's a sample from my computer. You can see that it tracks (among other stuff):
Last write time (looks like Unix time).
Checksum.
Size, in bytes.
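If you'd rather poke at it from code than from DB Browser, a few lines of Python do the same job. The table and column names vary between Drive client versions, so this sketch just dumps the schema and a few rows rather than assuming names (close the Drive client first, or work on a copy, since the file may be locked):

import sqlite3
import os

# Default location; adjust if your profile differs.
path = os.path.expandvars(r"%LOCALAPPDATA%\Google\Drive\user_default\snapshot.db")
db = sqlite3.connect(path)
for (table,) in db.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    cols = [c[1] for c in db.execute(f"PRAGMA table_info({table})")]
    print(table, cols)
    for row in db.execute(f"SELECT * FROM {table} LIMIT 3"):  # peek at a few entries
        print("   ", row)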
Whenever Google Drive starts up, it queries all the files and folders that are under your "Google Drive" folder (you can see that using Procmon).
Note that changes can also sync down from the server.
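That startup pass is the "exhaustive search" the question asks about, and it is cheap because only metadata is compared, not file contents. A rough sketch of the idea (not Google's actual code): keep a snapshot of path → (mtime, size), rescan on startup, and only checksum or upload the entries whose metadata changed:

import os

def scan(root):
    # Fresh metadata snapshot: path -> (last write time, size). No contents read.
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            snap[p] = (st.st_mtime, st.st_size)
    return snap

def diff(old, new):
    added   = [p for p in new if p not in old]
    deleted = [p for p in old if p not in new]
    changed = [p for p in new if p in old and new[p] != old[p]]
    return added, deleted, changed

# On startup: diff the saved snapshot against a fresh scan, then
# checksum only the added/changed files to confirm real edits.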
There are also Change Journals, but I don't think Dropbox or Google Drive use them:
To avoid these disadvantages, the NTFS file system maintains an update sequence number (USN) change journal. When any change is made to a file or directory in a volume, the USN change journal for that volume is updated with a description of the change and the name of the file or directory.
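You can inspect a volume's USN journal yourself from an elevated command prompt; for example (C: and the file path are just examples):

fsutil usn queryjournal C:
fsutil usn readdata C:\temp\somefile.txt

The first shows the journal's state for the volume; the second shows the USN record for a single file.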
I have a TFS build process that drops outputs on sandbox, which is another server in the same network. In other words, the build agent and sandbox are separate machines. After the outputs are created, a batch script defined within the build template does the following:
Rename existing deployment folder to some prefix + timestamp (IIS can now no longer find the app when users attempt to access it)
Move newly-created outputs to deployment location
The reason I wanted to rename and move files instead of copy/delete/overwrite is that the latter takes a lot of time, because we have so many files (over 5,500). I'm trying to find a way to complete builds in the shortest amount of time possible to increase developer productivity. I hope to create a Windows service to delete dump folders and drop folder artifacts periodically so sandbox doesn't fill up.
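For context, the swap itself is only two commands. A minimal sketch of the batch steps, with hypothetical paths (the real ones come from the build template):

@echo off
rem Hypothetical locations on the sandbox server.
set DEPLOY=D:\Sites\MyApp
set DROP=D:\Drops\MyApp\latest
rem 1. Rename the live folder out of the way. This is the step that fails
rem    while any process still holds a handle on the folder.
rem    (Timestamp format depends on locale; it's just a unique suffix.)
ren "%DEPLOY%" "MyApp_%DATE:/=-%_%TIME::=-%"
rem 2. Move the new outputs into place. Within one volume this is a metadata
rem    operation, so it is near-instant regardless of file count.
move "%DROP%" "%DEPLOY%"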
The problem I'm facing is that IIS maintains a handle to the original deployment folder, so the batch script cannot rename it. I used Process Explorer to see what process is using the folder: it's w3wp.exe, which is a worker process for the application pool my app sits in. I tried killing all w3wp.exe instances before renaming the folder, but this did not work. I then decided to stop the application pool, rename the folder, and start it again. This did not work either.
In either case, Process Explorer showed that there were still open handles to my outputs, except this time the owner wasn't w3wp.exe but something along the lines of an unidentified process. At one point I saw that the owner was System, but killing System's process tree shuts down the server.
Is there any way to properly remove all handles to my deployment folder so the batch script can safely rename it?
Use the Windows Sysinternals tool called Handle:
https://technet.microsoft.com/en-us/sysinternals/bb896655.aspx
Tools like Process Explorer and Handle can find and forcibly close file handles; however, the state and behaviour of the application (both yours and, in this case, IIS) after doing this is undefined. Some won't care, some will error, and others will crash hard.
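For reference, finding and (if you accept that risk) force-closing the handles looks roughly like this, run from an elevated prompt, with a hypothetical deployment path:

handle.exe D:\Sites\MyApp
handle.exe -c 1A4 -p 5612 -y

The first command lists every process with an open handle whose name contains that path; 1A4 and 5612 stand in for the handle value and PID it prints, and -y skips the confirmation prompt.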
The correct solution is to allow IIS to cleanly release its locks and clean up after itself, to preserve server stability. If this is not possible, you can either create another site on the same box, or set up a new box with the new content, and move the domain name/IP across to "promote" the new content to production.
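In practice, "letting IIS release the locks" means taking the worker process down for the duration of the swap. A sketch with a hypothetical pool name; if stopping the pool alone doesn't release the folder (as the question found), the heavier fallback of stopping WAS takes down every worker process:

%windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"MyAppPool"
rem ...rename and move here (give w3wp.exe a few seconds to exit first)...
%windir%\system32\inetsrv\appcmd start apppool /apppool.name:"MyAppPool"

rem Heavier fallback: stops WAS and W3SVC, killing all worker processes.
net stop was /y
rem ...rename and move here...
net start w3svc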
This question is almost the reverse of those questions wanting to force a refresh of XAP files.
We have an application that uses Silverlight 4. There is a policy in place, though, that deletes temporary internet files every time users close IE. Unfortunately all the XAP files are deleted too, and they need to be downloaded again the next time a user opens Internet Explorer to access the application.
At the slower sites, the download of these 20 MB of files can take some time.
Is it possible to have the XAP files downloaded / stored elsewhere on the user's computer rather than in the temporary internet files directory?
Can this be programmed into the Silverlight application, or is it an Internet Explorer issue?
What we really need to do is minimise the download constraints without compromising the clean-up of the temporary internet files directory.
Thank you in advance
Out of Browser is certainly the way to go.
The XAP files are handled independently from the browser, and you can even push the installation out with a script.
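For the scripted install, the Silverlight runtime ships with sllauncher.exe, which can install a XAP out-of-browser silently. A sketch with hypothetical paths and URL (use %ProgramFiles% on 32-bit Windows):

"%ProgramFiles(x86)%\Microsoft Silverlight\sllauncher.exe" /install:"C:\Deploy\MyApp.xap" /origin:"http://yourserver/app/MyApp.xap" /shortcut:desktop+startmenu /overwrite

Once installed out-of-browser, the app no longer lives in the temporary internet files directory, so the IE clean-up policy stops affecting it.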
We have had multiple DNN sites running for quite a few months now without any issues. Twice in the last 3 days, our sites have gone offline due to the addition of an app_offline.htm file in the root directory.
There is only one developer with access to the sites at a coding / directory-viewing level, and the file is generated at weird times, when he is NOT accessing our network.
We are not publishing anything to the server (and have not published any .NET code in days), upgrading, changing code, or even modifying content. Has anyone run into this issue?
It sounds like someone is messing with your server. Can you view the event logs to see who is accessing your server? Do you have the ability to change the passwords on the box?
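If file-access auditing is enabled on the web root, the Security log's object-access events (ID 4663) will name the account that created the file. A sketch for pulling the most recent ones from an elevated prompt (enable auditing on the folder first, or the query returns nothing):

wevtutil qe Security /q:"*[System[(EventID=4663)]]" /c:20 /f:text /rd:true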
Mark