Using a script to organize a music library

Not sure if this is the correct section or whether I should try SuperUser, but here goes.
I have a ton of music (around 70GB) in my library that I would like to organize, rename, etc. Is it possible to do this using some kind of script? I am not familiar with this type of thing at all, just to put things in perspective.
I have the metadata organized already (artist name, album name, track number, track name, etc.), so if I could write a script to organize my folders and files, that would be great.

Of course it's possible. You need a tool that can read the existing tags in your music files; which tool depends on the format of the files and on the operating system you are using.
Beyond that, every OS can move and rename files, so it's a case of querying the metadata for each file and then using the output to build the new folder and file name.
For MP3s you can use, for example, id3 on Linux.
You might also want to look at beets - http://beets.radbox.org/
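If you'd rather roll your own, a short script is enough. Here is a minimal sketch in Python, assuming the third-party mutagen library (pip install mutagen) for tag reading; the source/target paths and the Artist/Album/NN - Title layout are illustrative choices, not requirements:

import shutil
from pathlib import Path

import mutagen

MUSIC_ROOT = Path("~/Music/unsorted").expanduser()  # hypothetical source folder
LIBRARY = Path("~/Music/library").expanduser()      # hypothetical target folder

for src in MUSIC_ROOT.rglob("*"):
    if src.suffix.lower() not in {".mp3", ".flac", ".ogg", ".m4a"}:
        continue
    tags = mutagen.File(src, easy=True)  # easy=True gives uniform tag names
    if tags is None:
        continue  # not an audio file mutagen understands
    artist = tags.get("artist", ["Unknown Artist"])[0]
    album = tags.get("album", ["Unknown Album"])[0]
    title = tags.get("title", [src.stem])[0]
    track = tags.get("tracknumber", ["0"])[0].split("/")[0].zfill(2)
    # note: a real script should sanitize tag values for characters
    # that are illegal in file names (slashes, colons, etc.)
    dest = LIBRARY / artist / album / f"{track} - {title}{src.suffix}"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), dest)

Try it on a copy of a few albums first. Tools like beets do the same thing but also handle tag lookup, conflicts, and renaming templates for you.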

Related

Export Databases of DOS Clipper Application

Our current database system is a Clipper DOS application. The database inside its folder is fragmented/divided into many parts. I want to decrypt the database so that I will have only one database in all and avoid reshuffling of data. I've attached a screenshot of the file folder. The database is in .DBF format.
Often you can decompile the Clipper .exe file to source code and work from the .prg files; I've done it many times. The program to use is called Valkyrie.
In Clipper and FoxPro for DOS, a .dbf file is a simple table file.
If you want to use these tables as a database with many tables in one unit, you can import them into an MS SQL Server database and/or an MS Access database.
I see that you got several answers. Most are partially right. Let's address these one at a time:
All those files essentially comprise the "database" for the application you're using. They could be used by other applications as well. Besides having a lot of files, what is the problem you're trying to solve?
People mentioned indexes. You can generally ignore these; they are there primarily to make access to the data files faster. Any properly written Clipper application will recreate them if they're missing or corrupted. You could test this by renaming one, running the app, and seeing what happens; if it doesn't recreate the file, you can rename it back. Not recreating missing index files would be unusual behavior.
The DBF file format is binary, but barely. Most of what's in a DBF is text and is readable with an editor, though there's no need to go that route - I'm sure there are several free DBF utilities out there that can read DBF files. Getting the structure of the files could be very helpful.
Getting the data out of the files would also be fairly simple with a utility. If you look up the DBF format you could even write one fairly easily in Clipper, in any other language that uses DBF files, or in something like Python - any language that can open and write files, really. It's not hard; any competent developer could do this in a matter of hours, much less if you're using Clipper or another language that natively reads DBF files.
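To make that concrete, here is a minimal sketch of a DBF reader in Python using only the standard library. It follows the fixed dBase III/Clipper header layout; the latin-1 decoding and the all-text field handling are simplifying assumptions (memo .DBT fields and type conversion are ignored):

import struct

def read_dbf(path):
    with open(path, "rb") as f:
        header = f.read(32)
        # record count (offset 4), header size (8), record size (10)
        num_records, header_size, record_size = struct.unpack("<xxxxIHH20x", header)
        # 32-byte field descriptors follow, terminated by 0x0D
        fields = []
        while True:
            desc = f.read(32)
            if desc[0:1] == b"\r":
                break
            name = desc[:11].split(b"\x00")[0].decode("ascii")
            length = desc[16]
            fields.append((name, length))
        f.seek(header_size)
        for _ in range(num_records):
            record = f.read(record_size)
            if record[0:1] == b"*":  # '*' marks a deleted record
                continue
            row, offset = {}, 1      # byte 0 of each record is the deletion flag
            for name, length in fields:
                row[name] = record[offset:offset + length].decode("latin-1").strip()
                offset += length
            yield row

# usage (hypothetical file name): for row in read_dbf("CUSTOMER.DBF"): print(row)

From there, writing the rows out as CSV or as INSERT statements for SQL Server or Access is a few more lines.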
Most people create dBase/Clipper programs with relational data, like SQL Server: where SQL Server has tables that relate to each other, dBase/Clipper has a file for each "table." This isn't a requirement, but it was almost certainly done this way.
Given that, if you get the table structures through a utility, or by reading the headers in an editor (don't save them from an editor!), you could quite likely recreate the database schema (i.e. the map of the data). Once you have that, it's fairly trivial to get the data into another type of database (SQL Server, Access, or whatever you like to use). If none of the files are too large, it's even conceivable to put all the files into Excel sheets. It really depends on what you want to do with it.
As others have said, you may be able to recover the code with Valkyrie. Some people have used it very successfully; I don't know where you get it and I've never used it. Why do you not have the code? If this is a commercial application you likely should not have it. If it's a custom app, whoever wrote it, or paid to have it written, should have the code.
Again, it's not clear to me what problem you're trying to solve. But there are many options for doing something with those DBF files. Fortunately they are one of the easier to read data formats you could be working with.
Let me know if you have any questions. Apologies for the typos that are no doubt scattered throughout this reply.
You can sort of get an idea of how the tables relate to each other by opening the index files they use (.NTX files). If you have the DBU utility (executable) around, you can open the DBF and load the index (.NTX). LibreOffice Calc is also able to open DBFs (I haven't tested .NTX).
If you open the .NTX in a text editor you will see the index expressions at the beginning.
I open them with Access, but I can save the data using a PrintFill program.

Is there a standard for protecting application files from user interference outside of the application?

Sorry if I didn't express myself precisely in the title; I'll try to explain what I meant to say here.
My application uses a lot of small files: DB files, XML files, fonts, etc. There is a folder and file presence check when the application starts, but I would like to make sure that the user cannot accidentally change or delete an important file from disk.
The only thing that comes to mind is packing the files into a few archives by usage frequency, changing the archive extension to something unfamiliar, and hiding those archives.
But compressing and uncompressing those files all the time through the application doesn't seem like an efficient solution.
Is there some standard procedure for keeping those important files safe from tampering?
The only thing that comes to mind is packing the files into a few archives by usage frequency, changing the archive extension to something unfamiliar, and hiding those archives
That is security through obscurity, which is not a recommended practice.
Instead, use the file security mechanisms built into your operating system. Allow appropriate file access only to a specific group/role or user, and ensure your application runs in that group/role or as that user.
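On a POSIX system that can be a few lines run at install time. A minimal sketch in Python, where the data directory is a hypothetical name and the assumption is that the application runs as the owning user; on Windows you would achieve the same with NTFS ACLs (for instance via the icacls tool):

import os
import stat
from pathlib import Path

APP_DIR = Path("/var/lib/myapp")  # hypothetical data directory, owned by the app's user

os.chmod(APP_DIR, stat.S_IRWXU)   # owner-only access to the directory itself
for path in APP_DIR.rglob("*"):
    if path.is_dir():
        os.chmod(path, stat.S_IRWXU)
    else:
        # owner read/write only; group and others get no access
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

If the application runs as the owning user and ordinary users do not, the OS itself prevents accidental edits and deletions - no obfuscation needed.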

File Management for Large Quantity of Files

Before I begin, I would like to express my appreciation for all of the insight I've gained on Stack Overflow and from everyone who contributes. I have a general question about managing large numbers of files, and I'm trying to determine my options, if any. Here it goes.
Currently, I have a large number of files and I'm on Windows 7. What I've been doing is categorizing the files by copying them into folders based on what needs to be processed together. So, I have one set that contains the files by date (for long term storage) and another that contains the copies by category (for processing and calculations). Of course this doubles my data each time. Now I'm having to create more than one set of categories; 3 copies to be exact. This is quadrupling my data.
For the processing side of things, the data ends up in Excel. Originally, all the data was brought into Excel, and all organization and filtering was performed there. This was time consuming and not easily maintainable over the long term. Later the workload was shifted to the file system itself, which lightened the work in Excel.
The long and short of it is that this is an extremely inefficient use of disk space. What would be a better way of handling this?
Things that have come to mind:
1. Overlapping folders. Is there a way to create a folder that holds only the address of a file, rather than a copy of the file? That way two folders could reference the same file. To my understanding, a folder is a file listing the locations of the files inside it, but on Windows a file can only be contained in one folder.
2. Microsoft SQL Server. Not sure what could be done here.
3. Symbolic links. I'm not an administrator, so I cannot execute the mklink command. I'm also uncertain about any performance issues with this.
4. A junction. Apparently not allowed for individual files, only folders, in Windows.
5. Search folders (*.search-ms). Maybe I'm missing something, but to my knowledge there is no way to specify individual files to be listed.
6. Hashing the files. Computing hashes for all the files would allow each file to be stored once, but then I have no idea how I would manage the hashes.
7. XML. Maybe I could use XML files to attach metadata to the files and somehow search using them.
8. A database file system. I recently came across this concept in my search. Not sure how it would apply to Windows.
I have found a partial solution. First, I discovered that the laptop I'm using is actually logged in as Administrator. As an alternative to options 3 and 4, I have decided to use hard links, which are part of the NTFS file system. However, due to the large number of files, creating them one at a time with the following command from an elevated command prompt is unmanageable:
mklink /h <new\link> <existing\file>
Luckily, Hermann Schinagl has created the Link Shell Extension application for Windows Explorer, along with a very insightful explanation of how junctions, symbolic links, and hard links work. The only reason this is currently a partial solution is a separate problem with Windows Explorer, which I intend to post as a separate question. Thank you, Hermann.
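If you end up scripting the links instead, it is only a few lines. A minimal sketch in Python, assuming both folders are on the same NTFS volume (hard links cannot span volumes); the folder names and the *.csv filter are illustrative:

import os
from pathlib import Path

STORE = Path(r"D:\archive\2014")           # hypothetical long-term store
CATEGORY = Path(r"D:\categories\reports")  # hypothetical category view

CATEGORY.mkdir(parents=True, exist_ok=True)
for src in STORE.glob("*.csv"):
    dst = CATEGORY / src.name
    if not dst.exists():
        os.link(src, dst)  # same content stored once, visible in both folders

Each category view then costs only directory entries, not extra copies of the data.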

Efficiency of searching files in a directory?

I am building a website with a user authentication system that allows each user to upload images to their account. Essentially I am doing this as a learning experience in web development, so please forgive my ignorance on the topic.
My question involves the efficiency of placing files into a directory. Is it more efficient to create a deeper directory structure or to place all files into one folder? The former seems obvious, but does it not depend on the search algorithm implemented by the file system?
For example:
root/user/2012/A/ B/ C/ D/
root/user/2013/A/ B/ C/ D/
root/user/2014/A/ B/ C/ D/
Or dump all files into a single folder?
root/user/
When an image is retrieved, for example by an <img> tag, which way provides a more efficient result? I have searched Google for information on the topic, but couldn't find anything definitive or at my level of understanding.
Accessing a single file should be roughly equivalent either way. Whether a single directory or multiple directories is better really depends on how you use the file listing. If you expect a user to have thousands of files and you only display a single year at a time, it may make sense to break the directory structure into multiple sections to keep the listings manageable. If you always show all the files, I suspect the single folder may be faster, since with multiple directories you would have to run several separate listings. I would do a few tests based on what you expect your app to deal with. My guess is that a single directory should be fine, unless you expect large numbers of files and can break the listing down.
I don't know what OS you intend to run on, but I'd go with the multiple-directories approach, as some file systems (NTFS on Windows, for example) slow down horribly when dealing with 10,000+ files in a single directory.
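A common way to get the multiple-directories layout without managing folders by hand is to shard by a content hash, so no single directory grows without bound. A minimal sketch in Python, where the uploads root and the two levels of two-character buckets are arbitrary choices:

import hashlib
import shutil
from pathlib import Path

UPLOAD_ROOT = Path("uploads")  # hypothetical storage root

def store_upload(tmp_path: Path) -> Path:
    # hash the content; for large files you would hash in chunks instead
    digest = hashlib.sha256(tmp_path.read_bytes()).hexdigest()
    # e.g. uploads/ab/cd/abcd1234....jpg - 256 buckets per level
    dest = UPLOAD_ROOT / digest[:2] / digest[2:4] / (digest + tmp_path.suffix)
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(tmp_path), dest)
    return dest

Your database row for the image stores the returned path, so the <img> tag never depends on how the shards are laid out.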

What is a database file system?

I have very little idea of what a database file system is.
Can somebody out here explain to me what a database file system actually is, and what its applications are?
How is it different from a conventional file system?
How can I build one?
Typical file systems (*nix, MS-DOS, etc.) organize files hierarchically. For example,
c:\ represents the top of a hierarchy
c:\foo is the next level in the hierarchy
c:\foo\bar is a sub-node of \foo
etc..
Each file exists in one and only one location in this hierarchy.
By contrast, a database file system organizes files by metadata attributes - for example topic, type, author, etc. Rather than existing in one particular place in a hierarchy, a file exists in multiple "places" depending on its attributes.
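To illustrate the idea (this is not any particular product): a minimal sketch of such a metadata index in Python with SQLite, where one file is reachable through any of its attributes. The schema and file paths are made up for the example:

import sqlite3

con = sqlite3.connect("filedb.sqlite")
con.executescript("""
CREATE TABLE IF NOT EXISTS files (id INTEGER PRIMARY KEY, path TEXT UNIQUE);
CREATE TABLE IF NOT EXISTS tags  (file_id INTEGER REFERENCES files(id),
                                  attr TEXT, value TEXT);
""")

def add_file(path, **attrs):
    con.execute("INSERT OR IGNORE INTO files(path) VALUES (?)", (path,))
    file_id = con.execute("SELECT id FROM files WHERE path = ?", (path,)).fetchone()[0]
    con.executemany("INSERT INTO tags VALUES (?, ?, ?)",
                    [(file_id, a, v) for a, v in attrs.items()])
    con.commit()

def find(attr, value):
    rows = con.execute("SELECT path FROM files JOIN tags ON tags.file_id = files.id "
                       "WHERE attr = ? AND value = ?", (attr, value))
    return [r[0] for r in rows]

add_file("/docs/paper.pdf", topic="filesystems", author="smith", type="pdf")
print(find("topic", "filesystems"))  # the same file is also reachable via author or type

A real database file system pushes this kind of indexing down into the file system layer itself, rather than keeping it in a separate application.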
The last question you ask is unanswerable.
Found some good links
DBFS (This one is really good)
Towards A Single Folder Filesystem
It's a file system where files have significant amounts of metadata. For example, the iTunes library might count as a database file system; not only do you have files on disk and know where they are, but you have tags (genres) and other metadata like author (artist).
It's a file system that stores files as blobs in a database, rather than in a hierarchy of directories. Imagine a web-site with no "directory-like" hierarchy in the URL - just loads of tags and categories and a big "search" field - something like that, only on your hard-drive.
Pros & cons? Ask yourself, how many database filesystems have I ever seen? Do you need to ask more?
