I've been searching Google as well as the OpenVMS System Administrator's Guide and User Guide, and still can't find anything about listing the directories present on an OpenVMS volume. I can't see how this could be taken for granted in the docs, since everything else is very specific, so either I'm failing to see it or it can't be done. If it can't be done, then I'm missing some incredibly large chunk of the picture with regard to using VMS. Any suggestions are appreciated.
TIA,
grobe0ba
By "listing", I assume you mean via a command such as Dir...
To see all the directories on a volume, I would do something like:
$ dir volumeid:[000000...]*.dir
Of course, you need enough privilege to be able to see all the directories on the volume.
For a quick overview of all the directories, you may also check out the /TOTAL qualifier for DIRECTORY.
$ DIRE /TOTAL [*...]
Add /SIZE for effect (and slowdown)
You can of course post-process to your heart's content...
$ pipe dir /total data:[*...] | perl -ne "print if /^Dir/"
Directory DATA:[CDC]
Directory DATA:[CDC.ALPHA]
Directory DATA:[CDC.ALPHA.V8_3]
$ pipe dir /total data:[*...] | searc sys$pipe "ory "
Directory DATA:[CDC]
Directory DATA:[CDC.ALPHA]
Directory DATA:[CDC.ALPHA.V8_3]
$ pipe dir /total data:[*...] | perl -ne "chomp; $x=$1 if /^Di.* (\S+)/; printf qq(%-60s%-19s\n),$x,$_ if /Tot/"
DATA:[CDC] Total of 7 files.
DATA:[CDC.ALPHA] Total of 1 file.
DATA:[CDC.ALPHA.V8_3] Total of 11 files.
Finally, if you are serious about playing with files and directories on OpenVMS, be sure to google for DFU OPENVMS ... download and enjoy.
Unfortunately I do not have the reputation required for commenting, so I have to reformulate this as an answer.
@ChrisB
This answer, while upvoted, is not correct generally speaking. Directories are always files ending with .DIR and having a version of 1. Renaming a directory to *.DIR;x with x>1 will render the directory non-traversable. The .DIR file, however, retains its directory characteristics, and renaming it back to ;1 restores its normal behavior.
So one may add ;1 to the DIR command:
$ dir volumeid:[000000...]*.dir;1
But again this is not valid, because anyone may create *.DIR files which are not directories (e.g. EDIT TEST.DIR), and there are applications out there doing so.
@Hein
So the second answer, from Hein, which at this time has 0 votes, is the correct one. The one that does exactly the requested operation without a third-party tool is:
$ PIPE DIR /TOTAL volume:[*...] | SEARCH SYS$PIPE "ory "
This command shows only valid directories.
I have tried to find an answer to my question by looking at similar topics, but didn't succeed. Maybe I have overlooked something. Any help is appreciated!
So, I have hundreds of folders in my current directory, named folder1000 to folder1500. In each folder, I have one .fastq file with a different name (Lib1.fastq, Lib2.fastq, etc.). I want to process each of these files in one loop by running a shell script.
Here is my shell script (script.sh) for one file (it creates outputs which are processed further), which I run in my Terminal:
#!/bin/sh
bowtie --threads 4 -v 2 -m 10 -a genome Lib1.fastq --sam > Lib1.sam   # align reads; SAM goes to stdout
samtools view -b -o Lib1.bam Lib1.sam                                 # convert the SAM output to BAM
sort -k 3,3 -k 4,4n Lib1.sam > Lib1.sam.sorted                        # sort by reference name, then position
# ...etc
Here is the loop I am trying to write, also as a shell script (here I have started with just a simple "head" check, and only the first few folders), which I run from my current directory where all the folders are located:
#!/bin/sh
for file in ./folder{1000..1005}
do
head -10 *.fastq
done
But as a result I get:
head: *.fastq: No such file or directory
head: *.fastq: No such file or directory
head: *.fastq: No such file or directory
head: *.fastq: No such file or directory
head: *.fastq: No such file or directory
So even a simple check does not work for me in a loop; somehow I cannot see the files. But if I run the command directly in one of the folders:
MacBook-Air-Maxim:folder1000 maxim$ head -10 *.fastq
then I get the correct result (the first 10 lines of the file displayed).
Could anyone suggest the way to process all files in the most convenient way?
Thanks a lot and very sorry, I am just learning.
Well, you are iterating over the folders using the variable $file, but you are not using this variable in the loop body. Just use it:
#!/bin/sh
for file in ./folder{1000..1005}
do
head -10 "$file"/*.fastq
done
There are other issues in the overall problem, but this is the answer to the point that is stopping you. Let's tackle the problems one by one :-)
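Once that works, the full pipeline can be run the same way. Here is a minimal sketch under the question's stated assumptions (exactly one .fastq per folder); note the #!/bin/bash shebang, since the {1000..1500} brace expansion is a bash feature and is not guaranteed under plain sh:
#!/bin/bash
# Run the whole pipeline once per folder; assumes exactly one .fastq per folder.
for dir in ./folder{1000..1500}
do
    for fq in "$dir"/*.fastq
    do
        name="${fq%.fastq}"      # e.g. ./folder1000/Lib1
        bowtie --threads 4 -v 2 -m 10 -a genome "$fq" --sam > "$name.sam"
        samtools view -b -o "$name.bam" "$name.sam"
        sort -k 3,3 -k 4,4n "$name.sam" > "$name.sam.sorted"
    done
done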
So I have a very big folder full of subfolders which hold files that all have their regular extension, but with ,v appended (like .xml,v).
Is there a quick way/program to go through all of the folders and, whenever it finds a ,v, remove the ,v from the file name?
Thanks
EDIT: I am running Windows 7 (64-bit). Also please remember that I am an idiot :P
Use find to list the files ending in ,v, and pipe the output to a shell loop that renames the files.
${f%%,v} expands to the file name without the ,v suffix.
find . -name '*,v' | while IFS= read -r f; do mv "$f" "${f%%,v}"; done
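Before running it for real, the same loop with echo in front of mv is a harmless dry run that only prints the renames it would perform:
find . -name '*,v' | while IFS= read -r f; do echo mv "$f" "${f%%,v}"; done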
It's not clear where you have the files (on your computer or on a server), or what the platform is (Windows or Linux)...
There are multiple ways to solve this depending on the scenario (for example, a tiny batch file can do it in a flash if the folder is on your local Windows computer)...
In my program, I have to make a file hidden in order to prevent removal or modification of the file.
PATH=/etc/
NAME = file
Is there a function in C that will allow me to do that?
You can just add a . to the front of the file name. Having said that, if your goal is to prevent modification of the file, change the permissions to something that can't be modified. Something like:
chmod 444 fileName
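A minimal sketch combining both suggestions (fileName is just the placeholder used above):
mv fileName .fileName   # the leading dot hides it from a plain ls
chmod 444 .fileName     # read-only for owner, group, and others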
First: others argue with security arguments here. For those: hidden files have nothing to do with security, nor will hiding prevent somebody from deleting a file if he has the proper permissions and wants to do that.
Hidden means only that tools like ls, bash globs, or graphical file managers will not display the files with their default settings. This can be useful to prevent accidents (see the explanation below) or just to keep directory listings cleaner. You may try the commands ls -l $HOME and ls -al $HOME to see the difference.
On GNU/Linux systems and UNIXes, by convention, files whose names begin with a dot (.) are not displayed by default, meaning they are hidden, like $HOME/.bashrc.
Solution: Prefix the file name with a dot:
.file
About accidents: hiding a file can prevent you from accidentally removing it when you type something like:
rm *
The glob above will not list hidden files, so they won't get deleted.
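A quick demonstration of that, in an otherwise empty directory (file names are hypothetical):
$ touch visible .hidden
$ rm *
$ ls -a
.  ..  .hidden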
In Linux, hidden files start with a . (dot); if you create files whose names start with a dot, those files are hidden.
You can use chmod to set permissions on the file; if you make it read-only, it cannot be modified by a program:
chmod 444 filename
If you want to do this from C, use the system() function to execute the command.
If you use a plain ls -alF, you can still see those files.
The files below are hidden files in Linux:
-rw------- 1 root root 27671 Sep 17 11:40 .bash_history
-rw-r--r-- 1 root root 3512 Jul 23 16:30 .bashrc
There are no truly hidden files on Linux; some tools simply don't show files starting with ., as others have already mentioned.
Anyway, you can experiment with putting control characters like newline into the filename. See Control characters in filenames are a terrible idea:
Some control characters, particularly the escape (ESC) character, can cause all sorts of display problems, including security problems. Terminals (like xterm, gnome-terminal, the Linux console, etc.) implement control sequences. Most software developers don’t understand that merely displaying filenames can cause security problems if they can contain control characters. The GNU ls program tries to protect users from this effect by default (see the -N option), but many people display filenames without getting filtered by ls — and the problem returns. H. D. Moore’s “Terminal Emulator Security Issues” (2003) summarizes some of the security issues; modern terminal emulators try to disable the most dangerous ones, but they can still cause trouble. A filename with embedded control characters can (when displayed) cause function keys to be renamed, set X atoms, change displays in misleading ways, and so on. To counter this, some programs modify control characters (such as find and ls) — making it even harder to correctly handle files with such names.
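A small illustration of such a filename, in an otherwise empty directory (the name is hypothetical, and the exact rendering varies by ls version):
$ touch "$(printf 'report\n.txt')"   # a file name containing a literal newline
$ ls -q
report?.txt                          # ls -q replaces non-printable characters with ?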
Your requirements are a bit vague: the program creates a file, wants to prevent its removal or modification. Do you expect other users (of your program? in general?) to be able to read it, but not find it easily, or modify or delete it?
Keep in mind that Unix-like systems don't really do hidden when the resource involved needs to remain visible (readable, presumably), as others have noted. Prepending a '.' to a file name helps in some important contexts (default ls(1) behavior and shell * globbing in particular) but only goes so far. But a few techniques might help obscure what and where your app is saving things, if that matters.
Consider two users doing some shell commands like the following in a directory with its sticky bit set (say /tmp). (Sorry not to write C, but I think the scenario is easier to demonstrate in the shell.)
As Bob:
$ umask 066
$ mkdir /tmp/.hidden
$ umask 022
$ echo xyzzy > /tmp/.hidden/mysecret.txt
$ ls -la /tmp/.hidden
total 28
drwx--x--x 2 bob users 4096 Sep 17 11:19 .
drwxrwxrwt 27 root root 20480 Sep 17 11:26 ..
-rw-r--r-- 1 bob users 6 Sep 17 11:19 mysecret.txt
As Alice. Notice that attempts to search in /tmp/.hidden fail, but if she knows the name of a file in a directory with only execute but not read permissions set, she can read the file. She can't do much to mess with /tmp/.hidden, once it's properly created. If she'd been forced to guess the name of the secret file, that could also be a challenge depending on how the name is created.
$ ls /tmp | grep hidden
$ ls -a /tmp | grep hidden
.hidden
$ file /tmp/.hidden
/tmp/.hidden: directory
$ ls /tmp/.hidden
ls: cannot open directory /tmp/.hidden: Permission denied
$ echo /tmp/.hidden/*
/tmp/.hidden/*
$ file /tmp/.hidden/mysecret.txt
/tmp/.hidden/mysecret.txt: ASCII text
$ cat /tmp/.hidden/mysecret.txt
xyzzy
$ rm -f /tmp/.hidden/mysecret.txt
rm: cannot remove '/tmp/.hidden/mysecret.txt': Permission denied
$ mv /tmp/.hidden /tmp/Hidden_No_More
mv: cannot move '/tmp/.hidden' to '/tmp/Hidden_No_More': Operation not permitted
$ rm -rf /tmp/.hidden
rm: cannot remove '/tmp/.hidden': Permission denied
In this scenario, the presence of the hidden directory can be obscured, but ls -a reveals its name. Carefully chosen directory permissions prevent non-root and non-Bob users from listing or altering its contents. The use of a sticky-bit directory like /tmp prevents non-Bobs from renaming or removing the "hidden" directory. Anyone who knows the name of the "secret" file within the hidden directory can read it. But only Bob and root can change these "secret" files or the "hidden" directory.
You can do all the above in a C program; equivalents exist as library and system calls - see things like chmod(2), mkdtemp(3), umask(2), the mode argument to open(2), etc.
If you use a kernel >= 3.11, you might want to try the O_TMPFILE flag. That kernel was released in September 2013, and Debian Jessie uses kernel 3.16, so this feature should be available on all recent popular distributions.
The news about this sounds promising: the file will be unreachable from the outside; no other process may access it, neither for reading nor for writing. But the file will be lost as soon as the handle gets closed, unless you link it to a regular file, in which case it becomes accessible like any other file.
If this is not an option for you (e.g. your file needs to be persistent): bad luck. There are no really hidden files in Linux. You can hide your persistent files about as securely as Windows files with the hidden attribute: prepend the name with a dot. As stated by others, ls -a will show them nevertheless.
Also, you can create a user specifically for this purpose and make the file readable and writable only by that user, or put it in a folder to which only that user has rw access. Other users may see the file but won't be able to access it. But if root comes along and wants to look into it, you have lost.
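A minimal sketch of that last approach (appuser and datafile are hypothetical names):
$ sudo useradd --system appuser   # dedicated account just for the program
$ sudo chown appuser: datafile
$ sudo chmod 600 datafile         # only appuser (and root) can read or write it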
Sure, you have to add '.' before the filename, and your file won't be seen by users (unless they turn the show-hidden-files option on). You could also change the permissions (chmod) to 755, so that only the owner can rwx while others can only rx.
@hek2mgl - partially, yes, it has. Try removing all of a directory's content via rm -rf *: the hidden files survive. That's why, for example, .htaccess is hidden.
I have a folder with a few files in it; I like to keep my folder clean of any stray files that can end up in it. Such stray files may include automatically generated backup files or log files, but could be as simple as someone accidentally saving to the wrong folder (my folder).
Rather than having to pick through all this all the time, I would like to know if I can create a batch file that keeps only a number of specified files (by name and location) but deletes anything not on the "list".
[edit] Sorry, when I first saw the question I read bash instead of batch. I won't delete this not-so-useful answer since, as was pointed out in the comments, it could be done with Cygwin.
You can list the files, exclude the ones you want to keep with grep, and then submit the rest to rm.
If all the files are in one directory:
ls | grep -v -f ~/.list_of_files_to_exclude | xargs rm
or in a directory tree
find . | grep -v -f ~/.list_of_files_to_exclude | xargs rm
where ~/.list_of_files_to_exclude is a file with the list of patterns to exclude (one per line).
Before testing it, make a backup copy, and substitute echo for rm to see if the output is really what you want.
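For instance, the dry-run version of the single-directory command would be:
ls | grep -v -f ~/.list_of_files_to_exclude | xargs echo rm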
Whitelists for file survival are an incredibly dangerous concept. I would strongly suggest rethinking that.
If you must do it, might I suggest that you actually implement it thus:
Move ALL files to a backup area (one created per run, such as a directory named with the current date and time).
Use your whitelist to copy back the files that you wanted to keep, such as with copy c:\backups\2011_04_07_11_52_04\*.cpp c:\original_dir.
That way, you keep all the non-whitelisted files in case you screw up (and you will at some point, trust me), and you don't have to worry about negative logic in your batch file (remove all files that aren't of all these types), instead using the simpler option (move back every file that is of each type).
In our FreeBSD environment, where we have one server that acts as a file server, we have a problem that our system administrator says cannot be fixed.
All our files resides in a directory and we all have access to that directory, its sub-directories and files. The problem is that once a user in our group creates a file or directory, we have to chmod that directory or file to change the rights so that others in our group can access, read, write and delete. These are not files or sub-directories inside our home-directories, but in a directory where we are supposed to work with them on a daily basis.
Finding it difficult to believe that there is no good solution, I would ask that someone suggest one.
I think what you want is the setgid bit on the directories, plus an appropriate umask. Then files and directories newly created there will have the proper group and proper permissions to let others read and write them.
find /your-files-are-rooted-here -type d -print0 | xargs -0 chmod ug+rw,g+s
and set umask to 002 (or whatever is appropriate). And, of course, you may want to fix permissions for existing files (the command above only takes care of directories).
One place to put the umask setting is /etc/bashrc. Find "umask" and change "umask 022" to "umask 002". After doing this, when a new file is created, everyone in the same group as the file owner can write to it.
Note that this only works for files created from the shell, specifically bash.
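A short sketch of the combined effect (directory, user, and group names are hypothetical):
$ chmod g+s /work/shared                # setgid: new entries inherit the directory's group
$ umask 002
$ touch /work/shared/newfile
$ ls -l /work/shared/newfile
-rw-rw-r--  1 alice  devs  0 Sep 17 12:00 /work/shared/newfile   # group-writable, group inherited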