Hello, I'm trying to find the Perforce syntax to obtain (for example using "fstat") a list of files only in a given folder (depot), without the clutter from all sub-folders. But I was not able to find anything in the docs or via Google, and even experimenting with ".", ".../." etc. led me nowhere...
Is that because it's not possible at all? I can't understand why... Isn't that a performance hit?!
Thanks in advance.
Seb.
A single '*' expands to "all files in this directory" in p4 (no subdirectories). So, e.g. at a Unix shell prompt, in the correct directory in a Perforce client:
$ p4 fstat '*'
You need to quote or escape the * to prevent the shell from expanding it, of course ;-).
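The same single-level wildcard works with depot syntax too, e.g. p4 fstat '//depot/Folder1/Folderx/*' lists only the files directly inside that folder; it's the "..." wildcard that recurses into subdirectories.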
Ah finally.
It was partially my own fault - I'd set ExceptionLevel to ExceptionOnBothErrorsAndWarnings... I needed full debug... Unfortunately:
When the exception was raised, there was no Response object created, and I could not read the warning message, which wasn't part of the exception message (or object).
Using '//depot/Folder1/Folderx/*' threw the warning "No such file(s)!", which is not something a developer might expect, as it isn't really a special case...
It seems I still have much to learn about Perforce though :-/
Thank you guys for your posts.
Seb.
I am confused by the following code from Tcl wiki page 1089:
#define TEMPBUFSIZE 256 /* usually enough space! */
char buf[[TEMPBUFSIZE]];
I was curious, so I tried to compile the above syntax with gcc and armcc; both fail. I was trying to understand how Tcl's handle-to-file-pointer mechanism works, in order to sort out the chaos in data logging when multiple jobs run in the same folder [log files unique to jobs].
I have multiple tcl scripts running in parallel as LSF Jobs each using a log file.
For example,
Job1 -> log1.txt
Job2 -> log2.txt
(file write in both case is "intermittent" over the entire job execution)
Some of the text that I expect to be part of log1.txt is written to log2.txt, and vice versa, at random. I have tried "fconfigure $fp -buffering none", but the behaviour still persists. One important note: all the LSF jobs are submitted from the same folder, and if I submit the jobs from individual folders, the log files don't contain text written by other jobs. I would like the jobs to be executed from the same folder, to avoid the space consumed by repeating the resources in different folders.
Question 1:
Can anyone advise me on how a Tcl "handle" is translated into a pointer to the memory allocated for the log file? I said "intermittent" because of the following: "Tcl maps this string internally to an open file pointer when it is time for the interpreter to do some file I/O against that particular file" - wiki 1089.
Question 2:
Is there a possibility that two different "open" calls can end up with the same "file"?
Somewhere along the line, the code has been mangled; it looks like it happened when I converted the syntax from one type of highlighting scheme to another in 2011. Oops! My original content used:
char buf[TEMPBUFSIZE];
and that's what you should use. (I've updated the wiki page to fix this.)
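On Question 1, as far as I can tell the handle string such as "file6" is just a key into the interpreter's channel table. Here is a minimal C sketch of that lookup using Tcl's public Tcl_GetChannel API (the API call and its signature are real; the wrapper function around it is illustrative only):

#include <tcl.h>

/* Resolve a script-level handle string such as "file6" to the
   underlying channel, roughly what the interpreter does whenever
   an I/O command receives a handle. */
Tcl_Channel lookup_handle(Tcl_Interp *interp, const char *handle)
{
    int mode;  /* filled with TCL_READABLE and/or TCL_WRITABLE */
    Tcl_Channel chan = Tcl_GetChannel(interp, handle, &mode);

    /* NULL means the name is unknown in this interpreter; handle
       names have no meaning outside the process that created them. */
    return chan;
}

On Question 2: because that table is per-interpreter, two parallel jobs can each hold a handle with the same name without sharing anything at the Tcl level, so a name collision between processes is not, by itself, what makes two jobs write into each other's logs.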
Because of my slightly obsessive personality, I've been losing most of my productive time to a single little problem.
I recently switched from Mac OS X Tiger to Yosemite (yes, it's a fairly large leap). I didn't think AppleScript had changed that much, but I encountered a problem I don't remember having in the old days. I had the following code, but with a valid filepath:
set my_filepath to (* replace with string of POSIX filepath, because typing
colons was too much work *)
set my_file to open for access POSIX file my_filepath with write permission
The rest of the code had an error which I resolved fairly easily, but the error stopped the script before the close access command, and of course AppleScript left the file reference open. So when I tried to run the script again, I was informed of a syntax error: the file is already open. This was to be expected.
I ran into a problem trying to close the reference: no matter what I did, I received an error message stating that the file wasn't open. I tried close access POSIX file (* filepath string again *), close access file (* whatever that AppleScript filepath format is called *), et cetera. Eventually I solved the problem by restarting my computer, but that's not exactly an elegant solution. If no other solution presents itself, then so be it; however, for intellectual and practical reasons, I am not satisfied with rebooting to close access. Does anyone have insights regarding this issue?
I suspect I've overlooked something glaringly obvious.
Edit: Wait, no, my switch wasn't directly from Tiger; I had an intermediate stage in Snow Leopard, but I didn't do much scripting then. I have no idea if this is relevant.
Agreed that restarting is probably the easiest solution. One other idea, though, is the Unix utility "lsof", which lists all open files. It returns a rather large list, so you can combine it with "grep" to filter it. So next time, try this from the Terminal and see if you get a result...
lsof +fg | grep -i 'filename'
If you get a result you will get a process id (PID) and you could potentially kill/quit the process which is holding the file open, and thus close the file. I never tried it for this situation but it might work.
Have you ever had the Trash refuse to empty because it says a file is open? That's when I use this approach and it works most of the time. I actually made an application called What's Keeping Me (found here) to help people with this one problem and it uses this code as the basis for the app. Maybe it will work in this situation too.
Good luck.
When I've had this problem, it's generally sufficient to quit the Script Editor and reopen it; a full restart of the machine is likely excessive. If you're running this from the Script Menu rather than Script Editor, you might try turning off the Script Menu (from Script Editor) and turning it back on again. The point is that files are held by processes, and if you quit the process it should release any lingering file pointers.
I've gotten into the habit, when I use open for access, of using try blocks to catch file errors. e.g.:
set filepath to "/some/posix/path"
try
set fp to open for access filepath
on error errstr number errnum
try
close access filepath
set fp to open for access filepath
on error errstr number errnum
display dialog errnum & ": " & errstr
end try
end try
This will try to open the file, try to close it and reopen it if it encounters an error, and report the error if it runs into more problems.
An alternative (and what I usually do) is to comment out the open for access line and just add a close access my_file to fix it.
I'm trying to implement the ls command with wildcard, *.
I have just learned that most shells expand an ls argument containing * into the matching entries before running the ls command.
For example, suppose the directory foo contains a.file, b.file, and the directory bar.
The directory bar in turn contains c.file, d.file, and e.file.
Assume that the current directory is foo.
Then the argument */* is expanded to the following entries:
"bar/c.file", "bar/d.file", "bar/e.file"
How can a program perform this expansion? I don't know where to start, and there are many possible cases:
*/../*, ../../*, */*/*, etc.
Any advice would be awesome. Thank you.
You can of course use glob() to do a lot of this work.
Such patterns are called globs, for some reason I won't dig up now. :)
POSIX provides glob(3) for programmatic wildcard path expansion.
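For instance, here is a minimal C sketch of the same expansion using glob(3); the pattern */* is the one from the question, and error handling is abbreviated:

#include <glob.h>
#include <stdio.h>

int main(void)
{
    glob_t results;

    /* Expand the pattern against the current directory, just as
       the shell would before invoking ls. */
    int rc = glob("*/*", 0, NULL, &results);
    if (rc == GLOB_NOMATCH) {
        fprintf(stderr, "no matches\n");
        return 1;
    }
    if (rc != 0) {
        fprintf(stderr, "glob failed\n");
        return 1;
    }

    for (size_t i = 0; i < results.gl_pathc; i++)
        printf("%s\n", results.gl_pathv[i]);

    globfree(&results);
    return 0;
}

glob() also handles patterns like */../* and */*/*, since it walks each directory component and matches the wildcard segments one level at a time.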
I have an application developed in C. This application is supported across multiple platforms. There is one piece of functionality where we transfer files via a file transfer protocol to a different machine, or to another directory on the local machine. I want to add functionality where I transfer the file under a temporary name and, once the transfer is complete, rename it to the correct name (the actual file name).
I tried using the simple rename() function. It works fine on Unix and Linux machines, but it does not work on Windows: it gives me an error code of 13 (permission denied).
First, I checked MSDN to see whether rename requires me to grant some permissions to the file, etc.
I granted full permissions to the file (let's say 777).
I read in a few other posts that I should close the file descriptor before renaming the file. I did that too. It still gives the same error.
A few other posts mentioned the owner of the file and that of the application. The application runs as the SYSTEM user. (But this should not affect the behavior, because I tried the same rename function in my application as follows.)
This works fine from my application:
rename("C:/abc/aaa.txt","C:/abc/zzz.txt");
but
rename(My_path,"C:/abc/zzz.txt");
doesn't work, where My_path, when printed, displays C:/abc/test.txt.
How can I rename a file? I need it to work on multiple platforms.
Are there any other things I should try to make it work?
I had this same problem, but the issue was slightly different. If I did the following sequence of function calls, I got "Permission Denied" when calling the rename function.
fopen
fwrite
rename
fclose
The solution was to close the file first, before doing the rename.
fopen
fwrite
fclose
rename
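In code, the working sequence looks something like this (a minimal sketch; the file names are placeholders):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("transfer.tmp", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fputs("transferred data\n", fp);

    /* Windows refuses to rename a file that still has an open
       handle, so close the stream before calling rename(). */
    if (fclose(fp) != 0) {
        perror("fclose");
        return 1;
    }

    if (rename("transfer.tmp", "transfer.txt") != 0) {
        perror("rename");
        return 1;
    }
    return 0;
}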
If
rename("C:/abc/aaa.txt","C:/abc/zzz.txt");
works but
rename(My_path,"C:/abc/zzz.txt");
does not, in the exact same spot in the program (i.e. replacing one line with the other and making no other changes), then there might be something wrong with the variable My_path. What is the type of this variable? If it is a char array (since this is C), is it terminated appropriately? And is it exactly equal to "C:/abc/aaa.txt"?
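One quick check is to print the variable with visible delimiters just before the rename call; a sketch (the helper name is mine, My_path is the variable from the question):

#include <stdio.h>
#include <string.h>

/* Print the path between brackets so trailing garbage, control
   characters, or a missing NUL terminator stand out immediately. */
static void dump_path(const char *my_path)
{
    printf("My_path = [%s], length = %zu\n", my_path, strlen(my_path));
}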
(I wish I could post this as a comment/clarification rather than as an answer but my rep isn't good enough :( )
I've been tasked with mirroring a site onto a new server. The old site has a few Perl scripts that, as far as I can see internally (I know nothing about Perl, though I have a pretty good understanding of coding generally, and specifically PHP/JS/etc.), aren't reliant on the old server. That said, when I try to run this script, which looks through a database file to find the appropriate article file, it doesn't retrieve anything.
Basically, this is a rudimentary old CMS, as I understand it, which searched the PAG file for the filename and displayed the article. I am a little bit lost here. Is there a reason why the mirroring doesn't work on the new site? I've checked the permissions, and I've checked that Perl is installed in the same /usr/etc directories. I think it uses dbm because, according to another article, if I see commands like these:
dbmopen( %ARTS, $art_dbm, 0644 );
$entry = $ARTS{$article_id};
dbmclose( %ARTS );
it must be dbm, right?
On a related note, is there any way to merge that PAG file's info with the original files without an incredibly sophisticated Perl script; i.e., recreate the 100 text files with that info in the file itself, rather than stored separately?
EDIT: Thanks for the first answer below. Can you explain what that HASH may be, and the mask? I've double-checked that the .pag file (the database name) is indeed in the place where it's defined earlier in the .pl file, and that it was transferred in binary. Yet somehow I can't get it to open correctly!
EDIT 3: OK, sorry, final edit here: I used the die code below (Schwern) and found that it is not finding that DB file, despite it being there (two files, articles.pag and articles.dir, but the variable only references "articles" without extensions) in the right directory and with the right permissions... So the question now is: what the hell is going on? Are these different versions of Perl? Or am I just doing something basic and stupid? For the record (yes, it's terrible), I don't have shell access just yet, though I'm working on it... I was asked to do this because of my "new web" skills, and I'm certainly not the appropriate person for things like Perl and dbm, though I can read the files and understand them. As a final suggestion, does anyone know how (a script or the like) I could ask the original server people (who are NOT the coders) to do an ASCII dump of this, or would that be out of line? I need to get this into CSV and back into the file so I can reuse it in another db... ugh, what a nightmare!
If I read your question correctly, you're having difficulty opening the database on a new machine. Does the database exist there?
The documentation for the dbmopen method is available on the command line via perldoc -f dbmopen (and at this link for the latest stable perl release, 5.10.1).
As you can see from the docs, the second argument to dbmopen contains the filename being opened. In the code you pasted, that's contained in the scalar variable $art_dbm. So what you need to do is look for some earlier declaration of this variable (perhaps it is loaded in from a configuration file, or it could be hard-coded). Then once you've found that DB, all that should be necessary is transferring that file over to your new machine.
If you need more help deciphering the code, feel free to edit your question with a code snippet and we can go from there.
(Now, if you've found the database but you just can't open it, you've got some other problem... It's been a long time since I dealt with PAG files, however.)
Do you still have access to the original machines?
Although you are using a DBM file, the actual functionality can come from one of several implementations, some of which are not compatible. I'd dump the file with the same perl that created it, then recreate it with the new perl.
There are a few things which could be going wrong. The most obvious is that the dbmopen() call isn't opening the file. If the DBM file doesn't exist, rather than failing, dbmopen() just makes a new one, which could be why it appears empty.
To eliminate that possibility, make sure the DBM file does exist and is readable. You also want to check that the dbmopen() succeeded; it will (usually) error out if it's the wrong format.
die "$art_dbm does not exist" unless -e $art_dbm;
die "Cannot read $art_dbm" unless -r $art_dbm;
dbmopen( %ARTS, $art_dbm, 0644 ) or die "dbmopen of $art_dbm failed: $!";
Unfortunately dbmopen() is too clever for its own good. If you give it "foo" it might create "foo.db" instead. Depends on the implementation. See below.
The other possibility is that your two Perls are trying to open the file with two different DBM implementations. Perl can be compiled with different sets of DBM implementations on different machines. dbmopen() will use the first one in a hard-coded (and historically barnacled) list. It's actually a wrapper around AnyDBM_File. You can check which implementation is being used with...
use AnyDBM_File;
print "#AnyDBM_File::ISA\n";
Make sure they're the same. If not, load the DBM library in question before using dbmopen. perldoc -f dbmopen explains.
Here's a demonstration. First we see what dbmopen() will default to.
$ perl -wle 'use AnyDBM_File; print "@AnyDBM_File::ISA"'
NDBM_File
Then create and populate a dbm file.
$ perl -wle 'dbmopen(%foo, "tmpdbm", 0644) or die $!; $foo{23} = 42; print %foo'
2342
Now demonstrate we can read it.
$ perl -wle 'dbmopen(%foo, "tmpdbm", 0644) or die $!; print %foo'
2342
And try to read it using a different DBM implementation.
$ perl -wle 'use GDBM_File; dbmopen(%foo, "tmpdbm", 0644) or die $!; print %foo'
Nothing in the file, but no error either. Turns out it made a file called tmpdbm whereas ndbm was using tmpdbm.db. Let's try Berkeley DB.
$ perl -wle 'use DB_File; dbmopen(%foo, "tmpdbm", 0644) or die $!; print %foo'
Inappropriate file type or format at -e line 1.
At least that gives an error.
Your best bet is to figure out what DBM implementation the original machine is using and use that module before the dbmopen() call. That will make the situation static.
PS The Unix file utility will also give you a good idea what type of DBM it is.
$ file tmpdbm
tmpdbm: GNU dbm 1.x or ndbm database, little endian
$ file tmpdbm.db
tmpdbm.db: Berkeley DB 1.85 (Hash, version 2, native byte-order)
And hope to $diety it's not a byte-order issue, less common now that almost everything is x86.
PPS As you can see, using DBM files is a bit of a mess. Strange, considering it's supposed to be just a hash-on-disk.
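PPPS If you do eventually get shell access and the file turns out to be classic ndbm, a tiny C program linked against the same DBM library is another way to get an ASCII dump. A rough sketch, assuming the standard ndbm API and "articles" as the base name (dbm_open adds the .dir/.pag extensions itself); the output is only clean if keys and values are plain text without tabs or newlines:

#include <ndbm.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* Pass the base name without extensions, exactly as the
       Perl code's $art_dbm does. */
    DBM *db = dbm_open("articles", O_RDONLY, 0);
    if (db == NULL) {
        perror("dbm_open");
        return 1;
    }

    /* Walk every key and print tab-separated key/value pairs. */
    for (datum key = dbm_firstkey(db); key.dptr != NULL; key = dbm_nextkey(db)) {
        datum val = dbm_fetch(db, key);
        printf("%.*s\t%.*s\n",
               (int)key.dsize, (char *)key.dptr,
               (int)val.dsize, (char *)val.dptr);
    }

    dbm_close(db);
    return 0;
}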