What is the proper way to create new projects with hgsubversion, svn, and mercurial?

I have switched my development to mercurial, but I still have to talk with a subversion server using hgsubversion. I am up and running locally with hg for all of the projects that already existed in subversion.
My question is this: what is the best workflow for creating a new project that will ultimately live in subversion?
I tried to get a new project off the ground with hg init, followed by pushes to subversion. But I just got lost, and couldn't get it to work. I decided that the best workflow would be to create the project straight into subversion, completely ignorant of mercurial's existence, and then clone with hgsubversion. But I want to know if there is a better way.
For what it's worth, we are using the classic trunk/tags/branches directory structure in Subversion. Other developers are still using svn directly.

I decided that the best workflow would be to create the project straight into subversion, completely ignorant of mercurial's existence, and then clone with hgsubversion.
It's not only the best way, it's the only one that works. A repository created with hg init, with the hgsubversion extension enabled later and a Subversion URL added under [paths], will not work for pull/push.
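That workflow can be sketched as follows; the repository paths, URLs, and commit messages are made up for illustration, and the final clone assumes the hgsubversion extension is enabled:

```shell
# 1. Create the project and its standard layout directly in Subversion.
svnadmin create /srv/svn/newproject
svn mkdir -m "Create standard layout" \
    file:///srv/svn/newproject/trunk \
    file:///srv/svn/newproject/branches \
    file:///srv/svn/newproject/tags

# 2. Commit at least one real file (e.g. an empty .hgignore) so the
#    initial clone gets a changeset to anchor future pushes on.
svn checkout file:///srv/svn/newproject/trunk wc
touch wc/.hgignore
svn add wc/.hgignore
svn commit -m "Add .hgignore" wc

# 3. Only now clone with hgsubversion; pull/push will share history.
hg clone file:///srv/svn/newproject newproject-hg
```

The key point is that the first Mercurial changeset originates in Subversion rather than from a local hg init.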
Addition (in reply to a comment by @bigh_29)
Yes, starting from Mercurial against an empty Subversion repository (with only the default directory layout created) also gave me unexpected and unpredictable results.
Initial state:
Created a local (file:///) SVN repo with a trunk/branches/tags tree, using TortoiseSVN (the tool doesn't matter)
svn log file:///Z:/SVN
------------------------------------------------------------------------
r1 | Badger | 2013-01-09 12:00:10 +0600 (Wed, 09 Jan 2013) | 1 line
Imported folder structure
------------------------------------------------------------------------
Cloned the repo (from the repo root) into Mercurial
hg clone file:///Z:/SVN z:\HG
[r1] Badger: Imported folder structure
no changes found
updating to branch default
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
(OK, Mercurial doesn't track empty dirs, so we have nothing to store in a changeset)
hg log -R z:\hg produces empty output
Testing the usual workflow - add, commit, push (latest TortoiseHg):
% hg add --repository Z:\HG Z:\HG\Intro.txt
[command completed successfully Wed Jan 09 12:17:31 2013]
% hg commit ...
Intro.txt
committed changeset 0:0a3fc4a9213d
[command completed successfully Wed Jan 09 12:17:31 2013]
% hg --repository Z:\HG push file:///Z:/SVN
pushing to file:///Z:/SVN
searching for changes
no changes found
[command completed successfully Wed Jan 09 12:18:02 2013]
"no changes found" - bad, very bad news. We now have different histories in Mercurial and the upstream Subversion repo.
>hg log
changeset: 0:0a3fc4a9213d
summary: Added file
svn log file:///Z:/SVN
------------------------------------------------------------------------
r1 | Badger | 2013-01-09 12:00:10 +0600 (Wed, 09 Jan 2013) | 1 line
Imported folder structure
------------------------------------------------------------------------
Now try adding a file into Subversion from an SVN working copy; it gets added:
>svn log file:///Z:/SVN
------------------------------------------------------------------------
r2 | Badger | 2013-01-09 12:22:38 +0600 (Wed, 09 Jan 2013) | 1 line
Added main file
------------------------------------------------------------------------
r1 | Badger | 2013-01-09 12:00:10 +0600 (Wed, 09 Jan 2013) | 1 line
Imported folder structure
------------------------------------------------------------------------
Pull (really fetch) from SVN
hg --repository Z:\HG fetch --verbose file:///Z:/SVN
pulling from file:///Z:/SVN
[r2] Badger: Added main file
A trunk/Topic.txt
Topic.txt
committed to "default" as fc8bf55ea98f
pulled 1 revisions
updating to 1:fc8bf55ea98f
resolving manifests
removing Intro.txt
getting Topic.txt
1 files updated, 0 files merged, 1 files removed, 0 files unresolved
merging with 0:0a3fc4a9213d
resolving manifests
getting Intro.txt
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
Intro.txt
new changeset 2:98c16d1829d8 merges remote changes with local
We get an ugly local history,
and still can't push ("Sorry, can't find svn parent of a merge revision."): our changeset 0 has no parent on the origin.
But if I first add a file to the tree on the SVN side (an empty .hgignore - it would have to appear in the Mercurial repo anyway):
> svn log file:///Z:/SVN -q -v
------------------------------------------------------------------------
r2 | Badger | 2013-01-09 13:44:47 +0600 (Wed, 09 Jan 2013)
Changed paths:
A /trunk/.hgignore
------------------------------------------------------------------------
r1 | Badger | 2013-01-09 13:43:27 +0600 (Wed, 09 Jan 2013)
Changed paths:
A /branches
A /tags
A /trunk
------------------------------------------------------------------------
then after cloning I get
>hg log
changeset: 0:71c7bc7bce68
tag: tip
user: Badger@1d57b098-00df-af47-a2e3-c1451e4b2f8d
date: Wed Jan 09 07:44:47 2013 +0000
summary: Added needed for successful cloning hgignore
and a file added and committed in Mercurial was pushed to Subversion without any headache:
% hg --repository Z:\HG push file:///Z:/SVN
pushing to file:///Z:/SVN
searching for changes
[r3] Badger: File from Mercurial
pulled 1 revisions
nothing to rebase
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
svn log file:///Z:/SVN -q -v -r "HEAD"
------------------------------------------------------------------------
r3 | Badger | 2013-01-09 13:59:15 +0600 (Wed, 09 Jan 2013)
Changed paths:
A /trunk/Test.txt
------------------------------------------------------------------------
Note: my test of Mercurial branches against an SVN origin also failed: a named branch with a changeset committed on it disappeared after pushing to Subversion (the changeset appeared in trunk instead of the expected /branches/BRANCHNAME).


Nagios Tool, total running time for the tool

Is there any way to check how long Nagios has been running? I mean when it started and how long it has been up since then.
Nagios is running on a remote machine to which I have access (through ssh). I have credentials both for the machine itself and for viewing the Nagios stats on it.
I tried System->Process Info, but I do not have privileges to view such information.
Is there any other way, through terminal?
You can use nagiostats to check the uptime of a Nagios instance. See: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/nagiostats.html
[nagios@lanman ~]# /usr/local/nagios/bin/nagiostats -c /usr/local/nagios/etc/nagios.cfg
Nagios Stats 3.0prealpha-05202006
Copyright (c) 2003-2007 Ethan Galstad (www.nagios.org)
Last Modified: 05-20-2006
License: GPL
CURRENT STATUS DATA
------------------------------------------------------
Status File: /usr/local/nagios/var/status.dat
Status File Age: 0d 0h 0m 9s
Status File Version: 3.0prealpha-05202006
Program Running Time: 0d 5h 20m 39s <------------
Nagios PID: 10119
Used/High/Total Command Buffers: 0 / 0 / 64
Used/High/Total Check Result Buffers: 0 / 7 / 512
...
Find the nagios.log file; it's likely in the var directory under the Nagios installation. Then:
grep "Nagios.*starting" nagios.log | tail -1
Grab the epoch time (first field), and convert it to local time.
date -d @1580045430
Sun Jan 26 07:30:30 CST 2020
All in one, assuming nagios.log is in the current directory.
date -d @$(grep "Nagios.*starting" nagios.log | tail -1 | awk '{print $1}' | sed 's/\[//;s/\]//')
Sun Jan 26 07:30:30 CST 2020
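If you can also read the Nagios lock (PID) file over ssh, ps can report the elapsed run time directly; the lock-file path below is the common default for a source install and is an assumption:

```shell
# Elapsed time ([[dd-]hh:]mm:ss) of the process whose PID is in the lock file.
ps -o etime= -p "$(cat /usr/local/nagios/var/nagios.lock)"
```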

How to check last access ClearCase vob?

Can anyone help me find the date and time when ClearCase (UCM) VOBs were last accessed?
I remember using cleartool lshistory to check the last events date occurred on a vob.
Something like:
cleartool lshis -fmt "%Xn\t%Sd\t%e\t%h\t%u \n" -since 01-Oct-2015 -all <vobname>| grep -v lock | head -1 | grep -o '20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]'
That gives the events from the last six months (like "create version", "create branch", ...).
If there are none, the VOB has not been accessed recently (and I then consider archiving it).
This applies to any VOB (UCM or non-UCM).
'lshistory' will certainly give you the most recent change to the PVOB. If you are interested in the last-accessed time, you can look at the DB files for the PVOB. For instance,
% ls -ltur <pathname_to_VOB_storage_directory>/db
That sorts the files by last-accessed time; the latest of them, listed last because of the '-tur' flags, should carry a time close to the last time the PVOB was accessed. For example:
-rw-r--r-- 1 vob_owner vob_group 94830592 Mar 28 2016 vob_db.d05
This PVOB was last accessed on March 28, 2016.
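For example, to see only the newest entry (the VOB storage path here is hypothetical):

```shell
# The -tur sort puts the most recently accessed db file last; show just it.
ls -ltur /vobstore/myvob.vbs/db | tail -n 1
```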

fatal: Unsupported command: Users.Ref "KPLUS"

I'm migrating several Perforce projects to Git. One is failing though at 18% of the process with:
fatal: Unsupported command: Users.Ref "KPLUS"
It looks like git fast-import is trying to execute text from the file that should instead have been treated as data (I think).
The fast-import crash report shows me
fast-import crash report:
fast-import process: 28327
parent process : 28325
at Fri Sep 11 14:34:26 2015
fatal: Unsupported command: Users.Ref "KPLUS"
Most Recent Commands Before Crash
---------------------------------
....
....
commit refs/remotes/p4/master
committer USERNAME <EMAIL> 1175609377 +0100
data <<EOT
* Users.Ref "KPLUS"
Active Branch LRU
-----------------
active_branches = 1 cur, 5 max
pos clock name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1) 714 refs/remotes/p4/master
Inactive Branches
-----------------
refs/remotes/p4/master:
status : active loaded
tip commit : 307170cc21264c58ab1943c16f1d2943c1a44f45
old tree : 2f45d5c6d9cbe56e5f335f92b21316ad272f3504
cur tree : 2f45d5c6d9cbe56e5f335f92b21316ad272f3504
commit clock: 714
last pack : 0
Marks
-----
-------------------
END OF CRASH REPORT
The text is in an XML file that doesn't seem to be well formatted, but I would assume this shouldn't matter.
Found the cause in the commit messages. A message contained lines reading "EOT", which the git-p4 script interpreted as the end of the data block; all subsequent lines were then interpreted as fast-import commands. Changing the git-p4 script to use EOM instead of EOT as the delimiter solved the issue.
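The mechanism is the same as a shell here-document whose delimiter shows up inside the text: the data block ends at the first line matching the delimiter, and everything after it gets parsed as commands. A plain-shell sketch (not git-p4 itself) of why switching the delimiter helps:

```shell
# A here-doc ends at the first line that exactly matches its delimiter.
# With "EOT" as the delimiter, a commit message containing a bare EOT line
# would be cut short there; a delimiter like "EOM" lets it pass through.
msg=$(cat <<EOM
First line of the commit message.
EOT
This line survives because the delimiter is EOM, not EOT.
EOM
)
printf '%s\n' "$msg"
```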

Import old apache access logs to webalizer - ignoring records

I installed webalizer on my Apache 2 web server yesterday and ran into the problem that none of the old access logs are used. The directory listing looks like this:
/var/log/apache2/
access.log
access.log1
access.log.10.gz
access.log.11.gz
...
How can I import all my files at once?
I tried several things, but it kept telling me that the records were ignored.
Hope someone can help. Thanks!
I ran into the same problem. I had just installed webalizer, and changed it to incremental mode (here are the relevant entries from my /etc/webalizer/webalizer.conf):
LogFile /var/log/apache2/access.log.1
OutputDir /var/www/htdocs/w
Incremental yes
IncrementalName webalizer.current
And then I ran webalizer by hand, which initialized the non-gz files in my logs directory. After that, any attempt to manually import an older gz logfile (by running webalizer /var/log/apache2/access.log.2.gz for instance) resulted in all of the entries being ignored.
I suspect this is because the entries found in the gz logs were older than the last import - I had to delete my webalizer.current file (really I cleared the whole dir - either way should work). Finally, in reverse order (oldest first), I could import the old gz files one at a time:
bhs128@home:~$ cd /var/log/apache2
bhs128@home:/var/log/apache2$ sudo rm -rf /var/www/htdocs/w/*
bhs128@home:/var/log/apache2$ ls -1t /var/log/apache2/access.log*gz | grep -o '[0-9]*' | tail -n1
52
bhs128@home:/var/log/apache2$ for i in {52..2}; do webalizer /var/log/apache2/access.log.$i.gz; done
I just had the same problem, and I took a look into the webalizer.current file:
$ head -n 2 webalizer.current
# Webalizer V2.21-02 Incremental Data - 11/05/2019 22:29:02
2019 11 5 22 29 2
The second line seems to contain the timestamp of the last run, so I just changed the year to 2018. After that, I was able to import older log files than the last imported ones, without having to delete all the data first.
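A sketch of that edit as a one-liner; the file format is undocumented, so back the file up first, and note the year values here are just the ones from the example above:

```shell
# Back up, then rewind the year recorded on line 2 from 2019 to 2018.
cp webalizer.current webalizer.current.bak
sed -i '2s/^2019 /2018 /' webalizer.current
```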

copy SVN modified files including directory to a another directory

I have a list of files in my current working copy that have been modified locally. There are about 50 files that have been changed.
I am using the following command to copy files that have been modified in Subversion to a folder called /backup. Is there a way to do this while maintaining the directories they are in? It would do something similar to exporting an SVN diff of files. For example, if I changed a file called /usr/lib/SPL/RFC.php, then it would also copy the usr/lib/SPL directory into backup.
cp `svn st | ack '^M' | cut -b 8-` backup
It looks strange, but it is really easy to copy files with tar. E.g.
tar -cf - $( svn st | ack '^M' | cut -b 8- ) |
tar -C /backup -xf -
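If GNU coreutils is available, cp --parents can do the same directory-preserving copy without tar (the awk-based selection of modified files is an equivalent of the ack/cut pipeline from the question, good for simple statuses):

```shell
# Copy each locally modified file, recreating its directory path under /backup.
svn st | awk '/^M/ {print $2}' | xargs -d '\n' cp --parents -t /backup
```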
Why not create a patch of your changes? That way you have one file containing all of your changes, which you can timestamp in the name - something like 2012-05-28-17-30-00-UnitTestChanges.patch, one per day.
Then you can roll up your changes to a fresh checkout once you're ready, and then commit them.
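A sketch of that patch workflow; the paths and file names are only suggestions, and svn patch requires Subversion 1.7 or later:

```shell
# Save all local modifications as one timestamped unified diff.
svn diff > "$(date +%Y-%m-%d-%H-%M-%S)-UnitTestChanges.patch"

# Later: apply it to a fresh checkout and commit from there.
cd ../fresh-checkout
svn patch ../working-copy/2012-05-28-17-30-00-UnitTestChanges.patch
svn commit -m "Roll up local changes"
```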
FYI: Subversion 1.8 should have checkpointing / shelving (which is what you seem to want to do), but that's a long way off, and might only be added in Subversion 1.9.
