I'm making use of an open source project that is changing quite frequently. It is necessary for me to always have the latest version with all changes and bug fixes.
The source code has been adjusted to make it do what I need, so it now contains my own code as well. Whenever something changes, I currently read the changelog or compare files manually and then copy and paste everything into my own files. This is quite time consuming.
So now I was thinking about using a different approach:
1) Instead of long code snippets, insert only function calls, and keep all of the functions in a separate file. Add this file to the make system.
2) If the source code changes, download it and re-insert all the changes automatically.
3) Recompile, done.
This way I can compare the old and new (untouched) versions of the original source code and see what has changed between the state of the code I used and the new one.
My question is for step 2:
Line numbers might change if additional code is added. How can I find the right positions to inject my own functions?
Do as Jonathan says: use source control.
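For example, one way to set this up with Mercurial (a sketch only; directory and branch names are arbitrary, and any distributed VCS would do):

hg init project && cd project
# unpack the pristine upstream release here, then record it on default:
hg addremove
hg commit -m "import upstream 1.0"
# keep all your own modifications on a named branch:
hg branch local
# ... insert your function calls, add your separate file to the make system ...
hg commit -m "my changes on top of upstream 1.0"

When a new upstream version appears, commit it on default and merge:

hg update default
# unpack upstream 1.1 over the working copy, then:
hg addremove
hg commit -m "import upstream 1.1"
hg update local
hg merge default      # the VCS finds the right positions for you
hg commit -m "merge upstream 1.1"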
I've recently been asked by my supervisor to prepare a solution in which multiple pieces of logic throughout our application can be reverted to an earlier version of the code while the application is live. In effect, I need to prepare something like a flag or an indicator that can be dynamically activated to switch all instances of the new code in our application back to the old version.
The new logic was prepared by a new member of our team, and we are concerned about memory leaks that may emerge once the code goes to production, so we want a solution in place that will allow us to turn those changes off and return to the original code if necessary.
if (new_code == ON)
{
    /* new logic */
}
else
{
    /* old logic */
}
This project was originally meant to help get rid of build and compile warnings during our build process, so it affects code ranging from function arguments to variable declarations; there's no one single type of code that will be affected. We are running on a Tuxedo stack, but implementing a Tuxedo config file to effect this change isn't recommended, according to one of our senior developers. I'm not aware of a similar solution, though.
Any ideas? Thanks!
Would it work? Sure. Is it a good idea? No. You now have the risk of the new code, plus the risk of bugs in your switch code, plus the risk of what happens if you switch from one to the other in mid-run. You shouldn't be doing this; it's far more likely to cause trouble than just deploying the changes directly.
What you should do, if you're really concerned about it, is not deploy it. Put it through additional testing until you're comfortable with it. Then, when you do deploy it, have a plan to roll back to a previous version without these changes if something slips through testing.
Call the functions through function pointers.
Make an API that switches each pointer between the old and the new implementation, depending on your need.
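A minimal C sketch of that idea (the names process, use_new_logic and the two implementations are hypothetical):

#include <stdio.h>

/* old and new implementations share one signature */
static void process_old(int x) { printf("old: %d\n", x); }
static void process_new(int x) { printf("new: %d\n", x); }

/* the rest of the application only ever calls through this pointer */
static void (*process)(int) = process_old;

/* small API to flip between implementations at runtime */
void use_new_logic(int enable)
{
    process = enable ? process_new : process_old;
}

int main(void)
{
    process(1);        /* runs the old logic */
    use_new_logic(1);
    process(2);        /* runs the new logic */
    return 0;
}

Since the switch happens through a single assignment, you avoid sprinkling if/else blocks through the code, but the mid-run switching risk mentioned above still applies.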
I have the following question:
The calendar text file and binary file should have a name with a fixed part and a variable part. Use the time function (in time.h) or some other automatic mechanism to make sure that, when you write the files back out after updating the calendar, you do not overwrite the files you read in, but write a new version of the file that is clearly more recent.
For context: I have a program that manages a calendar.
Is it possible to create a file name with a fixed part and a variable part using the time.h library?
Thank you in advance!
Your question is vague, so the answer can only be equally vague.
From your specification, I guess you need filenames like "calendar-YYYYMMDDhhmmss.bin" and "calendar-YYYYMMDDhhmmss.txt".
When you "man time.h", you can see, that the time-"library" provides all these data. At the bottom of the man-page you see some related functions like "time()" and "strftime()", which help you to get a timestamp and to format a time to your needs.
If you "http://www.whathaveyoutried.com" and are stuck again, please update your question, and we will help you further.
EDIT (in reply to the comment):
That depends on whether you want a lot of files, each containing one calendar, where the most recently dated file is the current calendar and the older ones are backups; or one calendar file with a new section for each calendar, in which case you have to define (for yourself) how to organise the current and historical sections.
As a matter of fact, I would prefer the first solution: each time you update your calendar, you call fopen(path_filename_timestamp_txt, "w"). In the second case you would call fopen(path_filename_txt, "a") and write your timestamp as a section header; a sketch of that variant follows.
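Something like this, assuming a timestamp string built as shown earlier (the file name and header format are made up):

#include <stdio.h>

void append_section(const char *timestamp)
{
    FILE *f = fopen("calendar.txt", "a");       /* one file, appended to */
    if (f != NULL) {
        fprintf(f, "=== %s ===\n", timestamp);  /* section header */
        /* ... write the calendar entries here ... */
        fclose(f);
    }
}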
Please show us, what you have done so far! (as short as possible, according to http://sscce.org/)
Is there a way to do some checks before allowing a merge in Mercurial?
I have found the pre-update hook and have a script that runs before an update is allowed, by adding the following to ~/.hg/hgrc:
[hooks]
pre-update = ~/hg_pre_update.sh
But I'd like to run the check before allowing a merge as well, and currently it just allows the merge to go through without running my checks.
Background
In case there are alternative ways to solve the problem...
We have been having a number of problems with 'lost' edits under Mercurial. I've now tracked most of them down to the same underlying cause: someone has a vim edit session open while either they or someone else does an hg update or merge. The editor warns that the file has changed externally; the user ignores the warning and saves their changes.
When these changes are committed, there is nothing controversial as far as Mercurial is concerned: the user has simply reverted all the changes brought in with the last update and put in their own.
Some time later, we notice the code has gone walkabout. Cue assorted insults flung Mercurial's way...
Set vim to autoreload changes if no local changes were made (otherwise ask, or force a merge).
That's how I avoid such issues in any editor...
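In vim that is roughly the following (a sketch for ~/.vimrc; note that autoread only reloads buffers with no unsaved local edits, and only when vim checks timestamps, e.g. after running an external command or :checktime):

" reload files changed outside vim when the buffer has no unsaved edits
set autoread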
Sorry, I just worked out there is a pre-merge hook that works just the same as pre-update. I tried it before asking the question, but looking at my hgrc now I realise I pointed that hook at ~/hg_pre_merge.sh, which doesn't exist.
I can't find the existence of pre-merge documented anywhere, but I'm still feeling like a bit of a muppet now.
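For reference, the working hooks section then looks like this (assuming the script exists and is executable; Mercurial runs a pre-<command> hook before any command of that name):

[hooks]
pre-update = ~/hg_pre_update.sh
pre-merge = ~/hg_pre_merge.sh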
I'm using the C API to interact with Lotus Notes and Lotus Domino, and I'm running into issues when reading existing Notes out of an NSF. Specifically, reading TYPE_OBJECT fields, and even more specifically $FILE fields (though I'm sure all TYPE_OBJECT fields would fail if I had any others).
I'm using NSFItemInfo to get the summary data on the $FILE field (so I don't need the saved file itself, I need information about it, such as its size, name, etc.).
If I create the Note in memory, commit it, then read the $FILE field, everything works. If I change my unit test to read an existing Note (instead of creating it in memory), Lotus panics with an "Invalid Handle Lookup" message.
So I'm left feeling like there is something different about loading those fields when I create a Note from scratch vs. opening one already created. Even reading in an already-created Note that my own code created gives me the same error, so I think I'm creating the Notes correctly.
I've explored NSFNoteOpenExt's flag options and have attempted to open the Note with every possible flag described in OPEN_xxx, and I always get the panics, except when I open the Note with OPEN_ABSTRACT or OPEN_NOOBJECTS. The reason those don't error, though, is that they open the Note without the $FILE fields at all, so when I check whether the field exists I get false and the code that reads TYPE_OBJECT fields is never executed.
Any ideas what I'm missing?
I'd provide code, but I'm actually using .NET interop to accomplish all this and the code is spread across multiple files, etc. If you have any questions, please ask and I'll provide as much detail as I can.
Craig
I figured out the issue. It came from the fact that when using interop from C#, you can't call C macros. OSLockBlock is defined as a macro to another macro to a function. Essentially, it locks the BlockId.Pool pointer, then increments the resulting pointer by BlockId.BlockHandle. I was misinterpreting that macro logic as: first increment BlockId.Pool by BlockId.BlockHandle, then lock.
Essentially:
Lock(BlockId.Pool) + BlockId.BlockHandle vs. Lock(BlockId.Pool + BlockId.BlockHandle)
It's interesting that the latter would work when creating a new note with new attachments. I finally figured that out as well: BlockId.BlockHandle was always zero in that case, so that's why it always worked.
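In C terms, the difference amounts to the following (a sketch of the pointer arithmetic described above, not the actual macro definition from the Notes headers; OSLockObject locks a handle and returns a pointer to the start of the underlying memory):

/* what OSLockBlock actually does: lock the pool, then add the offset */
ptr = (char *)OSLockObject(blockId.Pool) + blockId.BlockHandle;

/* what my interop code did: add the offset to the handle, then lock;
   it only "worked" because BlockHandle was zero for freshly created notes */
ptr = (char *)OSLockObject(blockId.Pool + blockId.BlockHandle);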
Over the last few years I've encountered "ghost files" in NetBeans, but I didn't have proof of it, so I had to live with it, and when I tried to explain the situation it was hard to believe. Now I have proof, and it's a show stopper. Is there any fix for it?
It goes like this: I have a Java class that I've been using for many years, sort of a tool, and I add a bit to it as I gain experience. But once in a while, after I added a new method and used it in another class, NetBeans couldn't recognize it; it seemed to me NetBeans was still looking at an old copy of the class where the newly added method didn't exist. And yet if I copied this updated class to another project, the new method worked fine and NetBeans could find it. In NB 6.7 it just acted as if the class were frozen in time and any new additions to it wouldn't be recognized; now, trying it in NB 6.9, I could catch the "ghost"!
It happened by accident. Yesterday, after I updated the class, I tried to use the new method in another class in the same project; the red flag went up, it couldn't find the new method. So I moused over the new method call and right-clicked on it: "Navigate" => "Go to source", and bang, the ghost showed up! If I do this in NB 6.7, it just rings a bell as if to tell me it couldn't find it. But NB 6.9 goes to the "source", which is not my Java source file [Get_Time.java] but another, generated file. I moused over the opened "ghost" file's name in the editor: it was "C:\Users\USER\.netbeans\6.9\var\cache\index\s117\java\14\gensrc\Get_Time.java (read-only)". The content looked like a skeleton of my source file Get_Time.java, but it was definitely different, and I am pretty sure it's this "ghost file" that's been causing the problems.
During the course of development I occasionally changed the system time to test different functions in the class. Could this have caused the ghost file to get out of sync? If I change the current time to 2016 and modify the source file, then NB might record the file as last changed in 2016, and if I change the time back to 2011 and add a new function, it won't accept it, because it might compare the dates of the different versions of the source file and stick with the "latest" timestamp?!
I wish NB never kept ghost files. "Always use the actual source file" would avoid a lot of such problems. I did try to delete that ghost file, but the next time I compiled it was generated again. I don't want to delete too much content from "C:\Users\USER\.netbeans\6.9...": it might mess up my NB settings. Anyhow, it's now a show stopper; I can't add more changes to the class, it's frozen in time. What's the fix?
Just some suggestions, as I got stung by this problem before.
Did you build a jar and add a dependency on this jar manually?
e.g.
1) project A is packaged into A.jar with a class Time.
2) project B depends on A.jar and project A
3) Time.java in project A is changed
4) project B will not see the changes, as it will always read from the A.jar built before the change happened.
Try deleting NetBeans' cache (the ~/.netbeans/6.9/var/cache/index/ directory) when you go back to the future and forward to the past. NetBeans is probably getting a bit confused by the file timestamps. Since hopping around dates like that is somewhat of an edge case, I doubt NetBeans would give it high priority to fix/handle.