Global flag to revert to original logic in C

I've recently been asked by my supervisor to prepare a solution in which multiple pieces of logic throughout our application can be reverted to an earlier version of the code while the application is live. In effect, I need something like a flag or indicator that can be dynamically activated to switch all affected code in our application from the new version back to the old one.
The new logic was prepared by a new member of our team, and we are concerned about memory leaks that may emerge once the code goes to production, so we want a solution in place that will allow us to turn those changes off and return to the original code if necessary.
if (new_code == ON)
{
    /* new logic */
}
else
{
    /* old logic */
}
This project was originally meant to help get rid of build and compile warnings during our build process, so it affects code ranging from function arguments to variable declarations; there is no single type of code that will be affected. We are running on a Tuxedo stack, but implementing a Tuxedo config file to effect this change isn't recommended, according to one of our senior developers. I'm not aware of a similar solution, though.
Any ideas? Thanks!

Would it work? Sure. Is it a good idea? No. You now have the risk of the new code, plus the risk of bugs in your switch code, plus the risk of what happens if you switch from one to the other mid-run. You shouldn't be doing this; it's far more likely to cause trouble than just deploying the changes directly.
What you should do: if you're really concerned about it, don't deploy it. Put it through additional testing until you're comfortable with it. Then, when you do deploy it, have a plan to roll back to a previous version without these changes if something slips through testing.

Call the function using function pointers.
Make an API to change the function pointer to the old or new function, depending on your need.
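For illustration, a minimal sketch of this approach (the names process_order and logic_select, and the stdio output, are hypothetical, not from the original application):
#include <stdio.h>

static void process_order_old(void) { puts("old logic"); }
static void process_order_new(void) { puts("new logic"); }

/* The rest of the application calls through this single pointer. */
static void (*process_order)(void) = process_order_new;

/* API to switch implementations at runtime, e.g. from an admin command. */
void logic_select(int use_new)
{
    process_order = use_new ? process_order_new : process_order_old;
}

int main(void)
{
    process_order();  /* runs the new logic */
    logic_select(0);  /* revert */
    process_order();  /* runs the old logic */
    return 0;
}
Note that if the flip can happen concurrently with calls (another thread, a signal), the pointer needs atomic access (e.g. C11 _Atomic), since a plain pointer write is not guaranteed to be observed safely mid-run.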


Is it always necessary to add all used functions within useEffect to its dependencies array?

I am currently building a Tetris game in React just to practice hooks (I used to develop with class components back in the day, left React for a while, and yesterday decided to use it once again).
The game is working perfectly well, and it behaves as expected in each and every situation; however, there is a constant warning about using a function within useEffect without it being a dependency.
To clarify: I have a useEffect that does nothing but call an update function, and it depends only on the x and y coordinates of the moving Tetris block. The update function updates the state of the board whenever the position of the shape changes.
I know that React re-creates functions on each and every render, but giving the update function a useCallback would cause it to be re-created endlessly (as then the compiler would ask me to make it dependent on the board state, so each time it updates the board, it would be forced to be re-created once again), and this causes an infinite loop of rendering.
Is it really necessary to put every function used within useEffect into its dependencies, even if said function only causes a visual rendering to show the current state of the game?
You can always put the whole function inside useEffect, and then you won't get this error or problem.
Someone just asked a similar question, and my answer is there: React won't let me use `useEffect` in a completely reasonable way
The answer to your question is: no, you don't have to. If the warning doesn't bother you and your code is still working, then move on.
Otherwise, you can try to find a way to disable this linter rule, or ask yourself why not meet all the dependency requirements.
It's best to think of the effect dependencies as "correct" or not, and not try to tailor them to achieve some specific behavior. (The vast majority of the time anyway)
This means that if an effect uses a value that could possibly change, then it's declared as a dependency. Sometimes your app works fine even if your dependencies are "incorrect", because elsewhere you can guarantee a value is constant. But this is more about maintaining the application than having it work currently.
Later, if you change how one of those dependencies works in a refactoring, the effect may now need to re-run and doesn't, leading to strange, hard-to-diagnose bugs such as stale values whose origin you can't trace.
If this leads to cumbersome hooks with huge dependency arrays that infinitely loop, that's an indication the logic is messy and needs to be refactored: split it into multiple effects, rethink how the data flows through your hooks, etc.
It's hard to advise specifically without seeing your code, but in your case it seems you could make the "update function" take arguments instead, which means it could then be completely static. Or, if you did use a useCallback that depends on x and y, it would regenerate when those change; but you probably have a lot of logic to run when those change, so that's probably expected.
Summary:
I would advise you to take the warnings of eslint-plugin-react-hooks seriously and clean your code until it passes. In the long run, in a large and complex application, it really helps keep things clean.
There are very rare exceptions that come up when doing non-standard things, but it's worth trying your best on this before resorting to ignoring those warnings.

DllHost.exe memory leak when getting PrintTicket

I have an x86 application running on Windows 10 (64-bit environment).
One of the app's features is to generate a lot of reports, so there is a lot of printing involved.
However, I noticed that every time I call DefaultPrintTicket on the print queue, the dllhost process (COM Surrogate) grows in memory.
I managed to isolate the code responsible and moved it to a test WPF app. When a button is clicked, this code is fired:
var localPrintServer = new LocalPrintServer();
var oneNotePrintQueue = localPrintServer.GetPrintQueues().FirstOrDefault(p => p.Description.Contains(OneNote));
var printTicket = oneNotePrintQueue?.DefaultPrintTicket;
The printing queue is irrelevant, as I tried them all and the problem remains.
I am aware that this might be a duplicate of: PrintTicket DllHost.exe Memory Climbs
However, the solution provided there does not work, as PrintTicket is not an IDisposable object.
I also tried some tweaks in the registry (i.e. finding AppId AA0B85DA-FDDF-4272-8D1D-FF9B966D75B0 and removing "AccessPermission", "LaunchPermission" and "RunAs") with no result.
I cannot rebuild the app as AnyCpu and I would like to avoid creating a separate 64bit process just for printing as it would be difficult to send a report generated in one app to another.
Any suggestions are greatly appreciated.
It seems that the topic is difficult.
Just want to share the solution I went with in case anyone else has the same problem.
In the end I created a separate x64 app which handles the printing.
Initially I wanted to go with a WCF service; however, I ran into problems with serializing FixedDocuments and PrintQueue. Hence the separate app.
The solution is far from perfect and in my opinion is not nice at all. However, it solved the memory leak issue.

Short lived DbContext in WPF application reasonable?

In his book on DbContext, @RowanMiller shows how to use the DbSet.Local property to avoid 1) unnecessary roundtrips to the database and 2) passing around collections (created with e.g. ToList()) in the application (page 24). I then tried to follow this approach. However, I noticed that from one using {} block to another, the DbSet.Local property becomes empty:
ObservableCollection<Destination> destinationsList;
using (var context = new BAContext())
{
var query = from d in context.Destinations …;
query.Load();
destinationsList = context.Destinations.Local; //Nonzero here.
}
//Do stuff with destinationsList
using (var context = new BAContext())
{
//context.Destinations.Local zero here again;
//So no way of getting the in-memory data from the previous using- block here?
//Do I have to do another roundtrip to the database here to get the same data I wanted
//to cache locally???
}
Then, what is the point of page 24? How can I avoid passing around my collections if DbSet.Local is only usable inside the using block? Furthermore, how can I benefit from change tracking if these short-lived context instances don't hand over any cached data to each other under the hood? So, if the contexts should be short-lived to free resources such as connections, do I have to give up caching? I.e., I can't have both at the same time (short-lived connections but a long-lived cache)? So my only option would be to store the results returned by the query in my own variables, which is exactly what is discouraged in the motivation on page 24?
I am developing a WPF application which maybe will also become multi-tiered in the future, involving WCF. I know Julia has an example of this in her book, but I currently don’t have access to it. I found several others on the web, e.g. http://msdn.microsoft.com/en-us/magazine/cc700340.aspx (old ObjectContext, but good in explaining the inter-layer-collaborations). There, a long-lived context is used (although the disadvantages are mentioned, but no solution to these provided).
It's not only the single Destinations.Local that gets lost; as you surely know, all other entities fetched by the query are, too.
[Edit]:
After some more reading in Julia Lerman's book, it seems to boil down to this: EF does not have second-level caching by default; with some (considerable, I think) effort, however, one can add third-party caching solutions, as is also described in the book and in various articles on MSDN, CodeProject, etc.
I would have appreciated it if this had been mentioned in the section about DbSet.Local in the DbContext book: that it is in fact a first-level cache which is destroyed when the using {} block ends (just my proposal to make it more transparent to readers). After the first reading, I had the impression that DbSet.Local would always return the same reference (singleton-style) in the second using {} block, despite the new DbContext instance.
But I am still unsure whether the second-level cache is the way to go for my WPF application (Julia mentions the second-level cache in the context of distributed applications)? Or is the way to go to load the aggregate root instances (DDD, Eric Evans) of my domain model into memory with one or a few queries in a using {} block, dispose the DbContext, and hold only the references to the aggregate instances, this way avoiding a long-lived context? It would be great if you could help me with this decision.
http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
http://www.codeproject.com/Articles/435142/Entity-Framework-Second-Level-Caching-with-DbConte
http://blog.3d-logic.com/2012/03/31/using-tracing-and-caching-provider-wrappers-with-codefirst/
The Local property provides a “local view of all Added, Unchanged, and Modified entities in this set”. Like all change tracking it is specific to the context you are currently using.
The DB Context is a workspace for loading data and preparing changes.
If two users were to make changes at the same time, they must not see each other's changes before those are saved. One of them may discard their prepared changes, which would suddenly lead to problems for the other user as well.
A DbContext should indeed be short-lived, but it may live longer than super-short when necessary. Also consider that you may not save resources by keeping it short-lived if you do not load and discard data but only add changes you will save. But it is not only about resources; it is also about the DB state potentially changing while the DbContext is still active and has data loaded, which may be important to keep in mind for longer-living contexts.
If you do not yet know all the related changes you want to save to the database at once, then I suggest you not use the DbContext to store your changes in memory, but rather a data structure in your code.
You can of course use entity objects for doing so without an active DbContext. This makes sense if you do not have another appropriate data class for it and do not want to create one, or if you decide preparing the changes in them makes more sense. You can then use DbSet.Attach to attach the entities to a DbContext for saving the changes when you are ready.

C, design: removing global objects

I'm creating a small Avida-style life simulation. I started out with a very basic, everything-is-global 600-line program in a single file to test some ideas, and now I want to create a real design.
Among other things, I had a global configuration object that every other function got something out of. Now I must localize that object and pass pointers around. Thing is, almost everyone needs this object. I've thought of three possible solutions:
a) Keep the configuration object global (simplest, though not really a solution).
b) Store pointers everywhere they are needed (easy enough, though a waste of memory, since some small plain-old-data structures would need it).
c) Create factories for the POD types that need access to options, and have the factory perform all operations on them.
Of my ideas, only (c) sounds logical, but I don't want to needlessly complicate the structure. What would you guys do?
I'm fine with new ideas, and will provide whatever information about the program you want to know.
Thanks in advance!
I have to agree with @Carl Norum: there is nothing wrong with the global config setup you have now. You say that everybody "got something out of" it. As you know, the problem with globals comes when everybody writes into them. In your case, the config info truly is needed globally, so it deserves to be global.
If you want to make it a little more decoupled and protected -- a little less global-ish -- then why not add some read/write access routines, like the sketch below.
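A minimal sketch of such accessors; the config fields here are hypothetical, loosely themed on a life simulation:
#include <stddef.h>

typedef struct {
    double mutation_rate;
    size_t world_width;
    size_t world_height;
} config_t;

/* The global lives here, private to this translation unit. */
static config_t g_config = { 0.01, 64, 64 };

/* Readers everywhere; writers only where mutation is legitimate. */
double config_mutation_rate(void)         { return g_config.mutation_rate; }
size_t config_world_width(void)           { return g_config.world_width; }
size_t config_world_height(void)          { return g_config.world_height; }
void   config_set_mutation_rate(double r) { g_config.mutation_rate = r; }
The point is that the symbol itself is no longer visible outside the file, so every read and write goes through a routine you can audit, log, or lock later.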
See, storing pointers everywhere isn't going to really solve the problem: it will only add a layer of indirection that will merely disguise or camouflage what are, in reality, the global accesses that are making you nervous. And that extra layer of indirection will add juuuuust enough room for juuuuust a teeny-weeny little bug to creep in.
So, bottom line: if stuff is naturally global then make it global and don't worry about the usual widespread received wisdom that's mostly correct but might not be the right thing in your application. To always be bound by the rules/propaganda that CS teachers put out there is, imo, the perfect example of a foolish consistency.
Global variables are awesome. Spend your time actually getting something done instead of refactoring for no reason. Every company I have worked at uses them heavily.
Ask yourself if you're actually gaining anything by moving it to an object you're just passing around everywhere. You might as well save yourself the extra complexity.
Go for B, unless profiling proves it to be a problem. On most machines, the memory required to store a pointer is very, very trivial.

How to make your embedded C code immune to requirement changes without adding too much overhead and complexity?

In many embedded applications there is a tradeoff between making the code very efficient or isolating the code from the specific system configuration to be immune to changing requirements.
What kinds of C constructs do you usually employ to achieve the best of both worlds (flexibility and reconfigurability without losing efficiency)?
If you have the time, please read on to see exactly what I am talking about.
When I was developing embedded SW for airbag controllers, we had the problem that we had to change some parts of the code every time the customer changed their mind regarding the specific requirements. For example, the combination of conditions and events that would trigger the airbag deployment changed every couple weeks during development. We hated to change that piece of code so often.
At that time, I attended the Embedded Systems Conference and heard a brilliant presentation by Stephen Mellor called "Coping with changing requirements". You can read the paper here (they make you sign up, but it's free).
The main idea of this was to implement the core behavior in your code but configure the specific details in the form of data. The data is something you can change easily and it can even be programmable in EEPROM or a different section of flash.
This idea sounded great to solve our problem. I shared this with my colleague and we immediately started reworking some of the SW modules.
When trying to use this idea in our coding, we encountered some difficulty in the actual implementation. Our code constructs got terribly heavy and complex for a constrained embedded system.
To illustrate this, I will elaborate on the example I mentioned above. Instead of having a bunch of if-statements to decide whether the combination of inputs was in a state that required an airbag deployment, we changed to a big table of tables. Some of the conditions were not trivial, so we used a lot of function pointers to be able to call lots of little helper functions which somehow resolved some of the conditions, roughly the shape sketched below. We had several levels of indirection and everything became hard to understand. To make a long story short, we ended up using a lot of memory, runtime and code complexity. Debugging the thing was not straightforward either. The boss made us change some things back because the modules were getting too heavy (and he was maybe right!).
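(For a sense of the shape this took, here is a heavily simplified, hypothetical sketch of such a condition table; all names are made up:)
#include <stddef.h>

typedef int (*condition_fn)(void);  /* returns nonzero when satisfied */

/* Little helper functions, each resolving one condition. */
static int front_impact_detected(void) { /* read sensors */ return 0; }
static int speed_above_threshold(void) { /* read sensors */ return 0; }

/* The data-driven part: deployment requires every entry to hold.
   In the real system the table contents came from configuration data. */
static const condition_fn deploy_conditions[] = {
    front_impact_detected,
    speed_above_threshold,
};

static int should_deploy(void)
{
    size_t i;
    for (i = 0; i < sizeof deploy_conditions / sizeof deploy_conditions[0]; ++i)
        if (!deploy_conditions[i]())
            return 0;
    return 1;
}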
PS: There is a similar question in SO but it looks like the focus is different. Adapting to meet changing business requirements?
As another point of view on changing requirements ... requirements go into building the code. So why not take a meta-approach to this:
Separate out parts of the program that are likely to change
Create a script that will glue parts of source together
This way you are maintaining compatible logic-building blocks in C ... and then sticking those compatible parts together at the end:
/* {conditions_for_airbag_placeholder} */
if (require_deployment)
    trigger_gas_release();
Then maintain independent conditions:
/* VAG Condition */
if (poll_vag_collision_event())
    require_deployment = 1;
and another:
/* Ford Conditions */
if (ford_interrupt(FRONT_NEARSIDE_COLLISION))
    require_deployment = 1;
Your build script could look like:
BUILD airbag_deployment_logic.c WITH vag_events
TEST airbag_deployment_blob WITH vag_event_emitter
Thinking out loud, really. This way you get a tight binary blob without reading in config.
This is sort of like using overlays http://en.wikipedia.org/wiki/Overlay_(programming) but doing it at compile-time.
Our system is subdivided into many components, with exposed configuration and test points. There is a configuration file that is read at start-up that actually helps us instantiate components, attach them to each other, and configure their behavior.
It's very OO-like, in C, with the occasional hack to implement something like inheritance.
In the defense/avionics world software upgrades are very strictly controlled, and you can't just upgrade SW to fix issues... however, for some bizarre reason you can update a configuration file without a major fight. So it's been darn useful for us to be able to specify a lot of our implementation in those configuration files.
There is no magic, just good separation of concerns when designing the system and a bit of foresight on the part of the developers.
What are you trying to save, exactly? The effort of code re-work? The red tape of a software version release?
It's possible that changing the code is reasonably straightforward, and quite possibly easier than changing data in tables. Moving your often-changing logic from code to data is only helpful if, for some reason, it's less effort to modify data than code. That might be true if the changes are better expressed in a data form (e.g. numeric parameters stored in EEPROM). Or it might be true if the customer's requests make it necessary to release a new version of the software, and a new software version is a costly procedure to build (lots of paperwork, or perhaps OTP chips burned by the chip maker).
Modularity is a very good principle for this sort of thing. It sounds as though you're already doing it to some degree. It's good to aim to isolate the often-changing code to as small an area as possible, and to keep the rest of the code ("helper" functions) separate (modular) and as stable as possible.
I don't make the code immune to requirements changes per se, but I always tag a section of code that implements a requirement by putting a unique string in a comment. With the requirements tags in place, I can easily search for that code when the requirement needs a change. This practice also satisfies a CMMI process.
For example, in the requirements document:
The following is a list of requirements related to the RST:
[RST001] Juliet SHALL start the RST with 5 minute delay when the ignition is turned OFF.
And in the code:
/* Delay for RST when ignition is turned off [RST001] */
#define IGN_OFF_RST_DELAY 5
...snip...
/* Start RST with designated delay [RST001] */
if (IS_ROMEO_ON())
{
rst_set_timer(IGN_OFF_RST_DELAY);
}
I suppose what you could do is to specify several valid behaviors based on a byte or word of data that you could fetch from EEPROM or an I/O port if necessary and then create generic code to handle all possible events described by those bytes.
For instance, if you had a byte that specified the requirements for releasing the airbag it could be something like:
Bit 0: Rear collision
Bit 1: Speed above 55mph (bonus points for generalizing the speed value!)
Bit 2: passenger in car
...
Etc
Then you pull in another byte that says which events actually happened and compare the two. If they're the same, execute your command; if not, don't.
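A minimal sketch of that scheme (bit assignments and names are hypothetical; trigger_gas_release is borrowed from the answer above):
#include <stdint.h>

#define EVT_REAR_COLLISION   (1u << 0)
#define EVT_SPEED_OVER_55    (1u << 1)
#define EVT_PASSENGER_IN_CAR (1u << 2)

extern void trigger_gas_release(void);

/* Required-event pattern, e.g. fetched from EEPROM at start-up. */
static uint8_t required_events = EVT_REAR_COLLISION | EVT_SPEED_OVER_55;

void check_deployment(uint8_t current_events)
{
    /* Execute the command only when the observed events match the
       configured requirement exactly. */
    if (current_events == required_events)
        trigger_gas_release();
}
In practice you might mask before comparing, i.e. (current_events & required_events) == required_events, so that irrelevant event bits don't block deployment.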
For adapting to changing requirements I would concentrate on making the code modular and easy to change, e.g. by using macros or inline functions for parameters which are likely to change.
W.r.t. a configuration which can be changed independently from the code, I would hope that the parameters which are reconfigurable are specified in the requirements, too. Especially for safety-critical stuff like airbag controllers.
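For instance, a minimal sketch of the macro/inline-function idea (names and values are hypothetical):
/* Parameters likely to change live in one place. */
#define DEPLOY_SPEED_THRESHOLD_MPH  55u
#define DEPLOY_DELAY_MS             10u

static inline int speed_exceeds_threshold(unsigned int speed_mph)
{
    return speed_mph > DEPLOY_SPEED_THRESHOLD_MPH;
}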
Hooking in a dynamic language can be a lifesaver, if you've got the memory and processor power for it.
Have the C talk to the hardware, and then pass up a known set of events to a language like Lua. Have the Lua script parse the event and call back to the appropriate C function(s).
Once you've got your C code running well, you won't have to touch it again unless the hardware changes. All of the business logic becomes part of the script, which in my opinion is a lot easier to create, modify and maintain.
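A minimal sketch of that split, assuming Lua 5.3+ and a hypothetical script file logic.lua that defines an on_event(event) function deciding when to call back into trigger_deployment:
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* C side: the only function the script may call back into. */
static int trigger_deployment(lua_State *L)
{
    (void)L;
    /* talk to the hardware here */
    return 0;  /* number of Lua return values */
}

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "trigger_deployment", trigger_deployment);

    /* Load the business logic from the script. */
    if (luaL_dofile(L, "logic.lua") != LUA_OK) {
        fprintf(stderr, "script error: %s\n", lua_tostring(L, -1));
        return 1;
    }

    /* Pass a known event up to the script's handler. */
    lua_getglobal(L, "on_event");
    lua_pushstring(L, "FRONT_NEARSIDE_COLLISION");
    if (lua_pcall(L, 1, 0, 0) != LUA_OK) {
        fprintf(stderr, "handler error: %s\n", lua_tostring(L, -1));
        return 1;
    }

    lua_close(L);
    return 0;
}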
