What is the usage of Context in the Strategy pattern?

I know how the Strategy pattern works, but I can't see the point of the Context. I think the Strategy
interface is enough. If I use a Context, the user needs to know the Context and the concrete strategy; if I don't use a Context, the user needs to know the Strategy interface and the concrete strategy. I see no difference between them. So why is the Context needed?
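For reference, here is the standard shape of the pattern as I understand it, as a minimal sketch (the sorting example and all names are hypothetical):

```java
// Strategy interface: the family of interchangeable algorithms.
interface SortStrategy {
    int[] sort(int[] data);
}

class QuickSort implements SortStrategy {
    public int[] sort(int[] data) {
        java.util.Arrays.sort(data); // stand-in implementation
        return data;
    }
}

// Context: holds a strategy and exposes a stable entry point, so
// calling code depends only on Sorter, not on concrete strategies.
class Sorter {
    private SortStrategy strategy;

    Sorter(SortStrategy strategy) {
        this.strategy = strategy;
    }

    void setStrategy(SortStrategy strategy) {
        this.strategy = strategy;   // swap algorithms at runtime
    }

    int[] sort(int[] data) {
        return strategy.sort(data); // delegate to the current strategy
    }
}
```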

Related

Is the React Context API suitable for large-scale applications?

I am planning to build a large React application which might contain hundreds of components, but I'm not sure which state management system to use: Redux or the Context API.
The Context API is built into React and doesn't need any third-party library. It is easy to implement and solves the problem of sharing state at different levels of the component tree.
But on the other hand, Redux is the industry standard and has support for middleware to perform async actions.
If I choose the Context API, how can we manage API calls with it? Also, do you think it is a good idea to use context for a large application where we might need state objects extensively?
The design benefit of Redux is that the action itself doesn't implement anything. An action is an indication that something happened (for example SAVE_PROFILE_CLICKED), but the action doesn't do anything itself (like connecting to an API, sending data, and saving the response in state). You can do this with the Context API, but the separation isn't enforced as strictly, and you won't have the Redux devtools. The pattern is called event store/sourcing. You could change the reducer and replay the events to see whether your changes work and produce a consistent state; testing is easier, extending is easier, logic is better isolated, and there are probably many more benefits to be had.
The design also separates writing to state (reducers), side effects (thunks), and reading from it (selectors). This pattern (write/read separation) is called CQRS: your query/selector is separated from your command/reducer. This gives you easier testing, isolation of logic, less chance of duplicate implementations, and probably many more benefits.
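To make those two ideas concrete, here is a minimal, language-neutral sketch (Java used for illustration; the event names and state shape are made up): events are inert records, a single reducer is the only writer, and selectors are the only readers.

```java
import java.util.ArrayList;
import java.util.List;

// An event only records that something happened; it has no behaviour.
class Event {
    final String type;
    final String payload;
    Event(String type, String payload) {
        this.type = type;
        this.payload = payload;
    }
}

class ProfileStore {
    private final List<Event> log = new ArrayList<>(); // the event store
    private String profileName = "";                   // derived state

    // Command side: the reducer is the only code that writes state.
    void dispatch(Event e) {
        log.add(e);
        if ("PROFILE_RENAMED".equals(e.type)) {
            profileName = e.payload;
        }
    }

    // Query side (CQRS): selectors only read derived state.
    String selectProfileName() {
        return profileName;
    }

    // Replaying the log through a (possibly changed) reducer rebuilds
    // a consistent state -- the replay property described above.
    void replay() {
        List<Event> events = new ArrayList<>(log);
        log.clear();
        profileName = "";
        for (Event e : events) {
            dispatch(e);
        }
    }
}
```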
You can still make a complete mess of your project when using Redux and not fully understand it so using Redux does not guarantee anything.
If I choose Context API how can we manage API calls with it.
You can do it any way you like; the question is too general to answer.
Also do you think it is a good idea to use context for a large application where we might need state objects extensively.
As stated before, Redux is no guarantee that your project won't be a mess. It will give you the tools to implement certain patterns with more ease. Make sure you understand it and its patterns. Most example applications don't demonstrate why Redux is so powerful, because the problem they implement (a counter, a todo app) isn't complex enough to warrant using it. I can only advise you to write code that you're comfortable with and can understand.

If GraphQL and Relay are non-negotiable, does it follow that Redux is better left out? Or is there not such complete overlap in concerns?

I've read a lot of heated and contradictory information about this sort of subject and I have no dog in the fight - I have the advantage of an open mind.
Let me walk you through the very informal thought process that led me to think, after using a boilerplate with everything in the Relay kitchen sink, that I still needed something like Redux.
In more than a few scenarios, sufficient Redux boilerplate takes less time and much less thought than sufficient GraphQL and Relay boilerplate.
The abstraction of actions in Redux is a good thing that keeps many programmers out of trouble, by forcing them to separate concerns and to encapsulate the internals of a request (how it works) from the mere existence of one.
Specifically, if I am making a canvas editor with lots of tools and brushes, it seems like it would take a long time to get started the right way in Relay, making all those mutators and queries and so on with care for persistence.
But if I say, you know, I don't need my app to know every state ever of everything, then without something like Redux there is no alternative to either managing state via some sort of big container object or winging it, both of which are inferior to Redux.
Therefore my instinct that Redux has its place in this scenario is the right instinct.
However, I might just not understand how GraphQL and relay are supposed to be used.
Therefore, I am asking for concrete answers about 1) whether this is a fairly objective or subjective question, 2) whether there is a consensus or not, and 3) whether I should care.
One more thing: if it is the case that Redux is fair game in such a scenario, is it still a good rule of thumb that my app ought to have a single store? Or can I start using Redux more modularly and in more of an ad hoc fashion?
By the way, here is a simpler scenario: I want to use a Stepper from material-ui, which requires state. Without Redux, my choices are to faithfully do that at the Relay level or below, wing it in the components, or try to fudge it somehow or mix approaches. The only sound option is the first, and that takes time.

Is it good form in Java to place your DB functions in a dedicated class/methods?

In writing a relatively simple app, I was considering making a class that handled all the database interaction. I was going to construct all the prepared statements in the class. That way, any DB changes would (probably) only result in changes to that one class. (Also, it puts the DB user ID & password in one class.)
For example, I was planning to write a class with a method to register the DB driver, another to make a connection, another to read, another to write, and yet another method to update.
Besides ease of maintaining the code, does this offer any other benefit? Maybe in a multithreaded context?
Also, I was planning to pass the variables to bind to the prepared statements as arguments to the query methods, and to return the result set as well.
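For concreteness, here is a minimal sketch of the kind of class I have in mind (the users table and column names are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// All JDBC details live in this one class; callers pass bind variables
// and receive results, so schema changes only touch this file.
public class UserDao {
    private final String url;
    private final String user;
    private final String password;

    public UserDao(String url, String user, String password) {
        this.url = url;          // credentials injected, not hardcoded
        this.user = user;
        this.password = password;
    }

    // JDBC 4+ drivers register themselves, so no Class.forName is needed.
    private Connection connect() throws SQLException {
        return DriverManager.getConnection(url, user, password);
    }

    public String findEmail(long userId) throws SQLException {
        String sql = "SELECT email FROM users WHERE id = ?";
        try (Connection con = connect();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("email") : null;
            }
        }
    }

    public int updateEmail(long userId, String email) throws SQLException {
        String sql = "UPDATE users SET email = ? WHERE id = ?";
        try (Connection con = connect();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, email);
            ps.setLong(2, userId);
            return ps.executeUpdate();
        }
    }
}
```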
Am I over thinking this?
TIA.
I would say using an ORM like Hibernate is a much better choice. It will reduce a lot of boilerplate code, and it makes it much easier to read and write data or to write migrations (adding new fields or tables).
I used a similar approach for my college project, which was purely Java. I created a singleton class Database with all the functions; the singleton ensures there is only one copy of the Database methods.
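A minimal sketch of that singleton shape (the class name and lazy initialization are illustrative only):

```java
public final class Database {
    private static Database instance;

    private Database() { }  // prevent outside instantiation

    // Lazily create the single shared instance; synchronized so that
    // two threads cannot create two copies.
    public static synchronized Database getInstance() {
        if (instance == null) {
            instance = new Database();
        }
        return instance;
    }

    // the query/update methods described above would live here
}
```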
This sounds "by definition" to be encapsulation of the DB and then a adapter/bridge/facade-'design pattern' variation (depending on your exact implementation)
So no, I don't think you are over-thinking this.
You could use DBUtil, a library encapsulating DB access and operations (the same thing you are trying to do).
You should not hardcode credentials in your class, since they could be recovered by decompiling your classes.

Caching a TaskScheduler retrieved from FromCurrentSynchronizationContext in a WPF app

Does it make sense to store/cache the TaskScheduler returned from TaskScheduler.FromCurrentSynchronizationContext while loading a WPF app and use it everywhere from then on? Are there any drawbacks to this kind of usage?
What I mean by caching is storing a reference to the TaskScheduler in a singleton and making it available to all parts of my app, probably with the help of a DI/IoC container or, worst case, in a bare ol' singleton.
As Drew says, there's no performance benefit to doing this. But there might be other reasons to hold onto a TaskScheduler.
One reason you might want to do it is that by the time you need a TaskScheduler it may be too late to call FromCurrentSynchronizationContext because you may no longer be able to be certain that you are in the right context. (E.g., perhaps you can guarantee that your constructor runs in the right context, but you have no guarantees about when any of the other methods of your class are called.)
Since the only way to obtain a TaskScheduler for a SynchronizationContext is through the FromCurrentSynchronizationContext method, you would need to store a reference to the TaskScheduler itself, rather than just grabbing SynchronizationContext.Current during your constructor. But I'd probably avoid calling this "caching" because that word implies that you're doing it for performance reasons, when in fact you'd be doing it out of necessity.
Another possibility is that you might have code that has no business knowing which particular TaskScheduler it is using, but which still needs to use a scheduler because it fires off new tasks. (If you start new tasks, you're choosing a scheduler even if you don't realise it. If you don't explicitly choose which scheduler to use, you'll get the default one, which isn't always the right thing.) I've written code where this is the case: methods that accept a TaskScheduler object as an argument and use that. So this is another scenario where you might want to keep hold of a reference to a scheduler. (I was using it because I wanted certain IO operations to happen on a particular thread, so I was using a custom scheduler.)
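TaskScheduler is .NET-specific, but the same idea can be sketched with Java's Executor as a rough analogue (the class and executor names are hypothetical): capture the right scheduler while you can still identify it, or let callers pass it in, instead of assuming a default.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

class ReportLoader {
    private final Executor uiExecutor;

    // The caller guarantees construction happens where the right
    // executor is known (e.g. the UI thread's dispatcher).
    ReportLoader(Executor uiExecutor) {
        this.uiExecutor = uiExecutor;
    }

    void load() {
        CompletableFuture
            .supplyAsync(() -> "report data")     // runs on the default pool
            .thenAcceptAsync(
                data -> System.out.println(data), // continues on the UI executor
                uiExecutor);
    }
}
```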
Having said all that, an application-wide singleton doesn't sound like a great idea to me, because it tends to make testing harder. And it also implies that the code grabbing that shared scheduler is making assumptions about which scheduler it should be using, and that might be a code smell.
The underlying implementation of FromCurrentSynchronizationContext just creates an instance of an internal class named SynchronizationContextTaskScheduler, which is extremely lightweight. All it does is cache the SynchronizationContext it finds when constructed; its QueueTask implementation then simply does a Post to that SynchronizationContext to execute the Task.
So, all that said, I would not bother caching these instances at all.

Has inheritance become bad?

Personally, I think inheritance is a great tool, that, when applied reasonably, can greatly simplify code.
However, it seems to me that many modern tools dislike inheritance. Let's take a simple example: serializing a class to XML. As soon as inheritance is involved, this can easily turn into a mess, especially if you're trying to serialize a derived class using the base class serializer.
Sure, we can work around that, with something like a KnownType attribute and such. Besides being an itch in your code that you have to remember to update every time you add a derived class, it fails, too, if you receive a class from outside your scope that was not known at compile time. (Okay, in some cases you can still work around that, for instance using the NetDataContractSerializer in .NET. Surely a certain advancement.)
In any case, the basic principle still exists: Serialization and inheritance don't mix well. Considering the huge list of programming strategies that became possible and even common in the past decade, I feel tempted to say that inheritance should be avoided in areas that relate to serialization (in particular remoting and databases).
Does that make sense? Or am I mixing things up? How do you handle inheritance and serialization?
There are indeed a few gotchas with inheritance and serialization. One is that it leads to an asymmetry between serialization and deserialization: if a class is subclassed, serialization will work transparently, but deserialization will fail unless the deserializer is made aware of the new class. That's why we have annotations such as @XmlSeeAlso to annotate data for XML serialization.
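A minimal JAXB sketch of that asymmetry fix (hypothetical Shape/Circle classes; uses the javax.xml.bind API): a context built from the base class alone only handles the subclasses that @XmlSeeAlso declares.

```java
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlSeeAlso;

// The base class must enumerate its subclasses so that a context
// built from Shape.class alone can (de)serialize them.
@XmlRootElement
@XmlSeeAlso({Circle.class})
abstract class Shape {
    public String name;
}

@XmlRootElement
class Circle extends Shape {
    public double radius;
}

public class SeeAlsoDemo {
    public static void main(String[] args) throws Exception {
        // Without @XmlSeeAlso, Circle would be unknown to this context.
        JAXBContext ctx = JAXBContext.newInstance(Shape.class);
        Circle c = new Circle();
        c.name = "c1";
        c.radius = 2.0;
        StringWriter out = new StringWriter();
        ctx.createMarshaller().marshal(c, out);
        System.out.println(out);
    }
}
```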
These problems are, however, not new. They are frequently discussed under the open-world/closed-world terminology: either you assume you know the whole world of classes, or you accept that new classes may be added by third parties. Under a closed-world assumption, serialization isn't much of a problem; it's more problematic under an open-world assumption.
But inheritance under the open-world assumption has other gotchas anyway. E.g. if you remove a protected method from your classes and refactor accordingly, how can you ensure that there isn't a third-party class that was using it? In an open world, the public and internal API of your classes must be considered frozen once made available to others, and you must take great care in evolving the system.
There are other, more technical, internal details of how serialization works that can be surprising. That's for Java, but I'm pretty sure .NET has similar issues. See, e.g., "Serialization Killer" by Gilad Bracha, or the serialization and security manager bug exploit.
I ran into this on my current project, and while this might not be the best way, I created a service layer of sorts for it, with its own classes. I think it ended up being named an ObjectToSerialized translator, plus a couple of interfaces. Typically this was one-to-one (the "object" and the "serialized" class had the exact same properties), so adding something to the interface would let you know: "hey, add this over here too".
I want to say I had an IToSerialized interface with a simple method on it for generic purposes, and I used AutoMapper for most of the conversions. Sure, it's a bit more code, but hey, whatever; it worked and doesn't gum up other things.
