Using getApplicationContext() vs. referencing a custom Application class in Android

I've been researching ways to store global settings for my Android application, and so far the best way seems to be to extend the Application class and store the shared data inside it, as described here. I've discovered that instead of using ((CustomApplicationClass) getApplicationContext()).getSomething(), I can do the same thing by calling the static method on the class directly, like this: CustomApplicationClass.getSomething(), and both ways work just fine.
Here's a piece from CustomApplicationClass:
public class CustomApplicationClass extends Application {
    private static boolean something;

    @Override
    public void onCreate() {
        [...]
    }

    public static boolean isSomething() {
        return something;
    }

    public static void setSomething(boolean something) {
        CustomApplicationClass.something = something;
    }
}
Now, if I want to retrieve the value of the "something" variable somewhere in my code, say, from one of my application's Activities, is there a difference between:
boolean var1 = ((CustomApplicationClass)getApplicationContext()).isSomething();
and
boolean var1 = CustomApplicationClass.isSomething();
? When running the application, both work fine. Is the second way safe to use, or is it inadvisable?

I've been researching ways to store global settings for my Android application, and so far the best way seems to be to extend the Application class and store the shared data inside it, as described here.
Except that you're not doing that.
I've discovered that instead of using ((CustomApplicationClass) getApplicationContext()).getSomething(), I can do the same thing by calling the static method on the class directly, like this: CustomApplicationClass.getSomething(), and both ways work just fine.
Of course. You could just as easily have had CustomApplicationClass extend Object and then executed CustomApplicationClass.getSomething(). You gain nothing with your current approach over an ordinary Java singleton, and you lose flexibility, since an application can have only one custom subclass of Application.
Is the second way safe to use, or is it inadvisable?
The first way is pointless, since your data member and methods are static.
Either:
Make your stuff in CustomApplicationClass not be static, and then use getApplicationContext().
Refactor CustomApplicationClass to not extend Application, and then use the static data member and/or accessor methods, or switch more formally to the Java singleton pattern.
Personally, I would go with option #2.
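As a rough sketch of option #2 (the class name and the synchronization choice are assumptions of mine, not part of the original code), the shared state can live in a plain Java singleton that does not extend Application:

public final class AppSettings {
    private static final AppSettings INSTANCE = new AppSettings();

    private boolean something;

    // private constructor: INSTANCE is the only instance
    private AppSettings() {
    }

    public static AppSettings getInstance() {
        return INSTANCE;
    }

    public synchronized boolean isSomething() {
        return something;
    }

    public synchronized void setSomething(boolean something) {
        this.something = something;
    }
}

Any class in the app can then call AppSettings.getInstance().isSomething() without casting a Context or declaring a custom Application subclass in the manifest.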

If you check the API documentation of android.app.Application (http://developer.android.com/reference/android/app/Application.html), you will find the following in the Class Overview:
Base class for those who need to maintain global application state. You can provide your own implementation by specifying its name in your AndroidManifest.xml's <application> tag, which will cause that class to be instantiated for you when the process for your application/package is created.
There is normally no need to subclass Application. In most situations, static singletons can provide the same functionality in a more modular way. If your singleton needs a global context (for example to register broadcast receivers), the function to retrieve it can be given a Context which internally uses Context.getApplicationContext() when first constructing the singleton.
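As a hedged illustration of the last sentence of that quote, a singleton that needs a global context could be handed a Context when it is first constructed; the class name here is made up:

import android.content.Context;

public final class SettingsManager {
    private static SettingsManager instance;

    private final Context appContext;

    private SettingsManager(Context context) {
        // keep only the application context so that no Activity instance is leaked
        this.appContext = context.getApplicationContext();
    }

    public static synchronized SettingsManager getInstance(Context context) {
        if (instance == null) {
            instance = new SettingsManager(context);
        }
        return instance;
    }
}

A caller such as an Activity or a BroadcastReceiver would obtain it with SettingsManager.getInstance(this).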

Related

OOP composition and ORM

I am building a simple rate limiter to train my OOP skills, and I have some doubts regarding composition and ORM.
I have the following code:
interface RateLimiterService {
    void hit(String userid, long timestamp) throws UserDoesNotExistEx, TooManyHitsEx; // Option A
    SingleRateLimiter getUser(String userid) throws UserDoesNotExistEx; // Option B
    Optional<SingleRateLimiter> getUser(String userid); // Option C
}

class LocalRateLimiterService implements RateLimiterService {
    // Uses a hash table userid -> SingleRateLimiter
}

interface SingleRateLimiter {
    void hit(long timestamp) throws TooManyHitsEx;
}

class TimestampListSRL implements SingleRateLimiter {
    // Uses a list to store the timestamps and purges the expired ones at each call
}

class TokenBucketSRL implements SingleRateLimiter {
    // Uses the token bucket approach
}
My doubts are:
Which option should I use for the RateLimiterService interface?
Option A is usually called "method forwarding" or "delegation", and it is in line with the Law of Demeter. It protects the composed object by only exposing the intended methods and possibly adding some extra validation logic before forwarding the call. It therefore seems like a good solution when that is needed. However, when it is not (as in my example), this option creates a lot of redundant repetition that adds nothing useful.
Option B breaks encapsulation in a way, but it avoids repeating methods (the DRY principle). By picking A or B you always end up breaking some well-known principles or good practices. Is there another option?
Option C is the same as B but returns an Optional instead of throwing an exception. Which approach is considered better? (See the sketch after these questions.)
If the classes that implement RateLimiterService had a single composed SingleRateLimiter instead of a collection of SingleRateLimiters (which doesn't make much sense in this case, but I'm trying to be generic for situations where the composed object is not a collection), would the best option change from the one in 1.?
If I wanted to add a database to this system, what would be the best approach to "talk to" the database?
Creating a class DBRateLimiterService that implements RateLimiterService and has a private connection object to the database (basically a DAO)? In this case, the class knows nothing about the inner SingleRateLimiters besides the userid, since there are multiple implementations available/possible. So how can I take this approach without changing the current OOP architecture?
In addition, I would need to create a DAO for each SingleRateLimiter implementation too, right? In this case, the SingleRateLimiter is not a simple model object that has only getters and setters so it should also be a DAO, right? Its hit method must be implemented as a transaction in most cases (if not all). If this is the right approach, how can the two DAOs operate together and map to the same database table?
What other options could serve for this?
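For question 1, a minimal sketch of what options A and C could look like in LocalRateLimiterService may help make the trade-off concrete. This is only an illustration under my own assumptions: the map field is made up, and the interface is assumed to declare only the method being shown in each case.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class LocalRateLimiterService implements RateLimiterService {
    private final Map<String, SingleRateLimiter> limiters = new HashMap<>();

    // Option A: forward the call, keeping SingleRateLimiter hidden from callers
    public void hit(String userid, long timestamp) throws UserDoesNotExistEx, TooManyHitsEx {
        SingleRateLimiter limiter = limiters.get(userid);
        if (limiter == null) {
            throw new UserDoesNotExistEx();
        }
        limiter.hit(timestamp);
    }

    // Option C: expose the composed object and let the caller decide what to do with it
    public Optional<SingleRateLimiter> getUser(String userid) {
        return Optional.ofNullable(limiters.get(userid));
    }
}

With option A the caller writes service.hit(userid, now); with option C it retrieves the SingleRateLimiter (or an empty Optional) and calls hit itself. The first hides the composition, the second shifts the handling of missing users to the caller.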

Is it OK to create multiple modules in one assembly in Prism?

I want to create two modules, one describing the toolbar and one describing the menu feature, but I don't want to define them in two different assemblies. I tried doing it the following way and it works fine, but I'm afraid it might take twice as much memory as defining everything in one module. Below is my demo code; both classes are written in one project.
public class MainMenuModule : IModule {
    public void Initialize() {
        RegionHelper.RegisterViewWithRegion(Shell.RegionNames.Menu, typeof(Views.Menu));
    }
}

public class ToolBarModule : IModule {
    public void Initialize() {
        RegionHelper.RegisterViewWithRegion(Shell.RegionNames.ToolBarRegion, typeof(Views.ToolBar));
    }
}
Note that RegionHelper is a wrapper around the Prism region API.
That's fine, although I don't really see a use case for two modules in one assembly...
And by the way, the module definition classes become eligible for garbage collection once their Initialize methods return.
Is it OK to put two Prism modules in a single assembly? Well, yes. It will work, and there is really nothing stopping you. You can put as many module classes in a single assembly as you want.
Keep in mind, though, that modules are supposed to be loosely coupled functional units. If you put two modules in a single assembly, you can no longer load the first one without also loading the second, and vice versa. This may or may not be a problem, depending on how the modules are used by your application.
The possible down-sides of using too many assemblies are discussed here: Specific down-sides to many-'small'-assemblies?
This is generally not an issue.

One Windows Form needs access to the components of another. What is the easiest implementation?

In my project I'm using C++/CLI and Windows Forms. I have two Forms. One is executed in main()
Application::EnableVisualStyles();
Application::SetCompatibleTextRenderingDefault(false);
Application::Run(gcnew FormA);
Another form, FormB, is created from the FormA instance:
FormB^ fb = gcnew FormB();
fb->Show();
I need to change components of FormB from FormA. Normally they are in the private: section of the FormB class. Is there any nice way to do that other than simply making them all public? In native C++ I would use a friend class, but it's not allowed in C++/CLI.
C++/CLI has an access modifier that native C++ does not have. You are looking for internal:
It is empowered by the strong support for modules in .NET. It is broader than friend, but you still have a pretty hard guarantee that whoever is messing with your private parts is never more than a few cubicles away from you: code that accesses internal members must be compiled into the same assembly. So your FormB class must be in the same project as your FormA class. That is the common case.
If you need the equivalent of friend across modules, then you need the [InternalsVisibleTo] attribute. Exposing members through a public property would be another common way.
While providing public access to FormB's members may seem like a quick and easy solution, I would advise you to add some methods on FormB to perform said actions.
This way, you can call those methods from FormA, and at the same time retain proper encapsulation.
Hope this helps.

Calling a non-static method in a static context (main)

I know that non-static methods cannot be referenced from a static context; you have to make an instance of the class and call the method on that instance, or you can make the method static. I also know the reason why. But I cannot decide which is the best practice: making the method/variable static, or using an instance of the class to access the method/variable, and why?
Object-oriented languages work best when you use objects. If it's anything more than the most basic of applications, create a class to house the functionality and instantiate it. You'll just end up refactoring into classes later anyway.
The reason is that objects, instances, etc. all describe varying degrees of scope, allowing you to build complex programs from an amalgamation of encapsulated, fairly simple pieces of functionality.
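For example, here is a tiny illustration of the instance-based approach (the class and method names are made up for the example):

public class Greeter {
    private final String name;

    public Greeter(String name) {
        this.name = name;
    }

    // instance method: it needs a Greeter object to be called on
    public String greet() {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // main is static, so we create an instance instead of making greet() static
        Greeter greeter = new Greeter("world");
        System.out.println(greeter.greet());
    }
}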

Ninject ActivationBlock as Unit of Work

I have a WPF application with MVVM. Assuming object composition from the ViewModel down looks as follows:
MainViewModel
OrderManager
OrderRepository
EFContext
AnotherRepository
EFContext
UserManager
UserRepository
EFContext
My original approach was to inject dependencies (from the ViewModelLocator) into my View Model using .InCallScope() on the EFContext and .InTransientScope() for everything else. This results in being able to perform a "business transaction" across multiple business layer objects (Managers) that eventually underneath shared the same Entity Framework Context. I would simply Commit() said context at the end for a Unit of Work type scenario.
This worked as intended until I realized that I don't want long-lived Entity Framework contexts at the View Model level, because of the data integrity issues across multiple operations described HERE. I want to do something similar to my web projects, where I use .InRequestScope() for my Entity Framework context. In my desktop application I will define a unit of work which will serve as a business transaction, if you will; typically it will wrap everything within a button click or similar event/command. It seems that Ninject's ActivationBlock can do this for me.
internal static class Global
{
    public static ActivationBlock GetNinjectUoW()
    {
        // assume that NinjectSingleton is a static reference to the kernel configured with the necessary modules/bindings
        return new ActivationBlock(NinjectSingleton.Instance.Kernel);
    }
}
In my code I intend to use it as such:
// Inside a method that is raised by a WPF Button Command ...
using (ActivationBlock uow = Global.GetNinjectUoW())
{
    OrderManager orderManager = uow.Get<OrderManager>();
    UserManager userManager = uow.Get<UserManager>();

    Order order = orderManager.GetById(1);
    userManager.AddOrder(order);
    ....
    userManager.SaveChanges();
}
Questions:
To me this seems to replicate the way I do business on the web. Is there anything inherently wrong with this approach that I've missed?
Am I understanding correctly that all .Get<> calls using the activation block will produce "singletons" local to that block? What I mean is no matter how many times I ask for an OrderManager, it'll always give me the same one within the block. If OrderManager and UserManager compose the same repository underneath (say SpecialRepository), both will point to the same instance of the repository, and obviously all repositories underneath share the same instance of the Entity Framework context.
Both questions can be answered with yes:
Yes: this is service location, which you shouldn't do.
Yes: you understand it correctly.
A proper unit-of-work scope, implemented in Ninject.Extensions.UnitOfWork, solves this problem.
Setup:
_kernel.Bind<IService>().To<Service>().InUnitOfWorkScope();
Usage:
using (UnitOfWorkScope.Create())
{
    // resolves, async/await, manual TPL ops, etc.
}
