Can someone please explain what exactly "Max Root View Port" means in the context of an ADF (12c) application? I know the default value is 20 if it's not specified in adf-config.xml. I've read the Oracle documentation, but I don't think it makes clear what the actual impact of changing the setting is; I mean the impact on end users of the application.
Is this the same concept as "max concurrent users" of the application?
Many thanks,
Brian.
No, it's not the same as "max concurrent users". The setting is concerned with sessions rather than users, and it's not a maximum session count either.
A root view port is basically a main application window.
One session can have many application windows open in several browser tabs, and so on. Since this feature can be abused, the number of these "tabs" is restricted to a large but reasonable value (there is also a hard lower limit of 5 view ports). Exceeding this value causes the least recently used view port to expire. Since different users don't share one session, the setting doesn't limit the maximum concurrent user count.
However, raising the root view port limit may allow one session to consume more memory, affecting your application's capacity.
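To picture the expiry behavior, think of the root view ports in a session as a small least-recently-used cache. Below is a minimal Java sketch of that idea; it illustrates LRU eviction only, it is not ADF's actual implementation, and every name in it is made up:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustration only: a per-session LRU collection of root view ports.
    // ADF's real bookkeeping is internal; all names here are hypothetical.
    public class ViewPortLru {

        private final int maxRootViewPorts; // e.g. 20, the documented default

        // accessOrder = true makes iteration order "least recently used first"
        private final Map<String, Object> viewPorts =
                new LinkedHashMap<String, Object>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                        // Exceeding the limit expires the least recently used view port.
                        return size() > maxRootViewPorts;
                    }
                };

        public ViewPortLru(int maxRootViewPorts) {
            this.maxRootViewPorts = maxRootViewPorts;
        }

        // Called whenever a window/tab (root view port) is opened or used.
        public void touch(String viewPortId) {
            viewPorts.put(viewPortId, Boolean.TRUE);
        }

        public boolean isLive(String viewPortId) {
            return viewPorts.containsKey(viewPortId);
        }
    }

In end-user terms, someone who keeps more browser tabs open than the limit allows will typically find the state of the least recently used tab gone when they come back to it.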
I am working on a CRUD webapp where users create and manage their data assets.
Since I don't want to be bombarded with tons of data from day one, I think it would be reasonable to set limits where possible in the database.
For example, limit the number of created items to A = 10, B = 20, C = 50; then, if a user reaches a limit, have a look at their account and decide whether to update the rules, provided that doesn't break the code or hurt performance.
Is it good practice at all to set such limits for performance/maintenance reasons rather than business reasons, or should I treat data entities as unlimited and try to make the app perform well with lots of data from the start?
You are suggesting testing your application's performance on real users, which is a bad idea. In addition, your solution will inconvenience users by limiting them when there is no reason for it (at least from the user's point of view), which decreases user satisfaction.
Instead, you should test performance before you release. That will give you an understanding of your application's and infrastructure's limits under high load, and it will help you find and eliminate bottlenecks in your code. You can perform such testing with tools like JMeter, among many others.
Also, if you are worried about tons of data at launch, you can release your application as a private beta: just make a simple form where users can request early access and receive an invite. By sending invites you can easily control the growth of your user base and, therefore, the load on your app.
But you should, of course, create limits where they are necessary, for example limiting the number of items per page.
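To make that last point concrete, a page-size limit is enforced per query rather than as a cap on how much data a user may own. Here is a minimal JDBC sketch, assuming a SQL database that supports LIMIT/OFFSET; the items table and column names are invented for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class ItemPage {

        private static final int MAX_PAGE_SIZE = 50; // hard cap per request

        // Fetch one page of a user's items from a hypothetical "items" table.
        public static List<String> fetchPage(Connection conn, long userId,
                                             int page, int pageSize) throws SQLException {
            int size = Math.min(pageSize, MAX_PAGE_SIZE); // never trust the client
            String sql = "SELECT name FROM items WHERE user_id = ? "
                       + "ORDER BY created_at DESC LIMIT ? OFFSET ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, userId);
                ps.setInt(2, size);
                ps.setInt(3, page * size);
                List<String> names = new ArrayList<>();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        names.add(rs.getString("name"));
                    }
                }
                return names;
            }
        }
    }

This keeps each request cheap regardless of how many items the user eventually accumulates.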
I'm trying to find a way to access a centralized database for both retrieval and update.
The following is what I'm looking for:
Server 1 holds a variable, for example:

    int counter;

Server 2 will interact with the user and will increase the counter whenever the user uses the service, until a certain threshold is reached. Once this threshold is reached, server 2 will start rejecting the user's access.
Also, the user will be able to use multiple servers (like server 2) from multiple locations, and each time the user accesses any server the counter will be increased.
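To make it concrete, each server would effectively need to run a check like the Java sketch below. The in-memory map is only a stand-in for the centralized store, which is exactly the part I don't know how to build:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch of the desired behavior; the map stands in for the
    // centralized counter store shared by all servers.
    public class UsageCounter {

        private static final int THRESHOLD = 100; // example limit

        private final ConcurrentHashMap<String, AtomicInteger> counts =
                new ConcurrentHashMap<>();

        // Called by any server on each user access; returns false once
        // the user has exhausted their quota.
        public boolean tryAccess(String userId) {
            int used = counts
                    .computeIfAbsent(userId, id -> new AtomicInteger())
                    .incrementAndGet();
            return used <= THRESHOLD;
        }
    }
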
I tried Google, but it's hard to search for something without a name.
One approach to designing this is to shard by user, i.e. split the users between your servers depending on the user's ID. That is, if you have 10 servers, then users with IDs ending in 2 would have all of their data stored on server 2, and so on. This assumes that user IDs are distributed uniformly.
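A minimal Java sketch of that routing rule (the host list and the modulo scheme are just assumptions for illustration):

    // Route each user to the server that owns their master record,
    // using the last digit of the ID (i.e. ID modulo server count).
    public class ShardRouter {

        private final String[] serverHosts; // e.g. 10 hosts, indexed 0..9

        public ShardRouter(String[] serverHosts) {
            this.serverHosts = serverHosts;
        }

        public String serverFor(long userId) {
            int shard = (int) (userId % serverHosts.length);
            return serverHosts[shard];
        }
    }

With 10 hosts, serverFor(4712) lands on serverHosts[2], so every read and update of that user's counter goes to the same place.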
Another approach is to shard the users by location - if you have servers in Asia vs. Europe, for example. You'd need a property in the User record that tells you where the user is located; based on that, you'll know which server to route them to.
Ultimately, all of these design options have a concept of "where does the master record for a user reside?" Each of these approaches attempts to definitively answer this question.
A different category of approaches has to do with multi-master replication, which is supported by some database vendors; this approach does not scale as well (i.e. it's hard to get it to scale to 20 servers), but you might want to look into it, too.
On the dashboard, there are many charts that we can look at.
Which one would be the best to use in order to know when an additional instance will be needed to handle additional traffic?
Some possible ways I thought about:
When the 'Active instances' line is about to get above the current number of 'Billable instances'
When 'Milliseconds used per second' is about to break over 1000
Would either of these be right?
Any other way?
App Engine will spin up a new instance (or make an idle instance active) whenever the pending latency (the amount of time, in milliseconds, that a request has been waiting in the queue) has reached the value you specified in the billing settings. Unfortunately, there's currently no graph that corresponds to that.
There's something I really don't get about the new pricing. As far as I can see, I am now billed (among other things) for the number of "instance hours". On the other hand, a while back I opted for the "Always On" feature, which has since kept 3 "Resident" instances of my application running at all times.
Now, as far as I can see, under the old pricing model, where I was charged by CPU time used, the "Always On" feature was great: not only did it make the app more responsive, but since instances were no longer started up and torn down when traffic was scarce, the CPU time used was lower (and indeed this is visible on the dashboard).
But now, since I'm billed by instance hours, doesn't the fact that I have this "Always On" option active add a lot of money to my bill, even when those instances are not actually doing anything (simply because they're just there, always on)?
I'm asking because, since the new pricing model was activated, I have seen a whopping increase in Frontend Instance Hours (right now it's 29.21 for the last 9 hours), whereas before, CPU time never really came close to exceeding the free quota.
The "Always On" feature does not exist as of 1.6.0. The equivalent replacement is setting the Min Idle Instances slider to 3 (and leaving the Max Idle Instances at "Automatic") in your Application Settings in the Admin Console.
"...add a lot of money to my bill, even when those instances are not actually doing anything (simply because they're just there, always on)?"
The problem is that they are doing something. They are occupying RAM. The new pricing model attempts to more accurately model the underlying costs to Google, or at least that's what they tell us. You can change how many instances are always on by going to the admin interface. If you aren't really using all 3, try going down to 2 or 1. If your traffic spikes, more instances will be started up. You can also set a value for how much latency you want users to endure before new instances are spun up.
The scheduler might be spinning up more than one instance to handle concurrent requests.
Is this in Java? You could try making the app threaded, to make it more responsive and lower its latency.
You could also tweak the scheduler parameters to discourage it from spinning up more instances.
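On the Java runtime, "making it threaded" amounts to declaring the app thread-safe so a single instance can serve several requests concurrently, instead of the scheduler starting an extra instance per simultaneous request. A sketch of what that looks like (the servlet itself is a made-up example; the <threadsafe> flag in appengine-web.xml is the real switch):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // With <threadsafe>true</threadsafe> set in appengine-web.xml,
    // one instance may run doGet concurrently on several threads,
    // so avoid mutable servlet fields (or guard them properly).
    public class ApiServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("handled by a concurrent request thread");
        }
    }
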
I have developed a mobile application that makes extensive use of web services. It connects to my shared hosting server to get real-time information, so making sure the server is up is extremely important; otherwise I am going to lose customers.
Some background: I have changed no fewer than 3 hosting providers because they were not very reliable in terms of uptime. My current host is way better than the previous three; I have used it for over a year now, and they have a 99.9% uptime guarantee and all, but today I had about 3 hours of downtime, which is why I am creating this post.
Not all of us small developers can afford expensive dedicated hosting, or have our own servers at home (which would be no guarantee of zero downtime either). In my case, shared hosting for a very reasonable $10-15/month is OK, except for those few hours it might be down.
One idea I have to deal with this is the following: have a second (different) shared host with another provider, and make the app fall back to this second host when the primary host is down. It's very unlikely that both will be down at the same time, and I would pay only a few dollars extra per month for this, not 10 times more per month as I would for dedicated hosting.
I am sure I am not the first person in this situation. Has anyone found a good way to deal with this problem that doesn't require deep pockets? We are, after all, talking about only short periods of downtime on the primary server.
Thanks in advance for your suggestions.
If you are relying on a third-party host and don't want to pay for greater reliability, then a second server is the way to go. Depending on your application and budget, you will also need to consider:
Database access and synchronization
Hosts in different physical locations
Multiple domain names and/or load balancing
If you opt to use multiple hosts and switch to a backup host when the first one fails, then you should aim to have both (or all) of them in use at all times. This way you won't get caught out having to switch over to a cold "backup" server, and by always using every host you can be sure that they are all up to date and working.
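As a sketch of the client side of this, the app can keep an ordered list of base URLs and walk down it when a request fails; the URLs and timeout values below are placeholders:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Try each host in order; fall through to the next on failure.
    public class FailoverClient {

        private static final String[] BASE_URLS = {
            "https://primary.example.com/api",   // main shared host
            "https://backup.example.net/api"     // second provider
        };

        public static InputStream fetch(String path) throws IOException {
            IOException last = null;
            for (String base : BASE_URLS) {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(base + path).openConnection();
                    conn.setConnectTimeout(3000); // fail fast, then try the backup
                    conn.setReadTimeout(5000);
                    if (conn.getResponseCode() == 200) {
                        return conn.getInputStream();
                    }
                    last = new IOException("HTTP " + conn.getResponseCode()
                            + " from " + base);
                } catch (IOException e) {
                    last = e; // host down or unreachable; try the next one
                }
            }
            throw last != null ? last : new IOException("no hosts configured");
        }
    }

If the order is rotated or randomized per request, both hosts stay in regular use, which matches the advice above about never relying on a cold standby.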
If your service is so critical that a couple of hours of downtime would be unacceptable to your users, then it should be easy to get the users to pay for that kind of reliability. That could fund hosting with a provider who can deliver greater uptime, or a second site, and it will also help fund the time and effort to set all this up. ;)