How many concurrent users can use Tomcat 6 - tomcat6

I have created a web application and now I need to deploy this dynamic web application on Tomcat 6. I want to know how many users can log in to my application at one time.

Your application can be accessed by any number of users at the same time; the only limit is how many requests your server can handle. Tomcat serves each request on a worker thread, and threads can be created as long as server memory is available, up to the configured maximum.
For more background, see http://www.tomcatexpert.com/ask-the-experts/simultaneous-users.
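In Tomcat 6 that practical ceiling is set by the HTTP Connector in conf/server.xml. A minimal sketch, assuming the default HTTP/1.1 connector (the values shown are examples, not recommendations):

```xml
<!-- conf/server.xml (Tomcat 6): HTTP connector -->
<!-- maxThreads: max simultaneous request-processing threads -->
<!-- acceptCount: queue length for requests when all threads are busy -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000"
           redirectPort="8443" />
```

Requests beyond maxThreads + acceptCount are refused, so these two attributes, together with available memory, bound your effective concurrency.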

Related

Run different web apps as different users in Tomcat

Is it possible to run each webapp in Tomcat as a specific user? My goal is to authenticate each app as a domain user against SQL Server using integrated security.
If you mean an OS user: no. Tomcat is a single process, which runs as one OS user.
You can provide different databases (e.g., separate connection pools) to each application, but they will all run within the same process.
Alternatively, you can run several separate Tomcat instances (necessarily on different ports) and combine them behind a frontend Apache httpd or nginx that forwards requests to the respective Tomcat. This way each Tomcat can run as its own OS user while still appearing as a single web server on the standard ports 80 and 443.
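As a minimal sketch of the nginx variant (host names, ports, and paths below are illustrative assumptions):

```nginx
# nginx.conf fragment: one frontend on port 80, each path proxied to a
# separate Tomcat instance running as its own OS user (ports are examples)
server {
    listen 80;
    server_name example.com;

    location /app1/ {
        proxy_pass http://127.0.0.1:8081;   # Tomcat instance run by user1
        proxy_set_header Host $host;
    }

    location /app2/ {
        proxy_pass http://127.0.0.1:8082;   # Tomcat instance run by user2
        proxy_set_header Host $host;
    }
}
```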
If you want to authenticate against Active Directory, there is a how-to on the Apache Tomcat site. Note that this affects neither the user Tomcat runs as nor the user accessing the DB; it is only the user logging in to the web application.
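For reference, a minimal sketch of what that Tomcat-side configuration can look like using JNDIRealm (all URLs, DNs, and credentials below are placeholders; the exact attributes for your directory may differ):

```xml
<!-- conf/server.xml (sketch): JNDIRealm authenticating web users
     against Active Directory; values are placeholders -->
<Realm className="org.apache.catalina.realm.JNDIRealm"
       connectionURL="ldap://ad.example.com:389"
       connectionName="cn=binduser,ou=people,dc=example,dc=com"
       connectionPassword="secret"
       userBase="ou=people,dc=example,dc=com"
       userSearch="(sAMAccountName={0})"
       userSubtree="true"
       roleBase="ou=groups,dc=example,dc=com"
       roleName="cn"
       roleSearch="(member={0})" />
```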

Web application authentication design pattern

I'm working on a web application and I'm having a problem accessing the database on the server side because there is no user for the DB proxy to map. In other words, I have a method that starts as soon as the application comes online and calls itself every 5 seconds to check for new messages. If it receives a specified message, it goes to the database and fetches whatever it needs. However, this server-side database access fails because there is no user for the DB proxy to map. So what is a good design pattern for this type of application? Do I need an application account for this kind of automated process?
Btw, I'm using WebLogic with JPA 2.1 for the database work.
Thanks in advance.
First of all, what exactly do you mean by "no user for the DB proxy to map"?
I assume you mean that you don't have a session user who connects to the database?
If so, you usually wouldn't do that anyway; you nearly always have a dedicated database user for the application itself. Then, whether a user action triggers a database call or the backend triggers it by some scheduling, it is always the same database user who performs it. In your Java EE application, you'd configure a datasource containing this user, and all parts of your application use the related entity manager for persistence operations and queries.
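A minimal sketch of that arrangement in Java EE, assuming a hypothetical persistence unit named appPU and a hypothetical Message entity (an EJB calendar timer covers the every-5-seconds requirement without any HTTP request or session user):

```java
import java.util.List;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Singleton
@Startup
public class MessagePoller {

    // The persistence unit "appPU" (a hypothetical name) is backed by a
    // container-managed datasource; the DB credentials live in that
    // datasource configuration, not in any web session.
    @PersistenceContext(unitName = "appPU")
    private EntityManager em;

    // EJB calendar timer: fires every 5 seconds with no logged-in user.
    @Schedule(hour = "*", minute = "*", second = "*/5", persistent = false)
    public void checkForNewMessages() {
        // "Message" is a hypothetical JPA entity with a "processed" flag.
        List<?> pending = em
            .createQuery("SELECT m FROM Message m WHERE m.processed = false")
            .getResultList();
        pending.forEach(m -> {
            // ... handle each message, then mark it processed ...
        });
    }
}
```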

How does the actual data get synced between the servers inside a web farm or cloud?

1- Let's say the web app is hosted on some cloud like Azure, AWS, etc.
2- And let's say a user changes his profile details on my web app...
3- I am assuming that the request with the new data will hit one of the servers/VMs inside the cloud.
4- Let's say the data gets saved in a SQL Server DB hosted on the same server/VM the request landed on...
Now the questions. What I am really confused about is this:
1- The data will be saved in one single database in the first place, so how does it get synced to the other servers instantly (if that happens at all; I am not sure it does)? There is no guarantee that the next request from the same user will land on the same server...
2- And if the above scenario is invalid and there instead exists a shared database server for all application servers inside the cloud, isn't that going to be a bottleneck, since ultimately all the application servers/VMs will be hitting the same DB at once?
I know it's a broad question and I don't know if I have explained it properly, but please ask me about anything I haven't made clear.
Any help would be great, whether it is a good link explaining the internals or a series of Q&A with me, as I have to design such a mechanism and I couldn't find any standard approaches or how the market leaders are doing it...

Azure Web App - Request Timeout Issue

I have an MVC5 web app running on Azure. Primarily this is a website, but I also have a CRON job running (triggered from an external source) which calls a URL (a GET request) to carry out some housekeeping.
This task is asynchronous and can take up to, and sometimes over, Azure's default timeout of 230 seconds. I'm regularly getting 500 errors because of this.
Now, I've done a bit of reading and this looks like it's something to do with the Azure Load Balancer settings. I've also found some threads about altering that timeout in various contexts, but I'm wondering if anyone has experience altering the Azure Load Balancer timeout in the context of a Web App. Is it a straightforward process?
but I'm wondering if anyone has experience in altering the Azure Load Balancer timeout in the context of a Web App?
You can configure the Azure Load Balancer settings for virtual machines and cloud services.
So, if you want to do that, I suggest you deploy the web app to a Virtual Machine or migrate it to Cloud Services.
For more detail, you could refer to the link.
If possible, you could also try to optimize your query or execution code so the request finishes within the timeout.
This is a little bit of an old question but I ran into it while trying to figure out what was going on with my app so I figured I would post an answer here in case anyone else does the same.
An Azure Load Balancer (created by any means), which usually comes along with an external IP address (at least when created via Kubernetes / AKS / Helm), has the 4-minute (240-second) idle connection timeout you referred to.
That being said, there are two different types of Public IP address: Basic (which is almost always the default, and probably what you created) and Standard. More information in the docs: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-overview
You can modify the idle timeout; see the following doc: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-tcp-idle-timeout
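As a hedged sketch, with the Azure CLI that kind of change looks roughly like the following (the resource names are placeholders, and the exact flag names should be verified against your CLI version with `az network lb rule update --help`):

```bash
# Raise the idle timeout on a load-balancing rule to 15 minutes
# (resource group, load balancer, and rule names are placeholders)
az network lb rule update \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHttpRule \
    --idle-timeout 15
```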
That being said, changing the timeout may or may not solve your problem. In our case we had a Rails app that was connecting to a database outside of Azure. As soon as you add a Load Balancer with a Public IP, all traffic will exit via that Public IP and be bound by the 4-minute idle timeout.
That's fine, except that Rails assumes a connection to a database is not going to be cut frequently, which in this setup happens all the time.
We ended up implementing a connection-pooling service that sat between our Rails app and our real database (PgBouncer, which is specific to Postgres DBs). That service monitored connections and reconnected before the Azure LB timeout was reached.
It took a little while to implement but in our case it works flawlessly. You can see some more details over here: What Azure Kubernetes (AKS) 'Time-out' happens to disconnect connections in/out of a Pod in my Cluster?
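For anyone wiring this up, a minimal sketch of the PgBouncer side of that setup (the database name, host, and values are illustrative assumptions; the idea is simply to recycle server connections before Azure's 240-second idle timeout silently cuts them):

```ini
; pgbouncer.ini (sketch): drop idle server connections before the
; Azure LB's 240-second idle timeout can sever them mid-pool
[databases]
; "appdb" and the host are placeholders for your real database
appdb = host=mydb.example.com port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
; close server connections idle longer than 200s (under the 240s limit)
server_idle_timeout = 200
```

The Rails app then points its database host at PgBouncer (port 6432 here) instead of the database directly.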
The longest timeout you can set for a Public IP / Load Balancer is 30 minutes. If you have a connection that you would like to keep idle longer than that, you may be out of luck; as of now, 30 minutes is the max.

Single Page Application Server Separation of Concern

I'm just in the process of putting together the base framework for a multi-tenant enterprise system.
The client side will be an ASP.NET MVC web page talking to the database through the ASP.NET Web API via AJAX.
My question really revolves around scalability. Should I separate the client from the server? I.e., client-side/front-end code/views in one project and the Web API in a separate server project.
Then, if one server (server A) begins to peak out under load, all I need to do is create another server instance (server B) and point all new customers to the Web APIs on server B.
Or should it all be integrated as one project, scaling out the SQL Server side of things as load increases (dynamic cloud scaling)?
Need some advice before throwing our hats into the ring.
Thanks in advance
We've gone with the route of separating out the API and the single page application. The reason here is that it forces you to treat the single page application as just another client, which in turn results in an API that provides all the functionality you need for a full client...
In terms of deployment we stick the single page application at the website root, with a /api application containing the API deployment. On premises we can then use Application Request Routing or some other content-aware routing mechanism to send requests out to different servers if necessary. In Azure we use the multiple-websites-per-role mechanism (http://msdn.microsoft.com/en-us/library/windowsazure/gg433110.aspx) and scale this role depending on load.
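To illustrate the content-aware routing idea, here is a sketch of an IIS URL Rewrite rule of the kind ARR can use to proxy /api traffic to a separate server (the backend host name is a placeholder, and ARR proxying must be enabled at the server level):

```xml
<!-- web.config (sketch): route /api/* to a separate backend via
     IIS URL Rewrite + ARR; the backend host is a placeholder -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="RouteApiToBackend" stopProcessing="true">
        <match url="^api/(.*)" />
        <action type="Rewrite" url="http://api-backend.internal/api/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```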
Appearing to live in the same domain makes things easier because you don't have to bother with the ugliness that is JSONP or CORS!
Cheers,
Dean
