Is it possible to run each webapp in Tomcat as a specific user? My goal is to authenticate each app as a domain user against SQL Server using integrated security.
If you mean an OS user: no. Tomcat is one process, which runs as one OS user.
You can provide different databases (e.g. connection pools) to each application, but they will all run within the same process.
Alternatively, you can run several separate Tomcat instances (naturally, on different ports) and combine them behind a frontend Apache httpd or nginx that forwards requests to the respective Tomcat. This way, each Tomcat can run as its own OS user but still appear as a single web server on the standard ports 80 and 443.
If you want to authenticate against Active Directory, there is a how-to on the Apache Tomcat site. Note that this determines neither the OS user Tomcat runs as nor the user that accesses the DB; it is only the user who logs in to the web application.
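To see why the single OS user matters for your goal: with the Microsoft JDBC driver, integrated security authenticates as the Windows account the JVM process runs under, so every webapp in one Tomcat would hit SQL Server as that same account. A minimal sketch, assuming the Microsoft JDBC driver and its native authentication library are installed; the host and database names are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IntegratedSecurityCheck {
    public static void main(String[] args) throws Exception {
        // integratedSecurity=true tells the driver to authenticate as the
        // Windows account of the current process -- i.e. the Tomcat OS user.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=mydb;integratedSecurity=true;";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT SUSER_SNAME()")) {
            if (rs.next()) {
                System.out.println("Connected to SQL Server as: " + rs.getString(1));
            }
        }
    }
}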
I know Cloud Run and App Engine are different services.
I need to connect via SSH to an App Engine or Cloud Run instance to execute some process manually.
The reason to use one of these services is that they charge only when I use them, not 24/7.
Is there some way to do that?
Thanks
Short answer: you can't.
In fact, these services are designed to answer HTTP requests, and you pay for them only while an HTTP request is being processed. If you log into an instance over SSH, would you be paying for an HTTP request? If you run a process on the instance, would you be paying for an HTTP request?
Of course not. But cost isn't the main reason. Cloud Run and App Engine can create and destroy instances as they see fit, according to traffic or other factors. It's pointless to log into an instance and run a process when, a few seconds or minutes later, the instance may be deleted and a new one created; you would lose everything you did.
If you use these services, you must accept that the servers are managed by Google and that you can only deploy a service and use it over HTTP. It's not a traditional VM instance; it's "serverless".
That said, if you want to explore the runtime configuration, you can use an HTTP reverse shell. But in the end, it's not very useful...
Context
I code using Codeanywhere, because I work from multiple places with desktop computers and don't want to carry a laptop around.
Currently I have VPSs as environments; since my projects are long-lived, I don't need to rebuild or change the environment for years.
The need:
A few times per month I run shell commands, for example to test Node.js scripts, before moving them to serverless (Cloud Run).
The old approach:
Run these scripts on a working environment, connecting via SSH.
The modern developer way:
Use Codeanywhere containers for code storage and testing, plus a GitLab CI/CD pipeline to deploy automatically to Google Cloud Run (a deployment sketch follows).
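For illustration only, this is roughly the kind of command sequence such a CI/CD job could run; the project ID, service name, and region are placeholders, not from the original post:

# Build the container image, then deploy it to Cloud Run
gcloud builds submit --tag gcr.io/PROJECT_ID/my-service
gcloud run deploy my-service --image gcr.io/PROJECT_ID/my-service \
  --region us-central1 --platform managed --allow-unauthenticated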
Our App Engine app connects to a remote service that requires a VPN and also required me to add entries to the hosts file on my local machine in order to connect to its endpoints.
e.g.
10.200.30.150 foo.bar.com
This is working fine when running the app locally, but I can't figure out how to set this up on Google Cloud to work once deployed.
I can't use the IP addresses directly because it errors out saying the IP is not on the certificate's list.
How do I map the host names to the IPs in Google Cloud so that AppEngine can use them?
From the error mentioned in the comment, I suspect connecting directly through the IP fails because the certificate doesn't recognize the IP-to-DNS mapping as valid, and therefore the secure connection setup breaks. Based on the requirements of connecting to the API over VPN and tweaking the hosts mapping, there are a few things you may try.
The simplest approach that may work would be using a Google Compute Engine VM instance, since there you would be able to manipulate the /etc/hosts file and replicate the local machine setup. This VM could be used either as the main app service or as a proxy from App Engine to the 3rd-party API endpoint. To go that route, I would suggest taking a look at these two posts, which explain how to change the /etc/hosts file on GCE (changing the file once wouldn't work, as the VM periodically overwrites it; see the posts for a cron-job-like workaround, sketched just below).
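As an illustration only, such a workaround could look roughly like this (the IP/hostname pair is the one from the question; the schedule is a placeholder):

# e.g. a line in /etc/crontab: every minute, re-add the mapping if it has been overwritten
* * * * * root grep -q "foo.bar.com" /etc/hosts || echo "10.200.30.150 foo.bar.com" >> /etc/hosts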
Separately, since your app runs in the App Engine flexible environment, there is the option of providing a Docker container with the app packaged. It may be possible to apply the workaround above inside the container and have it work on App Engine too; a sketch follows.
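A rough sketch, assuming an entrypoint script referenced from the image's ENTRYPOINT (the start command is a placeholder); the mapping is appended at container start-up because Docker regenerates /etc/hosts when the container starts, so entries written at build time don't persist:

#!/bin/sh
# entrypoint.sh -- add the host mapping, then start the actual app
echo "10.200.30.150 foo.bar.com" >> /etc/hosts
exec java -jar /app/app.jar   # placeholder start command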
I'm using Google App Engine Standard to write a small application, let's call it AppX.
AppX is supposed to receive a POST message from another website, let's say B, then do some processing and show the result on its main page.
The question is:
I don't know how to use the dev_app server to debug, because if I use the dev_app server it runs locally without HTTPS, and then I don't know how to send a POST message from website B.
Google Cloud Shell is a very limited shell too; it only has a limited set of ports enabled, and for outgoing connections only.
Although there might be a way to configure it, I think the easiest way to test calls from website B to a dev_app would be configuring a GCE virtual machine with a fixed IP. There you can configure the firewall freely, and also not worry about any non-interactive session finishing abruptly. A sketch of the setup follows.
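For illustration, roughly the commands involved; the address, instance, tag, rule, region, and port names are placeholders, not from the original answer:

# Reserve a static external IP, create the VM with it, and open an inbound port
gcloud compute addresses create dev-app-ip --region=us-central1
gcloud compute instances create dev-app-vm --zone=us-central1-a \
  --address=dev-app-ip --tags=dev-app
gcloud compute firewall-rules create allow-dev-app \
  --allow=tcp:8080 --target-tags=dev-app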
I would like to debug my Google App Engine (GAE) app locally but without using localhost. Since my application is made up of microservices, the urls in a production environment would be along the lines of:
https://my-service.myapp.appspot.com/
But code in one service can call another service, and that means the URLs are hardcoded. I could of course use a mechanism in code to determine whether the app is running locally or on GAE and use different URLs, although I don't see how a local URL would handle the service name, since the only way to run an app locally is to use localhost. Hence:
http://localhost:8080/some-service
Notice that "some-service" maps to a servlet, whereas "my-service" is a name assigned to a service when the app is uploaded. These are really two different things.
The only possible solution I was able to find was to use a reverse proxy which would map one url to a different one. Still, it isn't clear whether the GAE development SDK even supports this.
Personally, I chose to detect the local development vs GAE environment and build my inter-service URLs accordingly. I feel it was a worthwhile effort; I've been (re)using it a lot. No reverse proxy or any other additional ops necessary; it just works.
Granted, I'm using Python, so I'm not 100% sure a completely similar Java solution exists. But maybe it can point you in the right direction.
To build the per-service URLs I used modules.get_hostname() (the implementation is presented in Resolve Discovery path on App Engine Module). I believe the Java equivalent would be getInstanceHostname() from com.google.appengine.api.modules.
This method, when executed on the local server, automatically includes the particular port the server listens on for each service.
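A minimal Java sketch of that approach, assuming the App Engine Modules API; the service name, scheme, and lack of error handling are illustrative, not from the answer:

import com.google.appengine.api.modules.ModulesService;
import com.google.appengine.api.modules.ModulesServiceFactory;

public class ServiceUrls {
    // Builds the base URL of another service. On the local dev server the
    // returned hostname already contains the port that service listens on;
    // in production it resolves to a my-service.myapp.appspot.com-style host.
    public static String baseUrl(String service) {
        ModulesService modules = ModulesServiceFactory.getModulesService();
        String version = modules.getDefaultVersion(service);
        String hostname = modules.getVersionHostname(service, version);
        return "http://" + hostname + "/";
    }
}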
BTW, all my services for an app are executed by a single development server process, which listens on multiple ports (this is, I guess, how it can provide the modules.get_hostname() info). See Running multiple services using dev_appserver.py on different ports; a sketch of the invocation follows the links below. This is the part I'm unsure about: if/how the Java local dev server can simultaneously run multiple services. Apparently this used to be supported some time ago (when services were still called modules):
Serving multiple GAE modules from one development server?
GAE modules on development server
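For reference, a sketch of the Python-side invocation (the file paths are placeholders); each YAML file describes one service, and the single dev server process assigns each its own local port:

# One dev_appserver.py process serving several services at once
dev_appserver.py default/app.yaml my-service/service.yaml another-service/service.yaml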
This can be accomplished with the following steps:
Create an entry in the hosts file
Run the App Engine Dev server from a Terminal using certain options
Use IntelliJ with remote debugging to attach to the App Engine Dev server.
To edit the hosts file on a Mac, edit the file /etc/hosts and add the domain that corresponds to your service. Example:
127.0.0.1 my-service.myapp.com
After you save this, you may need to flush the DNS cache or restart your computer for the change to take effect.
Run the App Engine Dev server manually:
dev_appserver.sh --address=0.0.0.0 --jvm_flag=-Xdebug \
  --jvm_flag=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000 \
  [path_to_exploded_war_directory]
In IntelliJ, create a debug configuration. Use the Remote template to create this configuration. Set the host to the domain you set in the hosts file and set the port to 8000.
You can set a breakpoint and run the app in IntelliJ. IntelliJ will attach to the running instance of App Engine Dev server.
Because a port is used during local debugging but no port is needed once the app is deployed to GAE in production, you need to add code that identifies whether the app is running locally or on GAE. This can be done as follows:
import com.google.appengine.api.utils.SystemProperty;

private String mServiceUrl = "my-service.my-app.appspot.com";
...
// On the local dev server (anything other than production), append the local port.
if (SystemProperty.environment.value() != SystemProperty.Environment.Value.Production) {
    mServiceUrl += ":8000";
}
See https://cloud.google.com/appengine/docs/standard/java/tools/using-local-server
An improved solution is to avoid including the port altogether and to avoid using code to determine whether your app is running locally or on the production server. One way to do this is to use Charles (an HTTP proxy application for monitoring and interacting with requests) and its remote mapping feature (Map Remote), which lets you map one URL to another. When enabled, you could map something like:
https://my-service.my-app.appspot.com/
to
https://localhost:8080
You would then enable the option to include the original host, so that it gets delivered to the local dev server. As far as your code is concerned, it only sees:
https://my-service.my-app.appspot.com/
although the IP address will be 127.0.0.1:8080 while remote mapping is enabled. Using HTTPS on localhost, however, does require that you enable SSL certificates in Charles.
For a complete overview on how to setup and debug microservices for a GAE Java app in IntelliJ, see:
https://github.com/JohannBlake/gae-microservices
Question: How can I communicate between two web sites using HTTPS within one Azure Cloud Service deployment?
Details:
I have architected an Azure Cloud Service deployment (one “subdomain.cloudapp.net”) in such a way as to run two separate web sites inside the deployment by using different ports.
Site 1 is what I’m calling a Service Site that is a standard ASP.NET site that hosts a bunch of WCF Services with no HTML or ASPX pages (except for a default.aspx page that redirects to site 2). Site 1 is running on port 80.
Site 2 is my main site and hosts a Silverlight application that uses the services from Site 1 to access the database and process results. It is only accessible by way of HTTPS and uses port 443.
Both of these sites are defined in a Visual Studio Azure solution with two endpoints, one for each port. Further, each endpoint is associated with one of the two web site projects inside the VS solution by way of the ServiceDefinition.csdef configuration file in the Azure project.
I have purchased my security certificate and associated it with a domain name that I am using with Site 2 by way of domain name redirecting and CName mapping at my DNS.
Accessing Site 2 using the mapped domain name over an encrypted connection works well via HTTPS. However, when I try to make a service call internally from Site 2 to Site 1, I go from an HTTPS connection to an HTTP connection, because I don't have an SSL certificate for Site 1. As a result, when Site 2 makes a service call to Site 1, Site 1 serves clientaccesspolicy.xml over plain HTTP, which causes Internet Explorer to display the 'Mixed Content' message (because of the HTTPS/HTTP mix). Bottom line: I need to get rid of this pop-up prompt. I know it can be disabled on the client end, but with over a thousand users I can't depend on them being able to turn the message off. I need to make the message go away on the server side.
Back to the question: is there a way to access Site 1 over SSL from Site 2, all within the one Azure Cloud Service? What it sounds like I need to do is assign an SSL certificate to Site 1. However, I'm hitting two roadblocks. First, I can't assign an SSL certificate to a wildcard.cloudapp.net domain name. Second, I can't assign a second domain name to the Azure Cloud Service, since I can't do domain name forwarding to a specific port (remember that Site 2 is already using domain forwarding anyway).
I could accomplish a solution by breaking the two sites out into their own cloud services, but I would rather not, because it would double my cost. Are there any suggestions on how to make this 'Mixed Content' message go away, either by securing Site 1 or by some other method?