Configurable port in $resource with spring-boot - angularjs

Prerequisites
spring-boot 1.3.1.RELEASE
TypeScript 1.8.9
AngularJS 1.5.3
Zuul
I have an API gateway which hides a set of microservices.
This API gateway uses Zuul to map URLs to the different microservices:
zuul:
  routes:
    service1:
      path: /service1/**
      serviceId: FIRST-MICROSERVICE
    service2:
      path: /service2/**
      serviceId: SECOND-MICROSERVICE
    service3:
      path: /service3/**
      serviceId: THIRD-MICROSERVICE
Question
I want to be able to start the api gateway on different ports with spring-boot like this:
java -jar -Dserver.port=8081 myspringbootapplication.jar
The host should always be the host from which the AngularJS application was delivered.
The problem is that the port depends on the port on which Spring Boot started the Tomcat server.
At the moment http://localhost:9001 is hardcoded:
.factory('app.core.services.myApiResource', ['$resource', '$http',
    ($resource: ng.resource.IResourceService, $http: ng.IHttpService): ng.resource.IResourceClass<ng.resource.IResource<any>> => {
        var apiRoot: ng.resource.IResourceClass<ng.resource.IResource<any>>
            = $resource('http://localhost:9001/service1/myentity/search/:finderName');
        return apiRoot;
    }
])
In bootstrap.yml the default port is 9001
# Default ports
server.port: 9001
Is there a way to set the port value in the AngularJS $resource to the Spring Boot server.port value?
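One low-tech direction (a sketch, not from the source; buildApiRoot is a hypothetical helper): stop hardcoding the origin and derive it from the page's own location, since the AngularJS app is delivered by the same gateway. In the factory you could pass window.location, or simply use a relative URL such as /service1/..., which the browser resolves against the serving host and port automatically.

```javascript
// Hypothetical helper: build the API root from the page's own location,
// so the port follows whatever -Dserver.port the gateway was started with.
function buildApiRoot(loc, servicePath) {
  // loc mimics window.location: protocol is e.g. 'http:', host is 'host:port'
  return loc.protocol + '//' + loc.host + servicePath;
}

// Gateway started with -Dserver.port=8081:
var root = buildApiRoot({ protocol: 'http:', host: 'localhost:8081' },
                        '/service1/myentity/search/:finderName');
// root === 'http://localhost:8081/service1/myentity/search/:finderName'
```

In the factory this would become $resource(buildApiRoot(window.location, '/service1/myentity/search/:finderName')), with no port baked in.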

Related

How to send request from react app inside nginx which is inside docker container to golang webservice which is inside docker container on same AWS EC2

Currently I have two containers inside an AWS EC2 instance. One of them is a React app; it uses nginx. This app should send requests to a Go web service which is inside another container; they are both in the same EC2 instance. When I open a browser and go to the EC2 instance's public IP address, I can see that my React app is working. When I try to send a request to the Go web service, it fails with "(failed) net::ERR_CONNECTION_REFUSED". I am able to make the curl request inside the EC2 instance and receive a response. How can I do the same from the React request?
Here is my axios post:
axios.post('http://localhost:9234/todos', { "name": todo, "completed": false })
  .then((res) => {
    console.log(res)
    if (res.data.todo.name === todo) {
      setTodos([...todos, todo]);
    }
  }).catch((err) => { console.log(err) });
Here is my request with curl
curl -d '{"id":9,"name":"baeldung"}' -H 'Content-Type: application/json' http://localhost:9234/todos
Thanks for the help.
From your details it seems likely that one or more of the following is true:
you do not have port 9234 forwarded to the container
you do not have port 9234 open in the EC2 instance's security group
Furthermore, as @Jaroslaw points out, localhost is from the perspective of the browser. That localhost should instead be the IP of the EC2 instance, or a DNS name that resolves to that IP.
To be clear, the React web app doesn't run on the EC2 instance. Its static assets, such as HTML and JavaScript, get served to the browser, and it runs there.
As @Daniel said, the JavaScript gets served to the browser and runs there. So when your browser requests the address localhost, it actually means your computer's localhost. To access the Go server you need to forward port 9234 from the Docker container:
services:
  web:
    ports:
      - "9234:9234"
You also need to open port 9234 in the firewall (security group) of your EC2 instance; then you can access your Go server from the browser using the public address of the instance:
axios.post('http://{{public_address_of_Ec2}}:9234/todos', { "name": todo, "completed": false })
  .then((res) => {
    console.log(res)
    if (res.data.todo.name === todo) {
      setTodos([...todos, todo]);
    }
  }).catch((err) => { console.log(err) });
Exposing the port like this is not recommended, though. You can instead use nginx to listen on port 80 and forward the requests to your Go server. Here is YAML you can add to your docker-compose file to use nginx:
nginx:
  image: nginx:latest
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
  depends_on:
    - web # your web app service name
  ports:
    - "80:80"
  networks:
    - "web.network" # your existing network name
And the nginx conf should be:
user nginx;

# can handle 1000 concurrent connections
events {
    worker_connections 1000;
}

# forwards http requests
http {
    # http server
    server {
        # listen for requests coming in on port 80
        listen 80;
        access_log off;

        # / means all requests are forwarded to the web service
        location / {
            # resolves the IP of web using Docker's internal DNS
            proxy_pass http://web:9234;
        }
    }
}
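Once nginx is proxying port 80, the React app no longer needs a hardcoded origin at all: axios can be given a relative path like /todos, which the browser resolves against whatever host served the page. A sketch of that resolution, using Node's WHATWG URL class, which mirrors the browser's behavior (resolveApiUrl is a hypothetical helper, and the EC2 hostname below is just an example):

```javascript
// Hypothetical helper: resolve an API path against the page's origin,
// the same way the browser resolves a relative axios URL.
function resolveApiUrl(pageOrigin, path) {
  // URL is a Node global (and a browser global); the second argument is the base
  return new URL(path, pageOrigin).toString();
}

// Page served from the EC2 public address:
resolveApiUrl('http://ec2-12-34-56-78.compute-1.amazonaws.com', '/todos');
// -> 'http://ec2-12-34-56-78.compute-1.amazonaws.com/todos'
```

So axios.post('/todos', ...) works unchanged whether the page came from localhost during development or from the EC2 public address in production.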

Websocket disconnects immediately after handshake (guacamole)

Forgive any bad formatting, as this is my first question on here, and thanks in advance for reading!
I am currently writing a remote access web application that utilises Apache Guacamole to allow RDP, VNC, and SSH connections. The components I am using are:
Django for the backend server: API calls (database info) and Guacamole WebSocket transmissions;
PyGuacamole with Django consumers to handle communication with the Guacamole server;
React for the frontend and proxy;
Nginx for the reverse proxy.
All of this is hosted on a CentOS Stream 8 VM.
Basically, my WebSocket has trouble communicating through a proxy. When I run the application without a proxy (Firefox in CentOS opening localhost:3000 directly), the Guacamole connection works, though in that case the application communicates directly with the Django server on port 8000. What I want is for the React application to proxy WebSocket communications to port 8000 for me, so my nginx proxy only has to deal with port 3000 in production.
Here is the code I have tried for my react proxy (src/setupProxy.js):
const { createProxyMiddleware } = require('http-proxy-middleware');

let proxy_location = '';

module.exports = function(app) {
    app.use(createProxyMiddleware('/api', { target: 'http://localhost:8000', changeOrigin: true, logLevel: "debug" }));
    app.use(createProxyMiddleware('/ws', { target: 'ws://localhost:8000' + proxy_location, ws: true, changeOrigin: true, logLevel: "debug" }));
};
I have also already tried http://localhost:8000 as the ws target URL. The api proxy works, but I am unsure whether the ws proxy does: after making a WebSocket request, the consumer performs the Guacamole handshake, but the WebSocket disconnects before it can send anything back.
Also, the HPM output shows that it does try upgrading to websocket, but the client disconnects immediately.
Do let me know if you require more information.
I managed to find what was wrong; it was a small mistake, but I felt the need to update this thread.
Basically, in my consumers I used accept() instead of websocket_accept(), receive() instead of websocket_receive(), and so on. A careless mistake on my part, but I hope this helps someone out!

Eureka Server with Google app engine (hostname problem)

I am building an MSA using Eureka, Zuul, and Google App Engine standard.
The problem is that Zuul routing works normally in the local environment, but not in the GAE environment.
If I look at the Eureka page, I can see the registered services,
but the href link in the status column is "192.168.1.1:8080/info".
192.168.1.1 is a private IP address, so it can't be accessed.
The methods I tried:
# Eureka standalone server
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: https://my-eureka-server.appspot.com/eureka/
# Eureka client
eureka:
  instance:
    prefer-ip-address: true
  client:
    serviceUrl:
      defaultZone: https://my-eureka-server.appspot.com/eureka/
This results in http://192.168.1.1:8080/info.
# Eureka client
eureka:
  instance:
    hostname: my-eureka-client.appspot.com
  client:
    serviceUrl:
      defaultZone: https://my-eureka-server.appspot.com/eureka/
This results in http://my-eureka-client.appspot.com:8080/info, which also can't be accessed.
I want http://my-eureka-client.appspot.com/info.
What is the hostname?
In the local environment, if the Eureka hostname is localhost or not specified,
the href link is http://localhost:8080/info or http://MY-DESKTOP-ID:8080/info,
and it can be accessed.
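One direction worth trying (an assumption on my part, not a confirmed fix): GAE's frontend serves the app on port 80/443, so besides the hostname the instance also needs to advertise that external port instead of the container's internal 8080. Spring Cloud Netflix exposes eureka.instance.non-secure-port (and secure-port) for this; a sketch:

```yaml
# Sketch (assumption): advertise the GAE front-end host and port,
# not the container's internal 8080
eureka:
  instance:
    hostname: my-eureka-client.appspot.com
    non-secure-port: 80
```

With that, the status link should be built as http://my-eureka-client.appspot.com/info rather than ...:8080/info.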

Proxy outbound API calls from Google App Engine via Google Compute Engine running Squid

I'm trying to proxy outbound API calls made from a Google App Engine application via a Google Compute Engine VM instance running Squid proxy server.
The aim is that the REST API calls will all be made from a static IP address, so that the 3rd-party API will be able to identify and permit the calls via their firewall.
I have read and followed the instructions on this post:
connect Google App Engine and Google Compute Engine
I have managed to do the following so far:
Created a Google cloud compute VM and successfully assigned it a static external IP address.
Created a Serverless VPC access connector successfully (all resources are located in the same GAE region).
Added the vpc_access_connector name to my app.yaml in the Google App Engine project (which runs on Node.js).
Deployed the app using gcloud beta, with api calls being targeted towards the internal IP address of the proxy server, using the correct Squid default port (3128).
On issuing a request from the GAE app, I can see from the server logs that the correct IP address and port are being attempted, but I get the following error: "Error: tunneling socket could not be established, cause=connect ECONNREFUSED [my-internal-ip-address]:3128"
I've also tried running a curl command from the cloud shell interface, but the request times out every time.
If anyone could help solve this issue, I will be very grateful.
Here is a possible example of how to proxy outbound HTTP requests from an App Engine Standard application on NodeJS runtime via a Compute Engine VM running Squid, based on a slight modification of the available Google Cloud Platform documentation 1 2 and Quickstarts 3.
1. Create a Serverless VPC Access connector: Basically follow 2 to create the connector. After updating the gcloud components and enabling the Serverless VPC Access API on your project, running the following command should suffice:
gcloud compute networks vpc-access connectors create [CONNECTOR_NAME] \
--network [VPC_NETWORK] \
--region [REGION] \
--range [IP_RANGE]
2. Create a Compute Engine VM to use as proxy: Basically follow 1 to set up a Squid proxy server:
a. Reserve a static external IP address and assign it to a Compute Engine VM.
b. Add a Firewall rule to allow traffic on Squid's default port: 3128. This command should work if you are using the default VPC network: gcloud compute firewall-rules create [FIREWALL_RULE_NAME] --network default --allow tcp:3128
c. Install Squid on the VM with the following command: sudo apt-get install squid3.
d. Enable the acl localnet src entries in the Squid config file for the VPC Access connector:
sudo sed -i 's:#\(http_access allow localnet\):\1:' /etc/squid/squid.conf
sudo sed -i 's:#\(acl localnet src [IP_RANGE]/28.*\):\1:' /etc/squid/squid.conf
For example: if you used 10.8.0.0 as the value for the [IP_RANGE] field when creating the connector, it should look something like sudo sed -i 's:#\(acl localnet src 10.8.0.0/28.*\):\1:' /etc/squid/squid.conf
e. Start the server with sudo service squid start.
3. Modifications to the App Engine application: Based on the Quickstart for Node.js, modify the following files to create an application that crawls a webpage using the request-promise library and displays the webpage's HTML. The request is sent through the VPC Access connector and the VM as a proxy via the modifications to the app.yaml and app.js files.
a. package.json
...
  "test": "mocha --exit test/*.test.js"
},
"dependencies": {
  "express": "^4.16.3",
  "request": "^2.88.0",
  "request-promise": "^4.2.5"
},
"devDependencies": {
  "mocha": "^7.0.0",
...
b. app.js
'use strict';

// [START gae_node_request_example]
const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res
    .status(200)
    .send('Hello, world!')
    .end();
});

// Add a handler to test the web crawler
app.get('/test', (req, res) => {
  var request = require('request-promise');
  request('http://www.input-your-awesome-website.com')
    .then(function (htmlString) {
      res.send(htmlString)
        .end();
    })
    .catch(function (err) {
      res.send("Crawling Failed...")
        .end();
    });
});

// Start the server
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`App listening on port ${PORT}`);
  console.log('Press Ctrl+C to quit.');
});
// [END gae_node_request_example]
c. app.yaml
runtime: nodejs10

vpc_access_connector:
  name: "projects/[PROJECT]/locations/[REGION]/connectors/[CONNECTOR_NAME]"

env_variables:
  HTTP_PROXY: "http://[Compute-Engine-IP-Address]:3128"
  HTTPS_PROXY: "http://[Compute-Engine-IP-Address]:3128"
Each time you go to the /test handler, verify that the requests go through the proxy by running the sudo tail -f /var/log/squid/access.log command on the VM and checking the changes in the log.
Notes: the connector, application, and VM need to be in the same region to work, and these are the supported regions for the connector.

Trying to connect to laravel backend with grunt-connect-proxy

I am finding myself pretty stuck using grunt-connect-proxy to make calls from my Yeoman-generated Angular app running on port 9000 to my Laravel backend which is running on port 8000. After following the instructions on the grunt-connect-proxy GitHub page, I see the following message upon running grunt serve:
Running "configureProxies:server" (configureProxies) task
Proxy created for: /api to localhost:8000
I have my proxies set up in connect.proxies, directly following connect.options:
proxies: [{
    context: '/api',   // the context of the data service
    host: 'localhost', // wherever the data service is running
    port: 8000         // the port that the data service is running on
}],
In my controller I then attempt to make a call to the API to test my proxy:
var Proxy = $resource('/api/v1/purchase');
Proxy.get(function(test){
    console.log(test);
});
The result of this in my console is a 500 error indicating that the call was still made to port 9000 rather than 8000:
http://localhost:9000/api/v1/purchase 500 (Internal Server Error)
Here is a link to a gist containing my full gruntfile: https://gist.github.com/JohnBueno/7d48027f739cc91e0b79
I have seen quite a few posts on this but so far none of them have been of much help to me.
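A common cause of this symptom (an assumption here, since the full Gruntfile is only in the linked gist) is that configureProxies created the proxy but the forwarder was never added to connect's middleware stack, so matching requests still hit the static server on 9000. The grunt-connect-proxy README wires it in roughly like this (a Gruntfile fragment sketch; the livereload target name is assumed from the generator's default layout):

```javascript
// Gruntfile.js (fragment): add grunt-connect-proxy's request forwarder
// to the connect middleware stack so /api requests actually get proxied to :8000
connect: {
    options: { port: 9000, hostname: 'localhost' },
    proxies: [{ context: '/api', host: 'localhost', port: 8000 }],
    livereload: {
        options: {
            middleware: function (connect, options, middlewares) {
                var proxyRequest = require('grunt-connect-proxy/lib/utils').proxyRequest;
                // the proxy must run before the static file servers
                middlewares.unshift(proxyRequest);
                return middlewares;
            }
        }
    }
}
```

Without the proxyRequest middleware, "Proxy created for: /api" in the task output only means the proxy object exists, not that any request is routed through it.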
