I have deployed my MERN application to an Alibaba Cloud ECS instance. Is there any way to access it in the browser, just like the AWS public DNS? In AWS you use the public DNS to access your deployed application; I am not sure what to use to achieve the same here. Below is the NGINX config present in /etc/nginx/sites-available/default. I am using Ubuntu 18.04.
Surprisingly, I was able to hit the APIs without any issue. You can check the PM2 logs below.
I am new to cloud deployment. If I have missed anything or if you need more information, please let me know. Any help would be highly appreciated.
Maybe you can find a solution in these two articles from the Alibaba Cloud documentation, if I understood your question correctly:
IP addresses of ECS instances within VPCs: https://www.alibabacloud.com/help/doc-detail/25434.htm
Connect to a Linux instance by using a username and password: https://www.alibabacloud.com/help/doc-detail/25434.htm
Hope this helps.
I found the problem: in ECS I hadn't set up a security group rule for HTTP (port 80). Once I added the security group rule and tweaked the NGINX configuration a bit, it worked like a charm. Marking my own answer as accepted.
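For anyone who wants to script the security group change instead of using the console, here is a rough sketch with the Alibaba Cloud Python SDK (aliyun-python-sdk-core plus aliyun-python-sdk-ecs). The credentials, region, and security group ID are placeholders, and the setter names follow the AuthorizeSecurityGroup API parameters, so double-check them against the SDK version you install.

# Sketch: allow HTTP (port 80) into the security group attached to the ECS
# instance. Credentials, region, and the group ID are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkecs.request.v20140526.AuthorizeSecurityGroupRequest import AuthorizeSecurityGroupRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "us-east-1")

request = AuthorizeSecurityGroupRequest()
request.set_SecurityGroupId("sg-xxxxxxxx")   # security group attached to the ECS instance
request.set_IpProtocol("tcp")
request.set_PortRange("80/80")               # HTTP; add a second rule for 443/443 if you serve HTTPS
request.set_SourceCidrIp("0.0.0.0/0")        # let the whole internet reach NGINX

client.do_action_with_exception(request)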
Problem: I'm looking for an agile way to ship a Docker container (stored on gcr.io) to a managed service on GCP:
one Docker container, gcr.io/project/helloworld, with private data (say, a Cloud SQL backend), so it can't face the open internet;
a bunch of IPs I want to expose it to: say ["1.2.3.4", "2.3.4.0/24"].
My ideal platform would be Cloud Run, but GAE works too.
I want to develop in an agile way (say, deploy with 2-3 lines of code). Is it possible to run my service privately and yet super easily? We're not talking about a huge production project; we're talking about playing around and writing a POC you want to share securely over the internet with a few friends, making sure the rest of the world gets a 403.
What I've tried so far.
The only thing that works easily is a GCE VM with a Docker-friendly OS (like COS) where I can set up firewall rules. This works, but it's a lame Docker app on a disposable VM: the machine runs forever, and the container dies at reboot unless I stabilize it with a cron/startup script. It looks like I'm doing somebody else's job.
Everything else I've tried so far failed:
Cloud Run. Amazing, but I can't set up firewall rules on it (or Cloud Director, ...); it seems to work only with IAP, which is painful to set up.
GAE. It works with multiple IPs, but I can't detach the public IPs or firewall it. I managed to do the IP filtering within the app (see the sketch below this question), but that seems a bit risky; I don't [want to] trust my coding skills :)
Cloud Armor. It only supports an HTTPS Load Balancer, which I don't have, nor do I have MIGs to point it to. I want simplicity.
Traffic Director also needs an HTTP L7 load balancer. But I have one Docker container on a single pod; why do I need an LB?
GKE. Actually, this seems to work [1], but it's not fully managed (I need to create the cluster, the pods, ...).
Is this a product deficiency or am I looking at the wrong products? What's the simplest way to achieve what I want?
[1] how do I add a firewall rule to a gke service?
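As an aside on the "IP filtering within the app" idea from the GAE bullet above, here is a minimal sketch of what that can look like, assuming a Flask app behind GAE/Cloud Run (where the original client IP arrives in X-Forwarded-For rather than request.remote_addr). The allow-list reuses the example ranges from the question; everything else is illustrative.

# Sketch: reject every request whose client IP is outside the allow-list.
import ipaddress
from flask import Flask, request, abort

ALLOWED = [ipaddress.ip_network(n) for n in ("1.2.3.4/32", "2.3.4.0/24")]

app = Flask(__name__)

@app.before_request
def reject_unknown_ips():
    # Behind GAE/Cloud Run the caller's address is the first X-Forwarded-For entry.
    forwarded = request.headers.get("X-Forwarded-For", request.remote_addr or "")
    client_ip = forwarded.split(",")[0].strip()
    try:
        addr = ipaddress.ip_address(client_ip)
    except ValueError:
        abort(403)
    if not any(addr in net for net in ALLOWED):
        abort(403)  # the rest of the world gets the 403 the question asks for

@app.route("/")
def hello():
    return "hello, allow-listed world"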
Please limit your question to one service. Not everyone is an expert on all Google Cloud services. You will have a better chance of a good answer for each service if they are separate questions.
In summary, if you want to control IP-based access with firewall rules (Google Cloud's counterpart to security groups), you need to use a service that runs on Compute Engine, as firewall rules are part of the VPC feature set. App Engine Standard and Cloud Run do not run within your project's VPC. This leaves you with App Engine Flex, Compute Engine, and Kubernetes.
I would change strategies and use Cloud Run (fully managed) with authentication required. Access is then controlled by Google Cloud IAM via OAuth identity tokens.
Cloud Run Authentication Overview
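To give an idea of what the caller side looks like, here is a minimal sketch using the google-auth library to mint an ID token and call an authenticated Cloud Run service. The service URL is a placeholder, and the caller's identity (a service account via application default credentials) needs roles/run.invoker on the service.

# Sketch: invoke a Cloud Run service that requires authentication.
import requests
import google.auth.transport.requests
from google.oauth2 import id_token

SERVICE_URL = "https://helloworld-xxxxxxxx-uc.a.run.app"  # placeholder

auth_request = google.auth.transport.requests.Request()
token = id_token.fetch_id_token(auth_request, SERVICE_URL)  # audience = service URL

response = requests.get(SERVICE_URL, headers={"Authorization": f"Bearer {token}"})
print(response.status_code, response.text)

For quick manual tests from a shell, gcloud auth print-identity-token produces the same kind of token.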
I agree with John Hanley's reply and I have up-voted his answer.
Also, I understand that you are looking for a way to restrict access to your service through GCP.
By setting a firewall rule, you can limit access to your service by restricting the allowed source IP range, so that only those addresses are accepted as the source IP.
Please review another thread on Server Fault [1], which explains how to "Restrict access to a single IP only".
https://serverfault.com/questions/901364/restrict-access-to-single-ip-only
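For completeness, here is a sketch of creating such a rule with the google-cloud-compute client; note that, per the earlier answer, this only protects VM-based targets (Compute Engine, GKE nodes), not fully managed Cloud Run or App Engine Standard. The project, network, and tag names are placeholders.

# Sketch: ingress rule that lets only the two example ranges from the question
# reach port 80 on instances tagged "poc".
from google.cloud import compute_v1

firewall = compute_v1.Firewall()
firewall.name = "allow-poc-from-friends"
firewall.network = "global/networks/default"
firewall.direction = "INGRESS"
firewall.source_ranges = ["1.2.3.4/32", "2.3.4.0/24"]
firewall.target_tags = ["poc"]
firewall.allowed = [compute_v1.Allowed(I_p_protocol="tcp", ports=["80"])]

client = compute_v1.FirewallsClient()
operation = client.insert(project="my-project", firewall_resource=firewall)
operation.result()  # wait for the rule to be created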
You can do this quite easily with a serverless NEG for Cloud Run or GAE.
If you're doing this in Terraform, you can follow this article.
The error I'm getting is BETTING_RESTRICTED_LOCATION, but when I run my app locally, using a London location via VPN, I am able to log in perfectly.
Is there a way I can ensure that the app is running from places where betting is legal?
There is another question like this, but it's very old and doesn't help me.
Google Cloud Platform's IPs share the same geolocation (US), and it could be that connections from that part of the world are not allowed for your bot. If this is the issue, there isn't any available solution within GCP just yet. You can follow this feature request, or in the meantime just point the requests to an on-prem service hosted in London that acts as a proxy.
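If you go the proxy route, the change on the application side is small. Here is a sketch, assuming a Python client using requests; the proxy address and the API URL are placeholders.

# Sketch: send the betting API calls through a forward proxy hosted in London,
# so the provider sees a UK source IP instead of Google's US-geolocated ranges.
import requests

LONDON_PROXY = "http://proxy.example.co.uk:3128"  # placeholder for your London host
proxies = {"http": LONDON_PROXY, "https": LONDON_PROXY}

response = requests.post(
    "https://api.example-bookmaker.com/v1/login",  # placeholder API endpoint
    json={"username": "...", "password": "..."},
    proxies=proxies,
    timeout=10,
)
print(response.status_code)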
How do I host my dynamic Angular web app on an AWS EC2 instance and access it through the browser? Adding custom inbound rules didn't work for me; maybe I have done it wrong. Can someone explain the process to be followed?
Assuming that you have configured a web server (Nginx, Apache, etc.) correctly on your EC2 instance on a certain port (e.g. port 80), check your EC2 security group configuration.
In the inbound rules, allow HTTP on that port from anywhere; you might also set SSH to be accessible only from your IP rather than from Anywhere.
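If you prefer scripting the inbound rules over clicking through the console, here is a sketch with boto3; the security group ID and the /32 address are placeholders.

# Sketch: open HTTP (80) to the world and restrict SSH (22) to your own IP on
# the instance's security group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder group ID
    IpPermissions=[
        {  # the Angular app served by Nginx/Apache
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
        },
        {  # SSH restricted to your own IP, as suggested above
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "SSH from my IP only"}],
        },
    ],
)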
As the title says, I managed to get my Cloud SQL instance connected to my Flask app via SQLAlchemy using Google's Cloud SQL Proxy, but once I deploy the app, it can't connect.
I'm figuring it's an issue with the database URI I'm providing (since the two connection methods require different URIs), but for the life of me, I cannot figure out what the string should be.
When connecting locally through the proxy, I have this in my code:
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://root:MYPASSWORD@127.0.0.1:3307/inkdb?unix_socket=/cloudsql/MYPROJECTID:us-east1:MYDBINSTANCEID'
and I get it to connect and work fine. Following the online documentation, I tried connecting with a number of different URIs, including:
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root@/inkdb?unix_socket=/cloudsql/MYPROJECTID:us-east1:MYDBINSTANCEID'
(I could try listing all the URIs I tried, but it would be a long list).
Is there something fundamental I'm missing? I'm pretty sure I have included all my 3rd party packages as required, but I just can't get anything out of it. I've looked all over the internet (with special attention to SE, of course), but nothing seems to work.
I'd appreciate any help, thanks!
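For comparison, the unix-socket form that the Cloud SQL documentation describes for App Engine looks roughly like the following; all values are the same placeholders used in the question, so this is only a sketch.

# Unix-socket form documented for App Engine: user and password before the
# "@", no host/port at all, database name after the "/", and the instance
# connection name in the unix_socket query parameter (placeholders throughout).
app.config['SQLALCHEMY_DATABASE_URI'] = (
    'mysql+pymysql://root:MYPASSWORD@/inkdb'
    '?unix_socket=/cloudsql/MYPROJECTID:us-east1:MYDBINSTANCEID'
)
# Locally, through the Cloud SQL Proxy listening on 127.0.0.1:3307, drop the
# unix_socket parameter and use the host/port pair instead:
# 'mysql+pymysql://root:MYPASSWORD@127.0.0.1:3307/inkdb'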
I lead a web/mobile project and I still need to choose the tools we will be using for development.
We have 6 months of access to IBM Bluemix, and its security check tools, Cloud Foundry, and other features could prove really useful.
However, we don't want to rely on a solution that would trap our project without any possibility of migration if needed.
I looked on the internet for how to export a project from Bluemix as a Docker image, including elements created with IBM services. I didn't find anything relevant (I might be bad at googling, but all I can find is "how to export to Bluemix" / "how to work locally").
Does Bluemix allow exporting the entire project to another host, or does it depend on the services we used in the project?
Thank you in advance.
If you package your application in a container you can run it on any provider that supports Docker. That could be another cloud, a local datacenter, or your own laptop.
If you are planning to use Bluemix services as part of that application, then you will have two options when moving your application off Bluemix.
Keep using the services in Bluemix but connect to them remotely from wherever you're now hosting your application. This will require internet connectivity and you'll have to hard-code the service credentials into your application (not good practice).
Migrate the services as well as the application. This will only be possible for the non-unique services IBM offers, e.g. Redis, Mongo, Elasticsearch, etc. You'll need to refactor your application to accept the new provider of these services.
If your service/app is dockerized and is being hosted as a container on Bluemix, you can pull its container image into your own Docker-enabled cloud or local environment. The following steps can be followed:
install the Bluemix container CLI plug-in: https://www.ng.bluemix.net/docs/containers/container_cli_ov.html
log in with cf ic login using your Bluemix credentials
list your images using the cf ic images command
pull the image into your environment using docker pull <image-registry-url>
run the container with the required parameters using docker run
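If you would rather script those last two steps, here is a sketch with the Docker SDK for Python; it assumes cf ic login (or docker login) has already stored credentials for the Bluemix registry, and the image name, port, and environment values are placeholders.

# Sketch: pull the image from the Bluemix registry and run it locally with the
# Docker SDK for Python (the local Docker daemon must already be authenticated).
import docker

client = docker.from_env()

client.images.pull("registry.ng.bluemix.net/my_namespace/my_app", tag="latest")

container = client.containers.run(
    "registry.ng.bluemix.net/my_namespace/my_app:latest",
    detach=True,
    ports={"8080/tcp": 8080},          # map the app port to the host
    environment={"APP_ENV": "local"},  # whatever parameters the app needs
)
print(container.id)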
Hope it helps. Thanks.