I'm trying to connect to my cluster on Elastic Cloud using elasticsearch-py on GAE, but I'm running into the following error:
ConnectionError: ConnectionError('VerifiedHTTPSConnection' object has no attribute '_tunnel_host') caused by: AttributeError('VerifiedHTTPSConnection' object has no attribute '_tunnel_host')
I've tried the fix below, which I've seen in a number of places that reference the '_tunnel_host' error, but it doesn't resolve my issue:
from requests_toolbelt.adapters import appengine
appengine.monkeypatch()
I've also tried a few variations that I've seen for the es declaration, but none of them have worked; for example:
es = Elasticsearch(["https://elastic:password@xxxxx.us-central1.gcp.cloud.es.io:9243"],
send_get_body_as='POST',
use_ssl=True,
verify_certs=True)
I'd like to be able to establish the connection and begin sending and consuming data from my cluster, but can't find a way to do this. Any help would be much appreciated!
There is an article with an example of a real-world app: Elasticsearch on Google Cloud with Firebase functions.
On the other hand, there is the Google Cloud Marketplace with many Elasticsearch solutions available, for example:
1. You can deploy and configure an Elasticsearch cluster that works with Kubernetes, using Google Click to Deploy containers.
2. Or a complete Elasticsearch solution using virtual machines, provided by Google.
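Also, if you want to keep using elasticsearch-py from GAE, the '_tunnel_host' error usually comes from the client's default urllib3 transport, which doesn't work in the GAE sandbox. A minimal sketch of the commonly suggested workaround - monkeypatching requests and switching to the requests-based transport (endpoint and credentials are placeholders; assumes an elasticsearch-py version where RequestsHttpConnection exists, e.g. 5.x/6.x):

# Patch for the GAE sandbox BEFORE creating the client.
from requests_toolbelt.adapters import appengine
appengine.monkeypatch()

from elasticsearch import Elasticsearch, RequestsHttpConnection

# Placeholders: use your own Elastic Cloud endpoint and credentials.
es = Elasticsearch(
    ["https://xxxxx.us-central1.gcp.cloud.es.io:9243"],
    http_auth=("elastic", "password"),
    connection_class=RequestsHttpConnection,  # requests instead of urllib3
    use_ssl=True,
    verify_certs=True,
)
print(es.info())  # quick smoke test of the connection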
I have a Quarkus application already deployed on Google Cloud Run.
It depends on MySQL, hence there is an instance started on Cloud SQL.
The next step in my deployment process is to add Keycloak. From what I've read, the best option seems to be Google App Engine.
The accepted answer in this question gave me some good insight into what needs to be done ... mostly.
What I did was:
Locally I made a sub-directory in the main project.
In that directory I added the app.yaml and the Dockerfile (as described here for instance).
There I executed the two commands mentioned there: gcloud init and gcloud app deploy.
I had my doubts about this setup, and they were confirmed by the error I eventually got:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The first service (module) you upload to a new application must be the 'default' service (module). Please upload a version of the 'default' service (module) before uploading a version for the 'morph-keycloak-service' service (module).
I understand my setup breaks the overall structure of the project, but I'm not sure how to combine those two applications with the right services.
I understand Keycloak is a stateful application, hence it cannot live on Cloud Run (by the way, the intention is for Keycloak to use the same database instance, shared with the application).
So does anyone know a more sensible setup, or what I can change in mine to fix it?
In short:
The answer really is in reading the error message (thanks @gaefan) - it explains enough about the error itself. So I just commented out the service: my-keycloak-service line in the app.yaml (thus leaving gcloud to implicitly mark it as the default one) and the deployment continued.
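For reference, a minimal sketch of the resulting app.yaml (the runtime settings are illustrative - I deploy from a Dockerfile, for which custom/flex is the usual combination):

runtime: custom
env: flex
# service: my-keycloak-service  <- commented out, so this deploys as the 'default' service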
Eventually Keycloak didn't connect to the database, but if I don't manage to adjust the configuration, that will probably be the subject of a different question.
On the point of project structure and functionality:
First off, thanks @NoCommandLine and @guillaume-blaquiere for your input!
@NoCommandLine the application on Cloud Run is sort of a headless, REST-API-enabled backend. Most of the API calls are secured by Keycloak. A next step in the deployment process would be to port an existing UI (React) client to Firebase Hosting (or to another suitable service - I'm still not completely sure which approach is best), and in order for users to work with this client properly they must first do an SSO through Keycloak.
I'm quite new to GCP and the number and variety of the available options are still overwhelming to me - one must get familiar with the nuances, but I guess that takes time. So I'm still taking suggestions on how to adjust my project structure to better fit the services stack. Thanks!
I have deployed my MERN application to an Alibaba ECS instance. Is there any way to access it in the browser, just like the AWS public DNS? In AWS you use the public DNS to access your deployed application; I am not sure what to use to achieve the same here. The NGINX config is in /etc/nginx/sites-available/default. I am using Ubuntu 18.04.
Surprisingly, I was able to hit the APIs without any issue - the pm2 logs confirm the backend is running.
I am new to cloud deployment. If I have missed anything or if you need more information please let me know. Any help would be highly appreciated.
Maybe you can find a solution in these two articles from the Alibaba Cloud documentation, if I understood your question correctly:
IP addresses of ECS instances within VPCs: https://www.alibabacloud.com/help/doc-detail/25434.htm
Connect to a Linux instance by using a username and password: https://www.alibabacloud.com/help/doc-detail/25434.htm
Hope this helps.
I found the problem. In ECS I hadn't set up the security group rule for HTTP (port 80). When I added the security group rule and tweaked the NGINX configuration a bit, it worked like a charm. Marking my own answer as accepted.
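For anyone hitting the same issue: once port 80 is open, a server block along these lines is the usual MERN setup (the paths and the backend port are assumptions for illustration, not my exact config):

server {
    listen 80 default_server;
    server_name _;

    # Serve the React build as static files (assumed location).
    root /var/www/client/build;
    index index.html;

    location / {
        try_files $uri /index.html;
    }

    # Proxy API calls to the Node backend (assumed to listen on 5000).
    location /api/ {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}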
I am trying to use Google Objectify for Datastore (https://github.com/objectify/objectify). My app is not hosted on GAE, but I still make use of Datastore, so I need to use the remote API. Right now, I use the low level API and connect successfully like this :
// Build low-level gcloud-java Datastore options, picking up the
// application-default credentials file from the home folder.
DatastoreOptions options = DatastoreOptions.builder()
    .projectId("PROJECT_NAME")
    .authCredentials(AuthCredentials.createApplicationDefaults())
    .build();
Datastore client = options.service();
The library used is http://googlecloudplatform.github.io/gcloud-java/0.2.0/index.html. The application-default credentials file for AuthCredentials.createApplicationDefaults() is in my home folder, in development as well as on the server.
In the Objectify docs I did not see any way of specifying the connection like above, and thus no way of telling it to use the credentials file in our home folder. The code I see for Objectify is mostly just Objectify.ofy(), so I see no way with this method of pointing it at the application-default credentials.
Thank you very much.
Use the Google App Engine Remote API:
https://cloud.google.com/appengine/docs/java/tools/remoteapi
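A rough sketch of combining the Remote API with Objectify (Greeting is a placeholder entity class; assumes a recent appengine-remote-api jar, which can reuse the same application-default credentials you already rely on):

import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;
import com.googlecode.objectify.ObjectifyService;
import com.googlecode.objectify.util.Closeable;
import static com.googlecode.objectify.ObjectifyService.ofy;

public class RemoteDatastoreExample {
    public static void main(String[] args) throws Exception {
        // Point the Remote API at the deployed app; credentials come
        // from the application-default credentials file.
        RemoteApiOptions options = new RemoteApiOptions()
                .server("your-app-id.appspot.com", 443)
                .useApplicationDefaultCredential();
        RemoteApiInstaller installer = new RemoteApiInstaller();
        installer.install(options);
        try {
            // Placeholder entity; register before any ofy() call.
            ObjectifyService.register(Greeting.class);
            // Objectify needs an active session outside a servlet filter.
            try (Closeable session = ObjectifyService.begin()) {
                ofy().load().type(Greeting.class).limit(10).list();
            }
        } finally {
            installer.uninstall();
        }
    }
}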
You could try the gcloud-java datastore module.
http://googlecloudplatform.github.io/gcloud-java/0.2.0/index.html
But I encountered some performance issues when running outside of the Google sandbox (on Compute Engine rather than GAE).
I lead a web/mobile project and I still need to know the tools we will be using for development.
We have 6 months' access to IBM Bluemix, and its security check tools, Cloud Foundry, and other features could prove really useful.
However, we don't want to rely on a solution that would trap our project without any possibility of migration if needed.
I looked up on the internet how to export a project from Bluemix as a Docker image, including the elements created with IBM's services. I didn't find anything relevant (I might be bad at googling, but all I can find is "how to export to Bluemix / how to work locally").
Does Bluemix allow exporting the entire project to another host, and does it depend on the services we used in the project?
Thank you in advance.
If you package your application in a container, you can run it on any provider that supports Docker. That could be another cloud, a local datacenter, or your own laptop.
If you are planning to use Bluemix services as part of that application, then you will have two options when moving your application off Bluemix:
1. Keep using the services in Bluemix but connect to them remotely from wherever you're now hosting your application. This will require internet connectivity, and you'll have to hard-code the service credentials into your application (not good practice).
2. Migrate the services as well as the application. This will only be possible for the non-unique services IBM offers, e.g. Redis, Mongo, Elasticsearch, etc. You'll need to refactor your application to accept the new provider of these services.
If your service/app is dockerized and is being hosted as a container on Bluemix, you can pull the container image of your service/app into your own Docker-enabled cloud or local environment. The following steps can be followed (consolidated in the sketch after the list):
1. Install the Bluemix container CLI plugin: https://www.ng.bluemix.net/docs/containers/container_cli_ov.html
2. Do cf ic login using your Bluemix credentials.
3. Check for your images using the cf ic images command.
4. Pull the image into your environment using docker pull <image-registry-url>.
5. Run the container with the required parameters using docker run.
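Put together, the flow looks roughly like this (registry host, namespace, image name, and ports are placeholders):

# Log in to Bluemix and the container service
cf ic login

# List your images to find the registry URL
cf ic images

# Pull the image into your local or other Docker environment
docker pull registry.ng.bluemix.net/<namespace>/<image>:<tag>

# Run it wherever Docker is available (port mapping as an example)
docker run -d -p 8080:8080 registry.ng.bluemix.net/<namespace>/<image>:<tag>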
Hope it helps. Thanks.
I am new to Google App Engine and I have tried to run a demo application called Guestbook to connect to Google Cloud SQL from App Engine, with app-engine-sdk version 1.7.0. But each time I get an error saying "java.lang.IllegalStateException: System property rdbms.driver must be set at com.google.appengine.api.rdbms.dev.LocalRdbmsServiceLocalDriver.registerDriver(LocalRdbmsServiceLocalDriver.java:80)". I double-checked my code and everything looks OK, and I still have no clue where the error is coming from.
Below is a snippet of my connection code :
c = DriverManager.getConnection("jdbc:google:rdbms://my_instance/my_database");
and mysql-connector-java-5.1.21-bin is on the classpath,
and I have enabled Google Cloud SQL in Google App Engine,
and I have set up the Google Cloud SQL instance in App Engine as well, with my database instance, database name, login, and password,
and I am using Eclipse Juno.
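I suspect the local dev server may need the rdbms system properties passed as VM arguments - something like the following (the values are just my guess, based on the old Cloud SQL docs and my local MySQL):

-Drdbms.server=local
-Drdbms.driver=com.mysql.jdbc.Driver
-Drdbms.url=jdbc:mysql://localhost:3306/my_database?user=my_login&password=my_password

but I'm not sure whether that is the right direction.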
I think I have missed something important; so would you please help me if you know what I have missed.
Thank you very much in advance,
Minh
Since you are new to GAE, I recommend giving the Big Table database (Datastore) a try. Using it you will not have to set up any database locally, so the Eclipse plugin alone will be enough for your first test. The Guestbook demo uses this database, so it will be easier to follow the tutorial.
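For a very first test, storing an entity with the low-level Datastore API looks roughly like this (kind and property names follow the Guestbook sample; a sketch to run inside a servlet, not the full demo):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import java.util.Date;

// Inside a servlet handler on the dev server or on App Engine:
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

// Create and save a Greeting entity, the way the Guestbook demo does.
Entity greeting = new Entity("Greeting");
greeting.setProperty("content", "Hello, Datastore!");
greeting.setProperty("date", new Date());
datastore.put(greeting);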