GCloud AppEngine Cannot ssh into standard instance - google-app-engine

Trying to SSH into an App Engine instance using the CLI. I have the right command, but I'm not sure how to allow SSH for a standard instance, or whether this is possible at all. I am new to GCloud (AWS guy). Their documentation is not that good about instance types and what is or isn't allowed. Anyone have any pointers on this? Thanks in advance!

No, it is not possible to access (via SSH or otherwise) a standard environment instance.
Only the flexible environment instances are accessible, see Connecting to the instance.
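For a flexible environment instance, the gcloud CLI provides a helper command; a quick sketch (the service, version, and instance names here are illustrative):
gcloud app instances list
gcloud app instances ssh my-instance --service=default --version=v1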

I had issues like this and fixed them by configuring the resources in the yaml file. Increasing the number of CPUs and the memory may help.
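Note that the resources block only applies to the flexible environment; a minimal app.yaml sketch (the values here are just examples):
env: flex
resources:
  cpu: 2
  memory_gb: 4
  disk_size_gb: 10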

Related

Can I develop in Node.js for local and server at the same time?

So, I guess the language doesn't really matter, but the thing is, I have a file called database.js in a folder. Its code must be different depending on whether I am programming in my local environment or the code is deployed to the server (Heroku in my case).
For the local environment, I have a PostgreSQL database, which is different from the one I have as an add-on on the Heroku server. That's why the code is different. The connection to the database is the 'problem'.
What I do now to solve this is comment out the server code, test locally, and when I have something usable, comment out the local code, uncomment the server code, and deploy to the server.
Here's the code
https://github.com/MauroOyhanart/ControlDeGastosIngresos/blob/main/util/database.js
What solutions can I find? I'm pretty flexible. I'm really new to web programming and I'm looking for tools.
You're probably going to want to use some sort of configuration / environment variables. Most if not all hosting services offer a solution, but since you mentioned Heroku, I know they support config variables. A bunch of information about them in different languages, and how to add them to Heroku, can be found here.
Just a quick summary: in Node.js you are going to want to use process.env to access environment variables. For example (using fictional functions, because they aren't the important part here):
db.connect(process.env.DB_HOST, process.env.DB_PORT);
db.login(process.env.DB_USERNAME, process.env.DB_PASSWORD);
Again, look at the link above for the actually useful bits with Heroku. This should do the trick for you.
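To make that concrete, here is a minimal database.js sketch that works in both environments, assuming the pg driver and Heroku's DATABASE_URL config var (which the Heroku Postgres add-on sets automatically; the local settings are illustrative):
// database.js: one connection module for both environments (sketch)
const { Pool } = require('pg');

// On Heroku, DATABASE_URL is set by the Postgres add-on;
// locally it is unset, so fall back to the local database.
const pool = process.env.DATABASE_URL
  ? new Pool({
      connectionString: process.env.DATABASE_URL,
      ssl: { rejectUnauthorized: false }, // commonly required by Heroku Postgres
    })
  : new Pool({ host: 'localhost', port: 5432, database: 'my_local_db' });

module.exports = pool;
Locally you can also keep the variables in a .env file loaded with the dotenv package, and on Heroku set them with heroku config:set KEY=value, so the same code runs unchanged in both places.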

GAE could not set min_instances for automatic scaling

I'm trying to configure my GAE project to use min_instances=0 when using the automatic scaling option.
I followed all the steps in the docs, but after clicking "EXECUTE" I received a Bad Request error:
The error says "This field is not supported for VM versions", but I'm using GAE only.
Also, during the first execution the service asked me for some authorization, which I granted.
Is there some way to fix this? I could not find any explanation how to fix this issue.
@GonçaloAlbino observed that I was using the Flexible environment instead of Standard, so I'm able to use automaticScaling.min_total_instances.
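For reference, the app.yaml spelling differs between the two environments; a sketch from memory of the docs, so double-check the field names there:
# Standard environment (can scale to zero)
automatic_scaling:
  min_instances: 0

# Flexible environment (minimum is 1); the Admin API names this
# field automaticScaling.minTotalInstances
automatic_scaling:
  min_num_instances: 1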

Is it possible to restore a datomic backup to a local dynamodb

I'm trying to diagnose some performance issues, so I have the Datomic transactor running locally, backed by a local instance of DynamoDB. What I can't figure out is how to populate it from a backup of our primary Datomic environment. I know the basic command is:
>datomic restore-db s3://<BUCKET> datomic:ddb://<REGION>/<DB-NAME>
but how do I tell Datomic to use the local DynamoDB? It seems to only accept the valid AWS regions for REGION. I've also tried using datomic:ddb-local as the protocol, but no luck there either.
How do I form the target URI? Or is this even possible?
You should be able to use a ddb-local URI as indicated here: http://docs.datomic.com/storage.html#dynamodb-local
It will be something like: datomic:ddb-local://localhost:8000/my-table/my-db-name?aws_access_key_id=ABC&aws_secret_key=DEF, assuming you're running ddb-local at localhost on port 8000.
Note that the ddb-local protocol does require an access key and secret, even though they are ignored.
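Combining that with the restore command from the question, the full invocation would look something like this (the table and database names are placeholders):
datomic restore-db s3://<BUCKET> "datomic:ddb-local://localhost:8000/<TABLE>/<DB-NAME>?aws_access_key_id=ABC&aws_secret_key=DEF"
The quotes keep the shell from interpreting the & in the URI.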
Best,
Marshall

Embedded Solr on Amazon AWS

Currently, I have developed a web application. In my web application, I used the embedded Solr server for indexing. After that I deployed it onto Tomcat 6 on Windows XP. Everything was OK. Next, I tried to deploy my web application on Amazon AWS. My platform is Linux + MySQL. When I deployed, I got an exception related to embedded Solr.
[ WARN] 19:50:55 SolrCore - [] Solr index directory 'solrhome/./data/index' doesn't exist. Creating new index...
[ERROR] 19:50:55 CoreContainer - java.lang.RuntimeException: java.io.IOException: Cannot create directory: /usr/share/tomcat6/solrhome/./data/index
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:403)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:552)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:480)
So how do I fix my problem? I am a novice to Linux.
My guess is that the user you are running Solr under does not have permission to access that directory.
Also, which version of Solr are you using? It looks like 3+. The latest version is 4, so it may make sense to try using that from the start. Probably a bit more troubleshooting to start with, but a much better payoff than starting with a legacy configuration.
I got the solution. It was a permissions issue on Amazon Linux with the ec2-user. So I changed the permissions as follows:
sudo chmod -R ugo+rw /usr/share/tomcat6
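A more targeted alternative would be to give ownership of just the Solr home to the user Tomcat runs as, instead of opening the whole tree to everyone (assuming the Tomcat 6 package runs as a tomcat user; check with ps if unsure):
sudo chown -R tomcat:tomcat /usr/share/tomcat6/solrhome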
http://wiki.apache.org/solr/SolrOnAmazonEC2
It should allow access to ports 22 and 8983 for the IP you're working from, with routing prefix /32 (e.g., 4.2.2.1/32). This will limit access to your current machine. If you want wider access to the instance available to collaborate with others, you can specify that, but make sure you only allow as much access as needed. A Solr instance should not be exposed to general Internet traffic. If you need help figuring out what your IP is, you can always use whatismyip.com. Please note that production security on AWS is a wide-ranging topic and is beyond the scope of this tutorial.

Solr : replication options

I've got a Solr instance running behind a firewall. I'm about to put up another instance which will not be firewalled. However, Solr appears to only support pull replication and not push replication.
What are my options with regard to maintaining the same level of security? I'd rather not open too many ports in the firewall. Would HTTP over a SSH tunnel be the best option? Would it also be possible to just replicate the index files using plain old rsync (not using any SOLR specific features) or would this break something?
Would it also be possible to just replicate the index files using plain old rsync
Solr actually supports this kind of distribution with its snappuller mechanism, documented here: http://wiki.apache.org/solr/CollectionDistribution
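If you do want to roll it by hand, a plain rsync of the index directory can work, but only against a consistent snapshot, which is exactly what the snapshooter/snappuller scripts automate; a naive sketch with illustrative paths:
# pull the index from the master to this slave; the index must not be
# mid-commit while this runs, which is why Solr snapshots it first
rsync -av --delete master:/var/solr/data/index/ /var/solr/data/index/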
I would open a port and specify the IP address of the slave, and just use ordinary HTTP-based replication; that would be quite secure, I think, and easier to maintain probably. I know it's not exactly where you were angling, but it's what I'd recommend.
I'm answering my own question, as the solution I went for is different from what the two other answers suggested. I ended up using an SSH tunnel for HTTP traffic. That is, I used SSH to redirect all traffic to port 8080 on host A to port 8080 on host B through an SSH tunnel.
The solution appears to be working fine. I'm using a script which validates the tunnel every 5 minutes or so.
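For reference, a local forward of that shape can be set up with something like this (the user and host names are illustrative):
# forward local port 8080 to port 8080 on hostB, running in the background
ssh -f -N -L 8080:localhost:8080 user@hostB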
You could use HTTP basic authentication (see https://wiki.apache.org/solr/SolrReplication#Slave) but since the password will be passed in plain text, an SSH tunnel or secure VPN would also be required in order to deter more determined attackers.
I'll be going for a VPN solution to start with and consider an SSH tunnel before moving to production if we feel we are unable to place sufficient trust in our internal networks.
