MuleSoft Environments

Is there a prescribed best practice for running multiple environments in a single VPC? I'm asking for non-prod: I want to set up a Dev and a QA environment. Would I just need to set up separate subnets for these individual environments?

It depends on your requirements and the resources available. Ideally you would have a separate VPC for the QA and Dev environments; if your security requirements are not that strict, you could use the same VPC for both.
I'm not quite sure I understand the separate-subnets part. Each VPC has its own subnet; environments don't have subnets.

I would go with the same VPC for all non-PROD environments, and another VPC for the PROD (and PROD-like) environments. That way you can easily apply stricter network rules around the PROD VPC. Aled's point is very valid if you are using sensitive data for SIT testing.
Also, each VPC has its own VPN and its own CIDR block.
You can easily create different environments and set up access restrictions to allow or restrict your testers/developers.
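Since each VPC gets its own CIDR block, the practical planning step for a shared non-prod VPC is making sure its address space is large enough for all environments and does not overlap the PROD VPC. A minimal sketch of that bookkeeping with Python's `ipaddress` module (the addresses and block sizes here are illustrative, not MuleSoft defaults):

```python
import ipaddress

# Hypothetical plan: one VPC for all non-prod environments.
nonprod_vpc = ipaddress.ip_network("10.0.0.0/22")

# Carve the block into /24 ranges you can mentally assign per environment
# (the VPC itself is one network; this is just address-space bookkeeping).
subnets = list(nonprod_vpc.subnets(new_prefix=24))
plan = {"dev": subnets[0], "qa": subnets[1]}

for env, net in plan.items():
    # num_addresses includes network and broadcast addresses, hence -2
    print(env, net, "usable hosts:", net.num_addresses - 2)

# Verify the plan does not overlap a separate production VPC CIDR.
prod_vpc = ipaddress.ip_network("10.1.0.0/22")
assert not nonprod_vpc.overlaps(prod_vpc)
```

The same overlap check is worth running against any on-prem networks you plan to connect over VPN.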

Related

What is the best architecture to secure traffic between compute engine and app engine?

We currently have HTTP communication between an App Engine app and a Compute Engine instance. What would be the best way to secure that communication without adding a security layer on top of HTTP?
Are there native Google functionalities that are well suited - any VPC configuration, VPN technologies, peering? What would you recommend we start to explore?
Thank you!
First, the approach can depend on where your App Engine and Compute Engine instances are located: are they part of the same or different projects, do they reside in the same or different zones or regions, and do they communicate via internal or external IPs?
Google uses authentication, integrity checks and AES-128 encryption to protect data at one or more network layers when that data goes outside physical boundaries not controlled by Google.
Data in transit inside a physical boundary controlled by Google is not necessarily encrypted because rigorous physical security measures are in place by default, but authentication and integrity are always applied.
More details can be found in the document Encryption in Transit in Google Cloud.
Next, it'd be good to figure out what security risks you expect for your data in transit. Speaking about risks, potential vulnerabilities and threats, as well as their probability and feasibility, should be assessed. For example, for data transferred between two services communicating via internal IPs in the same subnetwork of the same zone, threats and vulnerabilities are quite limited compared to a public network.
If the security requirements are so high that the underlying network can't be trusted, Google VPC and VPN are not enough, and you have to encrypt at Layer 7. That is why using HTTPS seems like a logical solution in this case.
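In practice, encrypting at the application layer means the client refuses any connection it cannot authenticate itself, rather than trusting the transport underneath. A minimal sketch using Python's standard `ssl` module (the hostname in the comment is hypothetical):

```python
import ssl

# When the network between App Engine and Compute Engine can't be trusted,
# the client enforces TLS itself. create_default_context() verifies the
# server certificate against the system CA store and checks the hostname.
ctx = ssl.create_default_context()

# These are the defaults, spelled out to show what the client is enforcing:
assert ctx.verify_mode == ssl.CERT_REQUIRED   # reject unverifiable certs
assert ctx.check_hostname is True             # cert must match the host

# Refuse legacy protocol versions (TLS 1.2 is the usual floor today).
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# A real request would then wrap the socket, e.g. via http.client:
#   conn = http.client.HTTPSConnection("internal-service.example", context=ctx)
```

The server side would present a certificate the client's CA store can validate, whether from a public CA or an internal one you distribute to both services.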

Is it possible to run Postgres in Google App Engine Flexible?

Is it possible to run postgres (essentially, a non-HTTP service) in a custom Google App Engine Flexible container? Or will I be forced to use Google's Cloud SQL solution?
TL;DR: You could do that, but don’t. It’s better to externalize the persistent data storage.
Yes, it is possible to run a PostgreSQL database as a microservice (named simply a 'service' in Google Cloud Platform) in a custom Google App Engine Flexible container. However, that raises another important question, namely why you would want to run an SQL database inside a container. This is a risky solution unless you are perfectly sure what you are doing and how to manage it.
Typical container orchestration is based on stateless services, which means they are not intended to store persistent data. Such containers sometimes do have some form of storage, like NoSQL databases for caches or user session data. That data is not persistent: it can be lost during restarts or destruction of instances in an agile containerized application environment. PostgreSQL databases, by contrast, are stateful services and do not fit this model. Putting such a database into a container, you can run into problems like data corruption or concurrent access to a shared data directory. Also, in Google App Engine Flexible it's not possible to add a shared persistent disk; volumes are attached to instances and destroyed together with them. A much safer solution is to keep the SQL database in external, durable storage, such as the Cloud SQL you mentioned. There are numerous blog posts and articles that elaborate on this stateless/stateful service issue, like this one.
It should be mentioned that if you use the container in a local environment or for test/development (and you are not looking for a durable database state), putting PostgreSQL inside a container should be perfectly fine. Also, if you design a special way of splitting your data across instances, this can work well, as the authors did with their MySQL servers in this article. So, once again, the idea of putting a PostgreSQL database in a container should be carefully thought out, especially since there are so many options for safely externalizing such a service.
And just as a side note, you are not forced to use Cloud SQL. The database can be hosted on Compute Engine, on another cloud provider, on premises, or be managed by a third-party vendor. If you host it on Compute Engine, the application can communicate with the database inside the same project using the internal IP of the Compute Engine instance. Using Cloud Launcher, you can quickly deploy PostgreSQL and other popular databases to Compute Engine. Check these Google docs for more information about using third-party databases.
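Wherever the external database ends up (Cloud SQL, a Compute Engine VM, on-prem), the app container then only needs connection details, which you typically inject via environment variables so the same image works against every environment. A small sketch, with all names and addresses hypothetical; an actual connection would go through a driver such as psycopg2:

```python
import os

def build_pg_dsn(host: str, dbname: str, user: str, password: str,
                 port: int = 5432, sslmode: str = "require") -> str:
    """Build a libpq-style DSN for an external PostgreSQL instance,
    instead of baking the database into the App Engine container."""
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} password={password} sslmode={sslmode}")

# Pull the endpoint from the environment (defaults here are placeholders),
# e.g. the internal IP of a Compute Engine VM in the same project.
dsn = build_pg_dsn(
    host=os.environ.get("DB_HOST", "10.0.0.5"),
    dbname=os.environ.get("DB_NAME", "appdb"),
    user=os.environ.get("DB_USER", "app"),
    password=os.environ.get("DB_PASS", "secret"),
)
# A driver such as psycopg2 would then do: psycopg2.connect(dsn)
```

Keeping the credentials out of the image also means destroying and recreating instances never touches the data, which is the whole point of externalizing state.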

Security risk of whitelisting google apps script ip addresses

Perhaps this isn't the best place to ask this, but I wanted to be educated.
I would like to have a Google sheet interact with data on an oracle server.
I have been reading the Google Apps Script guide for setting up JDBC connections, and note that I would need to whitelist a range of IP addresses in order for this to work.
I was told by our infosec department that our DB is only allowed to be accessed by on-site servers or through a VPN connection, which rules out using Google Apps Script JDBC.
My question is: what is the risk of whitelisting these IP address ranges, is spoofing a google app script IP address within the range plausible or overly cautious? What other factors am I not aware of that I should be considering?
You are correct in that Apps Script needs you to allow your database to accept connections from Apps Script in order to use JDBC with that database. There is no reasonable workaround that would not also violate the rules your infosec department has put in place.
As for whether those rules are overzealous, bear in mind that security folks need to be very cautious as a rule. This discussion on StackExchange may be helpful.
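One way to make the risk concrete for infosec is to quantify what the whitelist actually admits: allowing a range admits every address in it, not just the Apps Script servers. A sketch with Python's `ipaddress` module; the ranges below are purely illustrative stand-ins for whatever list Google publishes:

```python
import ipaddress

# Illustrative ranges only; substitute the real published JDBC ranges.
whitelisted_ranges = [
    ipaddress.ip_network("64.18.0.0/20"),
    ipaddress.ip_network("66.102.0.0/20"),
]

def is_whitelisted(addr: str) -> bool:
    """Would a connection from this IP pass the firewall rule?"""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in whitelisted_ranges)

print(is_whitelisted("66.102.0.1"))    # inside the second range
print(is_whitelisted("203.0.113.9"))   # documentation address, outside
# Total exposure: how many addresses the rule admits.
print(sum(net.num_addresses for net in whitelisted_ranges))
```

Note this measures only the size of the opening; it says nothing about spoofing feasibility, which for TCP connections (as JDBC uses) is generally impractical off-path.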

When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com?

I am asking this in a very general sense, both from the cloud provider's and the cloud consumer's perspective. Also, the question is not about any specific kind of application; in fact, the intention is to know which types of applications/domains can fit into which cloud layer (SaaS, PaaS, IaaS).
My understanding so far is:
IaaS: Raw Hardware (Processors, Networks, Storage).
PaaS: OS, System Software, Development Framework, Virtual Machines.
SaaS: Software Applications.
It would be great if Stack Overflowers could share their understanding and experiences of cloud computing concepts.
EDIT: Ok, I will put it in more specific way -
Amazon EC2: You don't have control over the hardware layer, but you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application and put it on EC2 hardware. Can you deploy applications built with Google App Engine or Azure on EC2?
Google App Engine: You don't have control over hardware and OS and you get a specific Dev Framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa, can applications that were built on GAE be taken out of GAE and ported to any Application Server like Websphere or Weblogic?
Azure: You don't have control over hardware and OS and you get a specific Dev Framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa, can applications that were built on Azure be taken out of Azure and ported to any Application Server like Biztalk?
Good question! As you point out, the different offerings fit into different categories:
EC2 is Infrastructure as a Service; you get VM instances, and do with them as you wish. Rackspace Cloud Servers are more or less the same.
Azure, App Engine, and Salesforce are all Platform as a Service; they offer different levels of integration, though: Azure pretty much lets you run arbitrary background services, while App Engine is oriented around short lived request handler tasks (though it also supports a task queue and scheduled tasks). I'm not terribly familiar with Salesforce's offering, but my understanding is that it's similar to App Engine in some respects, though more specialized for its particular niche.
Cloud offerings that fall under Software as a Service are everything from infrastructure pieces like Amazon's Simple Storage Service and SimpleDB through to complete applications like Fog Creek's hosted FogBugz and, of course, StackExchange.
A good general rule is that the higher level the offering, the less work you'll have to do, but the more specific it is. If you want a bug tracker, using FogBugz is obviously going to be the least work; building one on top of App Engine or Azure is more work, but provides for more versatility, while building one on top of raw VMs like EC2 is even more work (quite a lot more, in fact), but provides for even more versatility. My general advice is to pick the highest level platform that still meets your requirements, and build from there.
This is an excellent question. Full disclosure as I am partial to Azure but have experience with the others.
Where I think Azure stands out from the others is the quick transition from on prem to the cloud. For example -
SQL Azure - change connection string, upload DB, go!
Queues work a lot like MSMQ.
Blobs are pretty much blobs any way you shake them but they scale like crazy.
The table storage component is good because it provides incredible scalability for name/value pairs - but takes some getting used to.
Service Bus is my favorite of the services because it allows for a variety of communication paradigms. Two SB endpoints first try to connect to each other directly; if they cannot, they route through the cloud. This makes for very secure and scalable processing when firewalls tend to get in the way.
Access Control Service - typically paired with the Service Bus to make sure the right people access the right things - think SAML in the cloud.
I hope that helps!
My cloud experience is currently limited to Salesforce.com
For standard business operations and automation it provides a significant number of features that allow us to get apps up and running very quickly. We are particularly benefitting from the following:
Security (Administrators can control access to objects and fields)
Workflow & Approvals
Automatic UI generation
Built in reporting and dashboards
Entire system (including our custom changes) is accessible via web services
Ability to make the data in the system available through public sites (e.g. eCommerce)
Large library of third party apps to solve standard problems
The platform does NOT solve every problem.
I would not use the platform to model a nuclear power station or build the next twitter.
The major points of cloud computing are to save on costs by paying for usage and to enable immediate deployment of computing resources.
The costs are not purely x cents per instance per hour; they include maintenance, development, administration, etc. The huge benefit of the cloud, in my mind, is to liberate customers from having to manage anything that is not within the realm of their core business competency. If I am an insurance business, I want my developers to concentrate on the insurance problems that serve my claims, rates, etc. I would rather avoid dealing with email servers, file servers, document repositories, and administering OS patches, service packs, etc.
Thus, in my opinion, the biggest benefits are derived from the SaaS and PaaS cloud offerings. One should go to IaaS only when PaaS or SaaS have serious restrictions to specific needs (i.e. I need to install a set of proprietary COM components and Azure does not support them).
SaaS is good for commodity-type applications that are not the client's core line of business but are more of a utility. These are your typical messaging systems, portals, document repositories, email systems, CRMs, ERPs, accounting, etc. Why reinvent the wheel by writing your own when you can customize a well-supported third-party product?
PaaS is great for core line-of-business software that supports the company's main business offering. It abstracts clients away from OS management and lets them concentrate on business system development - something that no one else can do for the client.
One can also take advantage of the benefits of PaaS (let's say, Google App Engine) and extend it, at times and if necessary, by pulling out some virtual machines from IaaS providers (e.g. Amazon) to do some number crunching then just send back the output to Google App Engine.
This way, you get the best of both worlds -- you can rapidly develop scalable apps in GAE, then you can always augment it by running any program you want from Amazon virtual machines.
This keeps changing: Windows Azure now also supports VMs, so it is an IaaS provider too.
And how about free Amazon EC2 for a year to make a better comparison? Check this out:
http://www.buzzingup.com/2010/10/amazon-announces-free-cloud-services-for-new-developers/

Scaling on Amazon EC2

I have several newbie questions about EC2, thanks for your attention,
1) Why do EC2 instances come with specific memory/storage quotas? In a cloud environment, can't we just request the amount of memory/storage we require and have the Amazon infrastructure take care of the allocation? I understand a pre-determined allocation of memory/storage is required to set up a VM image, but is this really necessary? In Google App Engine, I don't see any limit on memory, and storage is charged in a pay-as-you-go manner.
2) Related to the first: if Amazon allowed instances with a dynamic memory/storage quota, would we still need to create multiple instances and take care of load balancing, or could we just create one powerful instance and leave the other scaling issues to Amazon?
3) Regarding the performance of an EC2 instance: do you have experience with how it compares to a physical machine with a similar configuration (memory/CPU)?
Fundamentally it's because Amazon's infrastructure is based on the Xen virtualization platform, and Xen does not support dynamic reallocation of resources between VMs.
VMware has announced support for that type of reallocation. It will be interesting to see how Amazon reacts.
Why do EC2 instances come with specific memory/storage quotas? In the cloud environment, can't we just request the amount of memory/storage we require, and have the Amazon infrastructure take care of the allocation?
Because EC2 emulates individual machines that you control, whereas you have no control over these "computers" on GAE. You cannot do things like write to the local filesystem on GAE.
Related to the first: if Amazon allowed instances created with a dynamic memory/storage quota, would we still need to create multiple instances and take care of load balancing, or could we just create one powerful instance and leave the scaling issues to Amazon?
You will usually need to do this yourself. EC2 provides on-demand virtual "computers".
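"Doing it yourself" on EC2 means owning the capacity math that GAE's autoscaler does for you: fixed-size instances behind a load balancer, with enough of them for peak traffic plus headroom. A sketch of that arithmetic; the throughput numbers are purely illustrative:

```python
import math

def instances_needed(peak_rps: float, rps_per_instance: float,
                     headroom: float = 0.3, minimum: int = 2) -> int:
    """How many fixed-size EC2 instances to run behind a load balancer.

    `headroom` keeps spare capacity for spikes; `minimum` keeps at least
    two instances so a single failure doesn't take the service down.
    """
    raw = peak_rps / rps_per_instance
    return max(minimum, math.ceil(raw * (1 + headroom)))

# e.g. 900 req/s at peak, each instance handles roughly 120 req/s:
print(instances_needed(900, 120))   # 900/120 = 7.5, +30% headroom -> 10
```

Amazon's Auto Scaling and Elastic Load Balancing can automate the adding and removing of instances for you, but you still define the instance size, the triggers, and the limits; the platform never resizes a running VM under your app.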
The performance of EC2 instance, do you have experience to tell how it compares to a physical machine with similar configuration (memory/CPU)
"One EC2 Compute Unit equals 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor."
