Can we deploy IdentityServer4 as an AWS serverless Lambda function? We have built a .NET Core web application and are planning to deploy it as an AWS Lambda function.
I was able to get the development version working with the in-memory datastore fairly easily using the Amazon.Lambda.AspNetCoreServer NuGet package, which lets you run ASP.NET Core applications in Lambda. One gotcha I ran into: by default, the ASP.NET Core blueprint that Visual Studio uses creates an API Gateway stage called "Prod", and the uppercase "P" caused problems. I created a new stage in all lowercase and then it worked for me.
I didn't tackle the issue of using something besides an in-memory datastore, which would be crucial. I would like to look into using DynamoDB as the datastore so I wouldn't have to run a SQL Server instance.
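For reference, the Amazon.Lambda.AspNetCoreServer wiring boils down to a small entry point class. Below is a minimal sketch, assuming the usual Startup class that registers IdentityServer4 with the in-memory stores used for development; the class and namespace names are illustrative, not taken from the original post.

```csharp
using Microsoft.AspNetCore.Hosting;

namespace IdentityLambda
{
    // Inherits the API Gateway/Lambda bridge from Amazon.Lambda.AspNetCoreServer.
    // Incoming API Gateway proxy requests are translated into ASP.NET Core requests,
    // so IdentityServer4 runs inside the normal middleware pipeline.
    public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            // Startup is assumed to call services.AddIdentityServer()
            // with the in-memory clients/resources used for development.
            builder.UseStartup<Startup>();
        }
    }
}
```

The Lambda function handler then points at this class, and API Gateway forwards all routes to it.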
I have a Spring Boot application connected to an Azure SQL database using SQL authentication (user/pass). It works fine in local development and when the .war is deployed to an external Tomcat, because the application is to be used in a Linux environment.
I'm trying to use managed identity for authentication and came upon this link.
I tried to implement it in my Spring Boot application, but I'm getting this error:
"Windows logins are not supported in this version of sql server."
Also, can this be used if I am not using the Azure cloud? As I mentioned earlier, the application is being deployed on a Linux server running Tomcat.
As mentioned in the comment, MSI (managed identity) only works in Azure services that support MSI (and MSI must be enabled first). When using MSI to authenticate, the application essentially makes an API call to the Azure Instance Metadata Service endpoint to get an access token, then uses that token to authenticate, so it is only available inside MSI-supported services. In conclusion, you cannot use this feature if you are not running in the Azure cloud.
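To illustrate what that metadata call looks like: the question's app is Spring Boot, where the Azure libraries make this call for you, so the sketch below is only an illustration of the raw request. The endpoint, header, and resource value are the documented IMDS values for Azure SQL; everything else is assumed.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ImdsTokenExample
{
    static async Task Main()
    {
        // The Instance Metadata Service is only reachable from inside an Azure
        // compute resource, which is why MSI cannot work on an external server.
        var url = "http://169.254.169.254/metadata/identity/oauth2/token" +
                  "?api-version=2018-02-01" +
                  "&resource=https%3A%2F%2Fdatabase.windows.net%2F";

        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("Metadata", "true"); // required by IMDS

        var response = await client.SendAsync(request);
        // The JSON response contains the access_token used to authenticate to Azure SQL.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

Run from a Linux server outside Azure, this call simply cannot reach the metadata endpoint, which is the practical reason the feature is unavailable there.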
I have a Flask app connected to a local Postgres database with SQLAlchemy for dev purposes. I also have a live version of the app running on Google App Engine, which is connected to a Postgres database on Cloud SQL.
I am trying to use Flask-Migrate (which builds upon Alembic) to manage database migrations. I can run the migrations fine locally, but I am unsure how to manage them for the deployed version.
This answer has a couple of useful suggestions. One is to get an IP address and connect to the database directly, but I don't know where I would use that connection URI for migrations. The other is to run the migration code as an endpoint in the app itself.
Any pointers would be greatly appreciated.
I am building a backend for an application with Google App Engine and Cloud SQL.
I have:
A web server acting as a proxy in front of my API server, which handles sessions (using Cloud SQL and memcache) and calls the API
An API server, which has access to the resources in the Cloud SQL instance
An OAuth server, which also needs Cloud SQL and memcache for tokens, etc.
So my question: do I need three Cloud SQL projects, each with its own replica? Or is it OK to have one Cloud SQL project and have all three App Engine projects access this Cloud SQL instance through the Cloud SQL Proxy?
All projects will be located in the us-central region.
Would love to hear some thoughts.
Thanks!
I’m adding this information as a formal answer for the community. All credit goes to Dan Cornilescu.
You do not need to create three different projects. You can have three Google App Engine services and a single Cloud SQL instance running in the same project. That seems to be the best option for your situation. Using multiple services within a single project has its advantages, one of them being increased performance.
Note that you could also have multiple Cloud SQL instances running in the same project. You can follow this document that talks more about creating a Cloud SQL instance:
Creating Instances
In case you need more information about Google App Engine services, this is a good resource:
Microservices Architecture on Google App Engine
We're using Google App Engine and Cloud SQL for a Django web app. We want to run migrations during the build; however, GAE uses Container Registry to build the app, and Container Registry is not authenticated to access Cloud SQL. So, as expected, the migrations fail due to a rejected connection.
How does someone authorize Container Registry to access Cloud SQL?
When you say:
GAE uses Container Registry to build the app, and Container Registry is not authenticated to access Cloud SQL.
I assume that you mean:
GAE uses Container Builder to build the app, and the Container Builder Service Account is not authenticated to access Cloud SQL.
Assuming that's what you need, this document explains how to use IAM to grant additional permissions to the Service Account: https://cloud.google.com/container-builder/docs/how-to/service-account-permissions
If you are in fact asking a different question, please clarify, including an example that demonstrates the problem you are having.
How do I deploy an ASP.NET Core web application with a SQL database to Azure using Visual Studio Team Services (VSTS)? The application uses the Entity Framework code-first approach.
This is a challenging question that falls under the topic of continuous integration. It takes a bit of time to set up.
In Visual Studio, set up TFS/Team Explorer to point to your VSTS account and project, and configure the code-first migrations to apply their changes on the first launch after deployment (the application start event); see the sketch after these steps.
In your VSTS project, set up a build definition that launches a build after, let's say, a commit is pushed to the server.
In your VSTS project, set up your Azure Web App account profile from the administration panel of the account (you have the XML file with the references to the Web App account and password).
In your VSTS project, set up a deploy definition that also substitutes the environment connection string on the fly (I've done it this way) and that points to the Azure Web App account profile you created on VSTS.
Make a commit from Visual Studio and, if everything is set up right, you'll find your code and database changes on Azure.
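As a minimal sketch of the "apply the code-first changes at first launch" step from the first point above, assuming EF Core and a DbContext named AppDbContext (the context and Startup names are illustrative, not from the original answer):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public static class Program
{
    public static void Main(string[] args)
    {
        var host = BuildWebHost(args);

        // Apply any pending code-first migrations before the app starts serving
        // requests, so the Azure SQL database is updated on first launch.
        using (var scope = host.Services.CreateScope())
        {
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            db.Database.Migrate();
        }

        host.Run();
    }

    private static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
               .UseStartup<Startup>()
               .Build();
}
```

The connection string that the release definition substitutes at deploy time is the one this context picks up, so the migration runs against the Azure SQL database rather than the local one.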
Describing the whole process would require a substantial article and the corresponding time to write it. But you can find help by googling around (most posts are on Stack Overflow) and by posting questions step by step as you work through the process.