ARM Template deployment - CosmosDB provision fails with - Request rate is large. More Request Units may be needed, so no changes were made - database

I tried deploying Cosmos DB, and when I first create it I get this error:
"Request rate is large. More Request Units may be needed, so no changes were made"
After redeploying it works, but the initial creation doesn't.
I increased the throughput to 50000 (autoscale) and to 10000 with fixed size.
Is there another option to increase the RUs?
"type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
"apiVersion": "2020-03-01",
"name": "
"dependsOn":
],
"properties": {
"resource": {
"id": "subscriptions",
"indexingPolicy": {
"indexingMode": "consistent",
"automatic": true,
"includedPaths": [
{
"path": "/*"
}
],
"excludedPaths": [
{
"path": "/\"_etag\"/?"
}
],
"spatialIndexes": [
{
"path": "/*",
"types": ["Point", "LineString", "Polygon", "MultiPolygon"]
}
]
},
"partitionKey": {
"paths": ["/partitionKey"],
"kind": "Hash"
},
"uniqueKeyPolicy": {
"uniqueKeys": []
},
"conflictResolutionPolicy": {
"mode": "LastWriterWins",
"conflictResolutionPath": "/_ts"
}
},
"options": {
"autoscaleSettings": {
"maxThroughput": 50000
}
}
}

The error message indicates that there was a large number of requests hitting the master partition for your account. This can be caused by other clients querying the database resource to list containers, or querying containers to get their properties. It can also happen when you are deploying a large number of Cosmos resources at the same time. This is mostly a transient issue and is hard to reproduce.
Your ARM template for this container looks correct. I'm guessing you've purposely removed the name and dependsOn parameters.
I would, however, update the api-version to 2020-04-01. There are issues with autoscale on 2020-03-01, and that api-version is going to be deprecated in the future.
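For reference, with the newer api-version only the apiVersion line changes; a sketch of the same container resource (name, dependsOn, and the resource body still omitted, as in your template):

{
    "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
    "apiVersion": "2020-04-01",
    "name": "",
    "dependsOn": [
    ],
    "properties": {
        "resource": {
            ...same resource definition as above...
        },
        "options": {
            "autoscaleSettings": {
                "maxThroughput": 50000
            }
        }
    }
}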

Related

Starting and stopping services in vespa

The benchmarking page (https://docs.vespa.ai/en/performance/vespa-benchmarking.html) says that we need to restart the services after increasing the per-search threads, using the commands vespa-stop-services and vespa-start-services.
Could you tell us whether we need to do this on all the content nodes or just the config nodes?
When deploying a change that requires a restart, the deploy command will list the actions you need to take. For example, when changing the global per-search thread setting from 2 to 5, as in the example below:
curl --header Content-Type:application/zip --data-binary @target/application.zip localhost:19071/application/v2/tenant/default/prepareandactivate | jq .
{
    "log": [
        {
            "time": 1645036778830,
            "level": "WARNING",
            "message": "Change(s) between active and new application that require restart:\nIn cluster 'mycluster' of type 'search':\n Restart services of type 'searchnode' because:\n 1) # Number of threads used per search\nproton.numthreadspersearch has changed from 2 to 5\n"
        }
    ],
    "tenant": "default",
    "url": "http://localhost:19071/application/v2/tenant/default/application/default/environment/prod/region/default/instance/default",
    "message": "Session 8 for tenant 'default' prepared and activated.",
    "configChangeActions": {
        "restart": [
            {
                "clusterName": "mycluster",
                "clusterType": "search",
                "serviceType": "searchnode",
                "messages": [
                    "# Number of threads used per search\nproton.numthreadspersearch has changed from 2 to 5"
                ],
                "services": [
                    {
                        "serviceName": "searchnode",
                        "serviceType": "searchnode",
                        "configId": "mycluster/search/cluster.mycluster/0",
                        "hostName": "vespa-container"
                    }
                ]
            }
        ],
        "refeed": [],
        "reindex": []
    }
}
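In this example the restart actions only list services of type 'searchnode' under "services", i.e. the content nodes; the config nodes are not listed and do not need a restart for this change. A minimal sketch of acting on that output, assuming you can SSH to the listed hosts (the host name here is the one from the example output):

# Restart only the hosts listed under "services" in the restart actions
ssh vespa-container 'vespa-stop-services && vespa-start-services'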

80 Postgres databases in Azure Data Factory to copy into

I'm using Azure Data Factory, with SQL Server as the source and Postgres as the target. The goal is to copy 30 tables from SQL Server, with transformation, to 30 tables in Postgres. The hard part is that I have 80 source and 80 target databases, all with exactly the same layout but different data. It's one database per customer, so 80 customers, each with their own database.
Linked services don't allow parameters for Postgres.
I have one dataset per source and target, using parameters for schema and table names.
I have one pipeline per table, with a SQL Server source and a Postgres target.
I can parameterize the SQL Server source in the linked service, but not Postgres.
The problem is: how can I copy 80 source databases to 80 target databases without adding 80 target linked services and 80 target datasets? Plus, I'd have to repeat all 30 pipelines per target database.
BTW, I'm only familiar with the UI, but anything else that does the job is acceptable.
Any help would be appreciated.
There is a simple way to implement this. Essentially, you need a single linked service that reads the connection string out of Key Vault. You can then parameterize source and target as Key Vault secret names and easily switch between data sources by just changing the secret name. This relies on all connection-related information being enclosed within a single connection string.
I will provide a simple overview for PostgreSQL, but the same logic applies to SQL Server as the source.
Implement a linked service for Azure Key Vault.
Add a linked service for Azure PostgreSQL that uses Key Vault to store the connection string in the format Server=your_server_name.postgres.database.azure.com;Database=your_database_name;Port=5432;UID=your_user_name;Password=your_password;SSL Mode=Require;Keepalive=600; (I advise using the server name as the secret name).
Pass this parameter, which is essentially the correct secret name, in the pipeline (you can also implement a loop that accepts an array of secret names and fans them out into separate pipeline runs; see the ForEach sketch after the pipeline definition below).
Linked Service Definition for KeyVault:
{
    "name": "your_keyvault_name",
    "properties": {
        "description": "KeyVault",
        "annotations": [],
        "type": "AzureKeyVault",
        "typeProperties": {
            "baseUrl": "https://your_keyvault_name.vault.azure.net/"
        }
    }
}
Linked Service Definition for Postgresql:
{ "name": "generic_postgres_service".
"properties": {
"type": "AzurePostgreSql",
"parameters": {
"pg_database": {
"type": "string",
"defaultValue": "your_database_name"
}
},
"annotations": [],
"typeProperties": {
"connectionString": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "KeyVaultName",
"type": "LinkedServiceReference"
},
"secretName": "#linkedService().secret_name_for_server"
}
},
"connectVia": {
"referenceName": "AutoResolveIntegrationRuntime",
"type": "IntegrationRuntimeReference"
}
}
}
Dataset Definition for Postgresql:
{
    "name": "your_postgresql_dataset",
    "properties": {
        "linkedServiceName": {
            "referenceName": "generic_postgres_service",
            "type": "LinkedServiceReference",
            "parameters": {
                "secret_name_for_server": {
                    "value": "@dataset().secret_name_for_server",
                    "type": "Expression"
                }
            }
        },
        "parameters": {
            "secret_name_for_server": {
                "type": "string"
            },
            "schema_name": {
                "type": "string"
            },
            "table_name": {
                "type": "string"
            }
        },
        "annotations": [],
        "type": "AzurePostgreSqlTable",
        "schema": [],
        "typeProperties": {
            "schema": {
                "value": "@dataset().schema_name",
                "type": "Expression"
            },
            "table": {
                "value": "@dataset().table_name",
                "type": "Expression"
            }
        }
    }
}
Pipeline Definition for Postgresql:
{
    "name": "your_postgres_pipeline",
    "properties": {
        "activities": [
            {
                "name": "Copy_Activity_1",
                "type": "Copy",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                ...
                ... i skipped definition
                ...
                "inputs": [
                    {
                        "referenceName": "your_postgresql_dataset",
                        "type": "DatasetReference",
                        "parameters": {
                            "secret_name_for_server": "secret_name"
                        }
                    }
                ]
            }
        ],
        "annotations": []
    }
}
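As mentioned in the steps above, the copy pipeline can be wrapped in a loop. A rough sketch of such a wrapper, assuming your_postgres_pipeline is reworked to expose a secret_name_for_server parameter (the wrapper pipeline name, parameter names, and default secret names below are placeholders):

{
    "name": "copy_all_customers_pipeline",
    "properties": {
        "description": "Hypothetical wrapper: runs the copy pipeline once per Key Vault secret name",
        "parameters": {
            "secret_names": {
                "type": "array",
                "defaultValue": ["customer1-server-secret", "customer2-server-secret"]
            }
        },
        "activities": [
            {
                "name": "ForEach_Customer",
                "type": "ForEach",
                "typeProperties": {
                    "items": {
                        "value": "@pipeline().parameters.secret_names",
                        "type": "Expression"
                    },
                    "isSequential": false,
                    "activities": [
                        {
                            "name": "Run_Copy_For_Customer",
                            "type": "ExecutePipeline",
                            "typeProperties": {
                                "pipeline": {
                                    "referenceName": "your_postgres_pipeline",
                                    "type": "PipelineReference"
                                },
                                "parameters": {
                                    "secret_name_for_server": {
                                        "value": "@item()",
                                        "type": "Expression"
                                    }
                                },
                                "waitOnCompletion": true
                            }
                        }
                    ]
                }
            }
        ],
        "annotations": []
    }
}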

Why is Google App Engine throwing access forbidden errors?

Could really use some help here. I have a GAE NodeJS app in the standard environment. Until a few days ago (09/23) it was running just fine: it would respond to requests as expected, etc.
Today, the app responds with 403s when I try to make any request to my appspot URL. I'm 100% certain this is not a code issue: if I deploy the same code to GAE in another project, it works fine. Furthermore, the only firewall rule is a wildcard that allows all traffic.
Edit: adding the only relevant-looking log entry I see from the project:
{
    "protoPayload": {
        "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
        "status": {},
        "authenticationInfo": {
            "principalEmail": "address@domain.com"
        },
        "requestMetadata": {
            "callerIp": "x.x.x.x",
            "requestAttributes": {
                "time": "2021-09-23T15:04:05.198927Z",
                "auth": {}
            },
            "destinationAttributes": {}
        },
        "serviceName": "appengine.googleapis.com",
        "methodName": "google.appengine.v1.Services.UpdateService",
        "authorizationInfo": [
            {
                "resource": "apps/my-google-cloud-project-id/services/default",
                "permission": "appengine.services.update",
                "granted": true,
                "resourceAttributes": {}
            }
        ],
        "resourceName": "apps/my-google-cloud-project-id/services/default",
        "serviceData": {
            "@type": "type.googleapis.com/google.appengine.v1.AuditData",
            "updateService": {
                "request": {
                    "name": "apps/my-google-cloud-project-id/services/default",
                    "service": {
                        "networkSettings": {
                            "ingressTrafficAllowed": "INGRESS_TRAFFIC_ALLOWED_INTERNAL_AND_LB"
                        }
                    },
                    "updateMask": "networkSettings"
                }
            }
        },
        "resourceLocation": {
            "currentLocations": [
                "us-east1"
            ]
        }
    },
    "insertId": "an-id",
    "resource": {
        "type": "gae_app",
        "labels": {
            "project_id": "my-google-cloud-project-id",
            "zone": "",
            "module_id": "default",
            "version_id": ""
        }
    },
    "timestamp": "2021-09-23T15:04:05.131761Z",
    "severity": "NOTICE",
    "logName": "projects/my-google-cloud-project-id/logs/cloudaudit.googleapis.com%2Factivity",
    "operation": {
        "id": "some-operation-uuid",
        "producer": "appengine.googleapis.com/admin",
        "first": true
    },
    "receiveTimestamp": "2021-09-23T15:04:05.495890906Z"
}
I don't recall making this change, and I'm not sure what the ingressTrafficAllowed value was before.
Somehow the ingress setting on the GAE service got changed. I believe the issue was fixed by going to GCP console > App Engine > Services, selecting the affected service(s), choosing "Edit ingress setting" at the top, and selecting the appropriate value.
I say "I believe" because I was still getting 403s on my appspot URL after doing this, and ultimately I ended up deleting and re-creating the project from scratch, which got everything working again. Clearly there was some misconfiguration somewhere in my project, but GCP does not make it easy to diagnose what the issue might be.
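For what it's worth, the same setting can also be changed from the CLI; assuming a reasonably recent gcloud SDK, something like this should set the default service (the one named in the audit log above) back to accepting all traffic:

gcloud app services update default --ingress=all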

How does Google Smart Home determine channelNumber for action.devices.commands.selectChannel?

Created Google Smart Home Action.
Implemented device with:
a. deviceType = action.devices.types.SETTOP
b. deviceTrait = action.devices.traits.Channel
Device is successfully discovered and added to Google Home App's Homegraph.
User sends command: "Ok Google, change to ESPN"
The fulfillment URL receives the following JSON:
{
    "requestId": "[RequestId GUID]",
    "inputs": [{
        "intent": "action.devices.EXECUTE",
        "payload": {
            "commands": [{
                "devices": [{
                    "id": "[SettopBox device Id]"
                }],
                "execution": [{
                    "command": "action.devices.commands.selectChannel",
                    "params": {
                        "channelCode": "espn",
                        "channelName": "ESPN",
                        "channelNumber": "206"
                    }
                }]
            }]
        }
    }]
}
Questions:
How does Google Smart Home determine the "channelNumber" value for "ESPN"? The user's command was "Ok Google, change to ESPN", which does not contain any information about the channel number.
If a provider was set automatically, is there a setting in Google Home or Google Assistant to change this provider?
The number of a channel for the Channel trait is provided in the SYNC response along with any relevant labels.
{
    "availableChannels": [
        {
            "key": "ktvu2",
            "names": [
                "Fox",
                "KTVU"
            ],
            "number": "2"
        },
        {
            "key": "abc1",
            "names": [
                "ABC",
                "ABC East"
            ],
            "number": "4-11"
        }
    ]
}
As shown in the snippet, the channel number comes from the service. It is up to the developer of the integration how these numbers are determined, whether from a cable provider or over-the-air listings. The field is optional, so a service without channel numbers can still work when the user says the channel name.
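For context, here is a rough sketch of where that attribute sits in a SYNC response (the ids, device name, and channel entries are placeholders):

{
    "requestId": "[RequestId GUID]",
    "payload": {
        "agentUserId": "[agent user id]",
        "devices": [
            {
                "id": "[SettopBox device Id]",
                "type": "action.devices.types.SETTOP",
                "traits": ["action.devices.traits.Channel"],
                "name": { "name": "Living room set-top box" },
                "willReportState": false,
                "attributes": {
                    "availableChannels": [
                        { "key": "espn", "names": ["ESPN"], "number": "206" }
                    ]
                }
            }
        ]
    }
}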

Alexa Smart Home "Failed to Retrieve State"

I am playing with a sample Alexa Smart Home skill - I am not talking to any real hardware or back-end, just trying to get the message flow working. I have set up a simple switch/plug/light that can just support turning on/off, and I have account linking working and the skill enabled. When I try looking at it via the Alexa app on my phone or the web (with debug enabled), it always says the device isn't responding, or "Failed to Retrieve State". I can definitely see the messages in CloudWatch, as follows.
Any idea why I keep getting such a response?
Request:
"directive": {
"endpoint": {
"cookie": {},
"endpointId": "endpoint-003",
"scope": {
"token": "<<<SUPRESSING>>",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "<<SHORTENED>>",
"messageId": "50397414-bb9d-412f-8a2c-15669978ab64",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Response:
{
    "context": {
        "properties": [
            {
                "name": "connectivity",
                "namespace": "Alexa.EndpointHealth",
                "timeOfSample": "2020-06-29T16:49:59.00Z",
                "uncertaintyInMilliseconds": 0,
                "value": "OK"
            },
            {
                "name": "powerState",
                "namespace": "Alexa.PowerController",
                "timeOfSample": "2020-06-29T16:49:59.00Z",
                "uncertaintyInMilliseconds": 0,
                "value": "ON"
            }
        ]
    },
    "event": {
        "endpoint": {
            "endpointId": "endpoint-003",
            "scope": {
                "token": "Alexa-access-token",
                "type": "BearerToken"
            }
        },
        "header": {
            "correlationToken": "<<SHORTENED>>",
            "messageId": "7a8b9a71-adda-41b8-acba-4d3855374845",
            "name": "Response",
            "namespace": "Alexa",
            "payloadVersion": "3"
        },
        "payload": {}
    }
}
Problem was: the "name" in my header response should have been "StateReport" (the reply to a "ReportState" directive). "Response" is only used for things that set/change values.
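So for the ReportState request above, the response header would look roughly like this (correlationToken and messageId are placeholders; everything else matches the response shown above):

"header": {
    "correlationToken": "<<SHORTENED>>",
    "messageId": "7a8b9a71-adda-41b8-acba-4d3855374845",
    "name": "StateReport",
    "namespace": "Alexa",
    "payloadVersion": "3"
}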
My general advice is to always verify that THREE things are good:
Initial "Discovery"
"Response" messages
General "ReportState" queries.
By this - I mean that:
Anything you advertised in "Discovery" had better be reported in your "ReportState" messages. If you advertise a "PowerController" and your state reports don't contain a status for it, you'll either not see the status, it'll keep retrying forever (continuing to look for it), or you might get some sort of error.
If you CHANGED your discovery response, make sure that you really removed and re-discovered the device, and that the states (above) for the new additions/removals are okay.
Always make sure that "EndpointHealth" is being reported.
