gcloud: check if a topic exists and ability to reuse the topic - google-cloud-pubsub

I'm using gcloud-node.
The createTopic API returns error 409 if the topic already exists. Is there a flag that can implicitly create a topic when publishing a message, or is there an API to check whether a topic already exists?
It's easy to use the getTopics API, iterate through the response topic array, and determine whether a topic exists. I just want to make sure I don't write something that already exists.

Is there a flag that can implicitly create a topic when publishing a message, or is there an API to check whether a topic already exists?
I believe the problem you'll run into is that if a message is published to a topic that doesn't exist, it is immediately dropped. So, it won't hang around and wait for a subscription to be created; it'll just disappear.
However, gcloud-node does have methods that will create a topic if necessary:
var topic = pubsub.topic('topic-that-maybe-exists');
topic.get({ autoCreate: true }, function(err, topic) {
  // topic.publish(...
});
In fact, almost all gcloud-node objects have a get method that works the same way as above, e.g. a Pub/Sub subscription, a Storage bucket, a BigQuery dataset, etc.
Here's a link to the topic.get() method in the docs: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.37.0/pubsub/topic?method=get
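For example, the same pattern works for a Storage bucket. This is just a rough sketch; the project ID and bucket name below are made up, and the call shape follows the topic.get() example above:
var gcloud = require('gcloud')({ projectId: 'my-project' }); // hypothetical project ID
var bucket = gcloud.storage().bucket('bucket-that-maybe-exists');

// Same autoCreate pattern as topic.get(): create the bucket if it is missing.
bucket.get({ autoCreate: true }, function(err, bucket) {
  // bucket is ready to use here
});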

I ran into this recently, and the accepted answer runs you into HTTP 429 errors. topic.get() is an administrative function, which has a significantly lower rate limit than normal operations. You should only call it when necessary, e.g. on error code 404 during publish (topic doesn't exist), something like so:
topic.publish(payload, (err) => {
  if (err && err.code === 404) {
    topic.get({ autoCreate: true }, (err, topic) => {
      topic.publish(payload);
    });
  }
});

Personally, I use this one:
const topic = pubsub.topic('topic-that-maybe-exists');
const [exists] = await topic.exists();
if (!exists) {
  await topic.create();
}
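A small, hedged refinement to the above: if two processes run this check at the same time, create() can still fail because the topic was created in between, so you may want to tolerate that case. The error-message check below is just one rough way to detect it:
const topic = pubsub.topic('topic-that-maybe-exists');
const [exists] = await topic.exists();
if (!exists) {
  try {
    await topic.create();
  } catch (err) {
    // Another process may have created the topic between exists() and create();
    // in that case createTopic reports "already exists" (HTTP 409), which is safe to ignore.
    if (!/already exists/i.test(err.message)) {
      throw err;
    }
  }
}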

Related

Correct place to audit query in Hot Chocolate graphql

I am wondering whether I should audit user queries in HttpRequestInterceptor or DiagnosticEventListener for Hot Chocolate v11. The problem with the latter is that if the audit fails to write to disk/db, the user will "get away" with the query.
Ideally, if the audit fails, no operation should proceed. Therefore, in theory, I should use HttpRequestInterceptor.
But how do I get IRequestContext from IRequestExecutor or IQueryRequestBuilder? I tried googling, but the documentation is limited.
Neither :)
The HttpRequestInterceptor is meant for enriching the GraphQL request with context data.
The DiagnosticEventListener, on the other hand, is meant for logging or other instrumentations.
If you want to write an audit log, you should instead go for a request middleware. A request middleware can be added like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseRequest(next => async context =>
    {
    })
    .UseDefaultPipeline();
The tricky part here is to inspect the request at the right time. Instead of appending to the default pipeline, you can define your own pipeline like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseInstrumentations()
    .UseExceptions()
    .UseTimeout()
    .UseDocumentCache()
    .UseDocumentParser()
    .UseDocumentValidation()
    .UseRequest(next => async context =>
    {
        // write your audit log here and invoke next if the user is allowed to execute
        if (isNotAllowed)
        {
            // if the user is not allowed to proceed, create an error result.
            context.Result = QueryResultBuilder.CreateError(
                ErrorBuilder.New()
                    .SetMessage("Something is broken")
                    .SetCode("Some Error Code")
                    .Build());
        }
        else
        {
            await next(context);
        }
    })
    .UseOperationCache()
    .UseOperationResolver()
    .UseOperationVariableCoercion()
    .UseOperationExecution();
The pipeline is basically the default pipeline but adds your middleware right after the document validation. At this point, your GraphQL request is parsed and validated. This means that we know it is a valid GraphQL request that can be processed at this point. This also means that we can use the context.Document property that contains the parsed GraphQL request.
In order to serialize the document to a formatted string use context.Document.ToString(indented: true).
The good thing is that in the middleware, we are in an async context, meaning you can easily access a database and so on. In contrast to that, the DiagnosticEvents are sync and not meant to have a heavy workload.
The middleware can also be wrapped into a class instead of a delegate.
If you need more help, join us on Slack.
Click on "community support" to join the Slack channel:
https://github.com/ChilliCream/hotchocolate/issues/new/choose

Stripe Create Payment Intents Promise Never Resolves

I signed up for a Stripe account and followed some simple steps to get up and running with Node. I just installed the package and tested creating a Payment Intent with my test key:
const Stripe = require('stripe');
const handleStripe = async () => {
  const stripe = Stripe(testKeyString);
  console.log("we make it here");
  try {
    const paymentIntent = await stripe.paymentIntents.create({
      amount: 1000,
      currency: 'usd',
      payment_method_types: ['card'],
      receipt_email: 'jenny.rosen@example.com',
    });
    // we never make it here
    console.log(paymentIntent);
  } catch (err) {
    // we never make it here either
    console.log(err);
  }
};
The console logs “we make it here”, but nothing else. The promise is never resolved.
I suspect that this might be a bug with the stripe npm package. Anybody have any thoughts on why the promise never resolves?
EDIT: sorry, I wasted everyone’s time here. I was following the docs QuickStart where it said “install a client library” and I assumed it was for the front end. So a very silly mistake on my part thinking that it was a good idea to make a payment intent from the front end with a secret key. Just getting going with the Stripe API and I’m off to a bad start. Thanks for your comments and answer
Thanks
What happens if you run it without the try/catch? Also what do you get if you try https://status.stripe.com/reachability from that server - are you sure you can reach Stripe's servers?
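For reference, here is a minimal sketch of the server-side setup the edit above alludes to, assuming an Express app and a STRIPE_SECRET_KEY environment variable (the route path and variable name are just placeholders):
const express = require('express');
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

const app = express();
app.use(express.json());

// The secret key stays on the server; the browser only ever sees the client secret.
app.post('/create-payment-intent', async (req, res) => {
  try {
    const paymentIntent = await stripe.paymentIntents.create({
      amount: 1000,
      currency: 'usd',
      payment_method_types: ['card'],
    });
    res.json({ clientSecret: paymentIntent.client_secret });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(4242);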

Unreliable Google Firebase transactions

In my (greatly simplified) model I have users, accounts and account_types. Each user can have multiple accounts of each account_type. When an account of type TT is created, I update the "users" field of that account_type object so that it keeps track of the users who have accounts of that type and how many such accounts they have.
users: {
  some fields
},
accounts: {
  userID: UU,
  type: TT
},
account_type: {
  users: { UU: 31 }
}
I use the onCreate and onDelete cloud triggers for accounts to update the account_type object. Since multiple accounts can be created simultaneously I have to use transactions:
exports.onCreateAccount = functions.firestore
  .document('accounts/{accountID}')
  .onCreate((account, context) => {
    const acc_user = account.data().userID;
    const acc_type = account.data().type;

    return admin.firestore().runTransaction(transaction => {
      // This code may get re-run multiple times if there are conflicts.
      const accountTypeRef = admin.firestore().doc("account_types/" + acc_type);
      return transaction.get(accountTypeRef).then(accTypeDoc => {
        var users = accTypeDoc.data().users;
        if (users === undefined) {
          users = {};
        }
        if (users[acc_user] === undefined) {
          users[acc_user] = 1;
        } else {
          users[acc_user]++;
        }
        transaction.update(accountTypeRef, { users: users });
        return;
      });
    })
    .catch(error => {
      console.log("AccountType create transaction failed. Error: " + error);
    });
  });
In my tests I'm first populating the database with some data so I'm also adding a user and 30 accounts of the same type. With the local emulator this works just fine and at the end of the addition I see that the account_type object contains the user with the counter at 30. But when deployed to Firebase and running the same functions the counter gets to less than 30. My suspicion is that since Firebase is much slower and transactions take longer, more of them are conflicted and fail and eventually don't execute at all. The transaction failure documentation (https://firebase.google.com/docs/firestore/manage-data/transactions) says:
"The transaction read a document that was modified outside of the transaction. In this case, the transaction automatically runs again. The transaction is retried a finite number of times."
So my questions:
What does "finite" mean?
Any way to control this number?
How can I make sure my transactions are executed at some point and don't get dropped like that so my data is consistent?
Any other idea as to why I'm not getting the correct results when deployed to the cloud?
What does "finite" mean?
It's the opposite of "unlimited". It will retry no more than a set number of times.
Any way to control this number?
Other than modifying the source code of the SDK, no. The SDK itself doesn't advertise a specific number, as it might change.
How can I make sure my transactions are executed at some point and don't get dropped like that so my data is consistent?
Detect the error and retry in your app. If you aren't seeing the transaction fail with an error, then nothing went wrong.
Any other idea as to why I'm not getting the correct results when deployed to the cloud?
Since we can't see what exactly you're doing to trigger the function, and have no specific expected results to compare to, it's not really possible to say.
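As a rough illustration of the "detect the error and retry in your app" advice above, here is a sketch that wraps the same counter update from the question in an app-level retry loop; the attempt count and backoff delays are arbitrary:
async function updateAccountTypeWithRetry(acc_type, acc_user, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    try {
      await admin.firestore().runTransaction(async (transaction) => {
        const accountTypeRef = admin.firestore().doc('account_types/' + acc_type);
        const accTypeDoc = await transaction.get(accountTypeRef);
        const users = accTypeDoc.data().users || {};
        users[acc_user] = (users[acc_user] || 0) + 1;
        transaction.update(accountTypeRef, { users: users });
      });
      return; // success
    } catch (error) {
      console.log('AccountType update attempt ' + (i + 1) + ' failed: ' + error);
      // Back off briefly before retrying the whole transaction.
      await new Promise((resolve) => setTimeout(resolve, 100 * Math.pow(2, i)));
    }
  }
  throw new Error('AccountType update failed after ' + attempts + ' attempts');
}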

Discord.js - Getting information after Prefix and command

I'm now working on a new command, a poll command.
For that, I need a way to get the arguments after the prefix and the command itself.
Example: +Poll Do you like puppies?
It should ignore the "+Poll" and get only the question itself, and then create a poll from it.
To get the arguments, I'm using:
var Args = message.content.split(/\s+/g)
You probably want to try creating the poll with a command, store the question in your database, and then use a separate command to display current polls that are open. Then the users would select the poll via command and the bot would await the response to the question.
I won't go into detail about storing the question in a database, because that's a totally different question. If you need help setting up a local database and storing the polls, link to another question and I'll be happy to give more examples.
To go with your question, I would suggest using substr to grab everything after the command, so you can later use it in the code. Something like this will store everything after !poll in the variable poll:
if (message.content.startsWith("!poll ")) {
  var poll = message.content.substr("!poll ".length);
  // Do something with poll variable //
  message.channel.send('Your poll question is: ' + poll);
}
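If you prefer to stay with the split() approach from the question, a rough equivalent is to drop the first token (the prefix plus command) and rejoin the rest; the variable names here are just illustrative:
var Args = message.content.split(/\s+/g);
// Args[0] is "+Poll"; everything after it is the question itself.
var question = Args.slice(1).join(' ');
message.channel.send('Your poll question is: ' + question);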
For the user answering the poll, you can try using awaitMessages to ask the question and collect a set number of responses. You would want to wrap this in a command that queries your database for the available polls first, and use that identifier to actually get the right question and possible responses. The example below just echoes the response that is collected, but you would want to store the response in the database instead of sending it in a message.
if (message.content === '!poll') {
  message.channel.send(`please say yes or no`).then(() => {
    message.channel.awaitMessages(response => response.content === `yes` || response.content === 'no', {
      max: 1, // number of responses to collect
      time: 10000, // time that bot waits for answer in ms
      errors: ['time'],
    })
    .then((collected) => {
      var pollRes = collected.first().content; // this is the first response collected
      message.channel.send('You said ' + pollRes);
      // Do something else here (save response in database)
    })
    .catch(() => { // if no message is collected
      message.channel.send("I didn't catch that, try again.");
    });
  });
}

Azure Search RetryPolicy

We are using Azure Search and need to implement a retry strategy, as well as store the IDs of failed documents, as described.
Is there any documentation/samples on how to implement a RetryPolicy strategy in Azure Search?
Thanks
This is what I used:
private async Task<DocumentIndexResult> IndexWithExponentialBackoffAsync(IndexBatch<IndexModel> indexBatch)
{
    return await Policy
        .Handle<IndexBatchException>()
        .WaitAndRetryAsync(5, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)), (ex, span) =>
        {
            indexBatch = ((IndexBatchException)ex).FindFailedActionsToRetry(indexBatch, x => x.Id);
        })
        .ExecuteAsync(async () => await _searchClient.IndexAsync(indexBatch));
}
It uses the Polly library to handle exponential backoff. In this case I use a model IndexModel that has an id field named Id.
If you'd like to log or store the IDs of the failed attempts, you can do that in the WaitAndRetryAsync callback, like:
((IndexBatchException)ex).IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key).<Do something here>
There is currently no sample showing how to properly retry on IndexBatchException. However, there is a method you can use to make it easier to implement: IndexBatchException.FindFailedActionsToRetry. This method extracts the IDs of failed documents from the IndexBatchException, correlates them with the actions in a given batch, and returns a new batch containing only the failed actions that need to be retried.
Regarding the rest of the retry logic, you might find this code in the ClientRuntime library useful. You will need to tweak the parameters based on the characteristics of your load. The important thing to remember is that you should use exponential backoff before retrying to help your service recover, since otherwise your requests may be throttled.
