In my (greatly simplified) model I have users, accounts and account_types. Each user can have multiple accounts of each account_type. When an account of type TT is created, I update the "users" field of the account_type object so it keeps track of which users have accounts of that type and how many such accounts each of them has.
users: {
  some fields
},
accounts: {
  userID: UU,
  type: TT
},
account_types: {
  users: { UU: 31 }
}
I use the onCreate and onDelete cloud triggers for accounts to update the account_type object. Since multiple accounts can be created simultaneously, I have to use transactions:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.onCreateAccount = functions.firestore
  .document('accounts/{accountID}')
  .onCreate((account, context) => {
    const acc_user = account.data().userID;
    const acc_type = account.data().type;
    const accountTypeRef = admin.firestore().doc("account_types/" + acc_type);
    return admin.firestore().runTransaction(transaction => {
      // This code may get re-run multiple times if there are conflicts.
      return transaction.get(accountTypeRef).then(accTypeDoc => {
        let users = accTypeDoc.data().users;
        if (users === undefined) {
          users = {};
        }
        if (users[acc_user] === undefined) {
          users[acc_user] = 1;
        } else {
          users[acc_user]++;
        }
        transaction.update(accountTypeRef, {users: users});
      });
    })
    .catch(error => {
      console.log("AccountType create transaction failed. Error: " + error);
    });
  });
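The onDelete trigger mirrors this with a decrement; roughly (a sketch of the mirror image, not the exact code):

exports.onDeleteAccount = functions.firestore
  .document('accounts/{accountID}')
  .onDelete((account, context) => {
    const acc_user = account.data().userID;
    const acc_type = account.data().type;
    const accountTypeRef = admin.firestore().doc("account_types/" + acc_type);
    return admin.firestore().runTransaction(transaction => {
      return transaction.get(accountTypeRef).then(accTypeDoc => {
        let users = accTypeDoc.data().users || {};
        if (users[acc_user] !== undefined) {
          users[acc_user]--;
          // Drop the user key once their last account of this type is gone.
          if (users[acc_user] <= 0) {
            delete users[acc_user];
          }
        }
        transaction.update(accountTypeRef, {users: users});
      });
    })
    .catch(error => {
      console.log("AccountType delete transaction failed. Error: " + error);
    });
  });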
In my tests I first populate the database with some data, adding a user and 30 accounts of the same type. With the local emulator this works just fine, and at the end of the additions the account_type object contains the user with the counter at 30. But when the same functions run deployed to Firebase, the counter ends up below 30. My suspicion is that since Firebase is much slower and transactions take longer, more of them conflict, and some eventually fail and never execute at all. The transaction failure documentation (https://firebase.google.com/docs/firestore/manage-data/transactions) says:
"The transaction read a document that was modified outside of the transaction. In this case, the transaction automatically runs again. The transaction is retried a finite number of times."
So my questions:
What does "finite" mean?
Any way to control this number?
How can I make sure my transactions are executed at some point and don't get dropped like that so my data is consistent?
Any other idea as to why I'm not getting the correct results when deployed to the cloud?
What does "finite" mean?
It's the opposite of "unlimited". It will retry no more than a set number of times.
Any way to control this number?
Other than modifying the source code of the SDK, no. The SDK itself doesn't advertise a specific number, as it might change.
How can I make sure my transactions are executed at some point and don't get dropped like that so my data is consistent?
Detect the error and retry in your app. If you aren't seeing the transaction fail with an error, then nothing went wrong.
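For example, a minimal app-level retry wrapper might look like this (a sketch only; the helper name, attempt count, and backoff are illustrative, not SDK features):

// Sketch: re-run a Firestore transaction a few more times at the app level
// if the SDK's own internal retries are exhausted. Illustrative values only.
async function runTransactionWithRetry(db, updateFn, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await db.runTransaction(updateFn);
    } catch (error) {
      if (attempt === maxAttempts) throw error;
      // Simple linear backoff before the next attempt.
      await new Promise(resolve => setTimeout(resolve, 100 * attempt));
    }
  }
}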
Any other idea as to why I'm not getting the correct results when deployed to the cloud?
Since we can't see what exactly you're doing to trigger the function, and have no specific expected results to compare to, it's not really possible to say.
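As an aside, a sketch of an alternative approach (assuming the Firebase Admin SDK): a counter like the one in the question can be updated with FieldValue.increment, which moves the increment server-side and removes the read-modify-write transaction, and with it most of the contention:

exports.onCreateAccount = functions.firestore
  .document('accounts/{accountID}')
  .onCreate((account, context) => {
    const acc_user = account.data().userID;
    const acc_type = account.data().type;
    const accountTypeRef = admin.firestore().doc("account_types/" + acc_type);
    // set(..., {merge: true}) also creates the document and the users map
    // if they don't exist yet.
    return accountTypeRef.set({
      users: { [acc_user]: admin.firestore.FieldValue.increment(1) }
    }, { merge: true });
  });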
I am new to Salesforce Marketing Cloud and Journey Builder.
https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/creating-activities.html
We are building a custom Journey Builder activity which will use a data extension as the source; when the journey is invoked, it will fetch a row and send that data to our company's internal endpoint. The team got that part working. We are using postmonger.js.
I have a couple of questions:
Is there a way to retrieve the data from the data extension in bulk so that we can call our company's internal bulk endpoint? Calling the endpoint for each record in the data extension would not be efficient enough for our use case and won't work.
When the journey is invoked and an entry in the data extension is retrieved and sent to our internal endpoint, is there a mechanism to mark this entry as already sent, so that the next time the journey runs it won't process an entry that has already been sent?
Here is a snippet of our customActivity.js which populates one record. (I changed some variable names.) Is there a way to populate multiple records so that when "execute" is called, it passes a list of payloads as input to our internal endpoint?
function save() {
  try {
    var TemplateNameValue = $('#TemplateName').val();
    var TemplateIDValue = $('#TemplateID').val();
    let auth = "{{Contact.Attribute.Authorization.Value}}";
    payload['arguments'].execute.inArguments = [{
      "vendorTemplateId": TemplateIDValue,
      "field1": "{{Contact.Attribute.DD.field1}}",
      "eventType": TemplateNameValue,
      "field2": "{{Contact.Attribute.DD.field2}}",
      "field3": "{{Contact.Attribute.DD.field3}}",
      "field4": "{{Contact.Attribute.DD.field4}}",
      "field5": "{{Contact.Attribute.DD.field5}}",
      "field6": "{{Contact.Attribute.DD.field6}}",
      "field7": "{{Contact.Attribute.DD.field7}}",
      "messageMetadata": {}
    }];
    payload['arguments'].execute.headers = `{"Authorization":"${auth}"}`;
    payload['configurationArguments'].stop.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].validate.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].publish.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].save.headers = `{"Authorization":"default"}`;
    payload['metaData'].isConfigured = true;
    console.log(payload);
    connection.trigger('updateActivity', payload);
  } catch (err) {
    document.getElementById("error").style.display = "block";
    document.getElementById("error").innerHTML = err; // was innerHtml, which silently does nothing
  }
  console.log("Template Name: " + TemplateNameValue);
  console.log("Template ID: " + TemplateIDValue);
}
Any advice or ideas are highly appreciated!
Thank you.
Grace
Firstly, I implore you not to proceed with the design pattern of fetching data from Marketing Cloud for each subscriber that gets sent through the custom activity. For argument's sake, I'll list two big issues.
You have no way of limiting the configuration of data extension columns or column names in SFMC (Salesforce Marketing Cloud). If a malicious user, or simple human error, deleted a column or changed a column name, your service would stop receiving that value.
Secondly, Marketing Cloud has two sets of API limitations: yearly and minute-by-minute. Depending on your licensing, you could run into the yearly limit.
The problem you have with the per-minute limits (2,500 for REST and 2,000 for SOAP) is that each usage of the custom activity in Journey Builder would multiply the number of invocations per minute. Hitting this limit would cause issues for incremental data flows into SFMC.
I'd also suggest not retrieving any data from Marketing Cloud when a customer gets sent through a custom activity. Users should pick which corresponding rows/data should be sent to the custom activity in their segmentation.
The eventDefinitionKey can be picked up from postmonger after requestedTriggerEventDefinition in the eventDefinitionModel function. eventDefinitionKey can then be used to programmatically populate SFMC's POST call with data from the Journey Data model, thus allowing marketers to select what data is sent with the subscriber.
The following code shows how it would work in your customActivity.js:
connection.on(
  'requestedTriggerEventDefinition',
  function (eventDefinitionModel) {
    var eventKey = eventDefinitionModel['eventDefinitionKey'];
    save(eventKey);
  }
);

function save(eventKey) {
  // subscriberKey fetched directly from Contact model
  // columnName is populated from the Journey Data model
  var params = {
    subscriberKey: '{{Contact.key}}',
    columnName: '{{Event.' + eventKey + '.columnName}}',
  };
  payload['arguments'].execute.inArguments = [params];
}
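Note that 'requestedTriggerEventDefinition' only fires in response to a request; assuming standard postmonger usage, the activity first has to ask for it, typically during initialization:

// Ask Journey Builder for the trigger's event definition; the
// 'requestedTriggerEventDefinition' handler above fires in response.
connection.trigger('requestTriggerEventDefinition');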
I am building a chat app in React using Firebase Firestore as the backend database.
I fetch the 25 most recent messages in a useEffect hook:
useEffect(() => {
  const q = query(
    collection(db, 'messages'),
    orderBy('createdAt', 'desc'),
    limit(25)
  );
  return onSnapshot(q, (snapshot) => {
    setData(
      snapshot.docs.map((doc) => {
        console.log('document read');
        return { ...doc.data(), id: doc.id };
      })
    );
  });
}, []);
But this operation results in 25 document reads on page load and 50 additional reads on sending a message.
If more users are connected, 25 reads per user happen on a single message sent by any user.
Is there any way to reduce the reads?
Complete code: https://github.com/Puneet56/Converse
You don't get 25 reads in a running query with a limit of 25 results when a new message comes in. As the documentation says:
When you listen to the results of a query, you are charged for a read
each time a document in the result set is added or updated. You are
also charged for a read when a document is removed from the result set
because the document has changed. (In contrast, when a document is
deleted, you are not charged for a read.)
Also, if the listener is disconnected for more than 30 minutes (for
example, if the user goes offline), you will be charged for reads as
if you had issued a brand-new query.
As stated in the docs, you will be charged only for the document that is added and the one that drops out of the result set because a new one came in. So you probably get only 2 reads per new message, which I think is a reasonable amount. I can't see any way to reduce this in a chat app. Even if you increased the query limit, only your initial reads (on a fresh read with an old or empty cache) would increase; the reads while listening would stay the same.
I'm using gcloud-node.
The createTopic API returns error 409 if the topic already exists. Is there a flag that can implicitly create a topic when publishing a message, or is there an API to check whether a topic already exists?
It's easy to use the getTopics API, iterate through the response topic array, and determine whether a topic exists. I just wanted to make sure I'm not writing something that already exists.
Is there a flag that can implicitly create a topic when publishing a message, or is there an API to check whether a topic already exists?
I believe the problem you'll run into is that if a message is published to a topic that doesn't exist, it is immediately dropped. So, it won't hang around and wait for a subscription to be created; it'll just disappear.
However, gcloud-node does have methods that will create a topic if necessary:
var topic = pubsub.topic('topic-that-maybe-exists');
topic.get({ autoCreate: true }, function(err, topic) {
  // topic.publish(...
});
In fact, almost all gcloud-node objects have a get method that works the same way as above, e.g. a Pub/Sub subscription, a Storage bucket, or a BigQuery dataset.
Here's a link to the topic.get() method in the docs: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.37.0/pubsub/topic?method=get
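For instance, the same pattern for a subscription (a sketch with illustrative names, based on the get method described above):

// Same get({ autoCreate: true }) pattern applied to a subscription.
var subscription = topic.subscription('subscription-that-maybe-exists');
subscription.get({ autoCreate: true }, function(err, subscription) {
  // subscription is ready to use here
});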
I ran into this recently, and the accepted answer runs you into HTTP 429 errors. topic.get is an administrative function which has a significantly lower rate limit than normal functions. You should only call it when necessary, e.g. on error code 404 during publish (topic doesn't exist), something like so:
topic.publish(payload, (err) => {
  if (err && err.code === 404) {
    topic.get({ autoCreate: true }, (err, topic) => {
      topic.publish(payload);
    });
  }
});
Personally, I use this one:
const topic = pubsub.topic('topic-that-maybe-exists');
const [exists] = await topic.exists();
if (!exists) {
  await topic.create();
}
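One caveat with exists() followed by create(): the two calls are not atomic, so a concurrent creator can still produce an ALREADY_EXISTS error. A hedged sketch that tolerates the race (assuming a recent @google-cloud/pubsub client):

const topic = pubsub.topic('topic-that-maybe-exists');
try {
  await topic.create();
} catch (err) {
  // gRPC status 6 = ALREADY_EXISTS; another client won the race, which is fine.
  if (err.code !== 6) throw err;
}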
Are there any special hoops one has to jump through when modifying user objects in Meteor? I have no problem changing other collections but the users are strangely and persistently resistant to the many suggestions I have found.
I can see that there are some user attributes, such as profile, that are published and presumably quite easy to change. I need more control over the access, so just bunging my data into user.profile won't do. At the moment I'm trying to give users a grant table, so that for example I can write:
var user = Meteor.users.findOne();
var may_eat_popcorn = user.grants.popcorn;
This works:
$ meteor shell
// First check that the user is not allowed to eat popcorn:
> Meteor.users.findOne({_id:"iCTnpqwCR6jj9xxxx"});
....
grants: { popcorn: false } }
// Give the non-gender specific entity access to popcorn:
> Meteor.users.update({_id:"iCTnpqwCR6jj9xxxx"},{$set:{"grants.popcorn":true}}, function(err,res){console.log("grant:",err,res);});
> Meteor.users.findOne({_id:"iCTnpqwCR6jj9xxxx"});
....
grants: { popcorn: true } }
// Hooray.
This doesn't, even though equivalent code works fine with other collections:
Meteor.methods({
  User_grant_popcorn: function(userId, granted) {
    // authentication. Then:
    var grants = {"grants.popcorn": granted};
    console.log(userId, grants);
    Meteor.users.update({_id: userId}, {$set: grants}, function(err, res) {
      console.log("grant:", err, res);
    });
    // This callback prints that there is no error, yet the database doesn't change on the server.
  }
});
// On the client the admin picks the target user and sets their degree of pop:
Meteor.call('User_grant_popcorn', user._id, false);
Do you know how user is different? More importantly, how can I debug issues like this? Winning means getting awesome things done fast; that's Meteor's promise. If debugging takes this long, the advantage is lost.
Many thanks, Max
Programmatically create the $set object:
Meteor.methods({
  User_grant_popcorn: function(userId, granted) {
    // authentication. Then:
    var grants = {
      "grants.popcorn": granted
    };
    // Build the modifier object up front, then pass it to update.
    var setHash = {
      $set: grants
    };
    console.log(userId, grants);
    Meteor.users.update({_id: userId}, setHash, function(err, res) {
      console.log("grant:", err, res);
    });
  }
});
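To the debugging question: a quick sanity check, assuming the method actually runs on the server, is to re-read the document right after the update and log what persisted:

// Hypothetical debug check, placed inside the method right after the update:
var updated = Meteor.users.findOne({_id: userId});
console.log("grants after update:", JSON.stringify(updated.grants));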
I have experienced some issues while setting up Slick 2.0.2: anything I do in one session is lost in the next. For example, in the first session I create the table and add three people:
// H2 in-memory database
lazy val db = Database.forURL("jdbc:h2:mem:contacts", driver = "org.h2.Driver")

// Contacts table
lazy val contacts = TableQuery[ContactsSchema]

// Initial session
db withSession { implicit session =>
  contacts.ddl.create

  // Insert sample data
  contacts += Person("John", "123 Main street", 29)
  contacts += Person("Greg", "Neither here nor there", 40)
  contacts += Person("Michael", "Continental U.S.", 34)

  // Successfully retrieves data
  contacts foreach { person =>
    println(person)
  }
}
All is well up to this point: the output lists the three people I added. But when I start a new session, the problems begin.
// New session in which the previous data is lost
db withSession { implicit session =>
  contacts foreach { person =>
    println(person)
  }
}
The above block throws an org.h2.jdbc.JdbcSQLException: Table "CONTACTS" not found exception. If I edit it as follows,
db withSession { implicit session =>
  contacts.ddl.create
  contacts foreach { person =>
    println(person)
  }
}
then all the data is erased.
I see that the Scalatra guide to Slick uses a similar configuration to mine. What am I doing wrong? How should I get the data to persist between sessions? Does the fact that I am using an in-memory database have anything to do with it?
Two choices.
Either create a session and keep it open. That can be done with a withSession scope lower on the call stack, or with db.createSession.
Or add ;DB_CLOSE_DELAY=-1 to the database URL (e.g. jdbc:h2:mem:contacts;DB_CLOSE_DELAY=-1). That keeps the database alive as long as the JVM runs.
See http://www.h2database.com/html/features.html#in_memory_databases