I have created a Gatling project that contains many different data sets. The reason for this is that I want to include randomness and uniqueness in every SOAP request I throw at my service. Example: one data set has id numbers and another data set has colors. I want to inject these values into the request that I send to my web service.
When I start up Gatling, it generates requests with random values (as expected) but then reuses the same id and color combination. If possible, I would like to send a different request every time, for example id: 001 and color: blue, then a request with id: 001 and color: red. Right now it just resends id: 001 and color: blue.
I have id.scala and color.scala files with hundreds of lines and an XML request file, so the combinations should be endless.
val id = jsonFile("data/id.json").circular
val color = jsonFile("data/color.json").circular

def updateIdWithColor() = {
  exec(http("Add Color")
    .post("/")
    .body(ElFileBody("requests/addcolor.txt"))
    .check(status.is(200)))
}
val scn = scenario("Load Testing")
  .feed(id)
  .feed(color)
  .forever() {
    exec(updateIdWithColor())
  }
setUp(
  scn.inject(
    nothingFor(5 seconds),
    atOnceUsers(5),
    rampUsers(userCount) during (rampUpUsersOverSeconds seconds)
  ).protocols(httpConf)
).maxDuration(testDuration minutes)
//.assertions(
// global.responseTime.max.lt(2000),
// global.successfulRequests.percent.gt(99.9),
//)
}
Is there a way I can refresh the id-to-color combination so that I send a different request every time? If I run for 15 minutes, it just sends the SOAP XML request with 001 -> blue for 15 minutes.
Please let me know if I can provide anything else. I am confident there is a method I can use (potentially) in the code blocks I provided; I am just not aware of it. Thanks in advance!
You have to move your feeds inside your forever loop:
val scn = scenario("Load Testing")
  .forever() {
    feed(id)
      .feed(color)
      .exec(updateIdWithColor())
  }
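For reference, here is a minimal sketch of how the whole simulation might look with the feeds moved inside the loop. The class name, base URL, and the userCount, rampUpUsersOverSeconds and testDuration values below are placeholders I am assuming, not taken from the original code:

import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class AddColorSimulation extends Simulation {

  // hypothetical protocol and knobs; substitute your own values
  val httpConf = http.baseUrl("http://localhost:8080")
  val userCount = 50
  val rampUpUsersOverSeconds = 30
  val testDuration = 15

  val id = jsonFile("data/id.json").circular
  val color = jsonFile("data/color.json").circular

  def updateIdWithColor() = {
    exec(http("Add Color")
      .post("/")
      .body(ElFileBody("requests/addcolor.txt"))
      .check(status.is(200)))
  }

  // feeding inside the loop pulls a fresh record from both feeders on every
  // iteration, so each request gets a new id/color combination
  val scn = scenario("Load Testing")
    .forever {
      feed(id)
        .feed(color)
        .exec(updateIdWithColor())
    }

  setUp(
    scn.inject(
      nothingFor(5.seconds),
      atOnceUsers(5),
      rampUsers(userCount).during(rampUpUsersOverSeconds.seconds)
    ).protocols(httpConf)
  ).maxDuration(testDuration.minutes)
}

Because the feed calls now run on every loop iteration, each virtual user draws the next record from both circular feeders before every request instead of keeping the single record it was fed at the start of the scenario.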
I have a test scenario to load test an API request with a different employee id (query param) and the corresponding payload for each request. I have all the URIs handy to test, and each POST request should execute only once.

So I assume scn.inject(atOnceUsers(1)) should be just 1 in my case, as a user is just a single execution of a scenario from start to finish. Can someone confirm if my understanding is right? Here I want all three POST requests below to execute only once, simultaneously.

I am planning to use Scala 3 to test this (hundreds of POST requests simultaneously, with a different employee id each time, submitting the POST request only once for each employee id):
POST: http://localhost:8080/orders/00000/product/item?employeeId=1234
payload A
POST: http://localhost:8080/orders/00000/product/item?employeeId=5678
payload B
POST: http://localhost:8080/orders/00000/product/item?employeeId=8352
payload C
Once the first round of testing is done, I would like to execute the same POST request URIs (hundreds of POST URIs like the ones above) with a different payload in round 2. In round 2, the URI is still going to be the same, with the same query param (employee id), but the payload will be different from what I used in round 1.

Again, I would like to run each of the requests below only once, and run them all simultaneously (hundreds of POST requests at the same time):
POST: http://localhost:8080/orders/00000/product/item?employeeId=1234
payload D
POST: http://localhost:8080/orders/00000/product/item?employeeId=5678
payload E
POST: http://localhost:8080/orders/00000/product/item?employeeId=8352
payload F
I would continue the same process for round 3 and round 4, with the same URIs but different payloads, again executing all POST requests simultaneously but executing each request only once.
Can you please help me with the approach to get this working?
Can you please advise how I should plan to store all these 100+ payloads in the feeder file for each round? Should I have a single feeder file with 100+ POST request bodies, and should I also store all 100+ URIs with the query param in the feeder file? Note: for each round, the payload will be different.

For example: Round 1 - 100+ POST URIs --> and the corresponding 100+ request bodies
Round 2 - same 100+ POST URIs as round 1 --> but this time a different set of 100+ request bodies (a corresponding payload for each URI)
Round 3 - same 100+ POST URIs as round 1 --> but this time again a different set of 100+ request bodies (a corresponding payload for each URI)
Each URI has its own request payload for each round.
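Not a definitive answer, but a minimal sketch of the setup described above in Gatling's Scala DSL might look as follows: one CSV feeder per round, where each record pairs an employeeId with the path to its payload file, a queue feeder strategy so each record is handed out exactly once, and atOnceUsers set to the number of records so that all requests fire at the same time. The file names, column names, record count and base URL are illustrative assumptions, not details from the question:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class RoundOneSimulation extends Simulation {

  // hypothetical base URL; adjust to your environment
  val httpProtocol = http.baseUrl("http://localhost:8080")

  // round1.csv (illustrative layout):
  //   employeeId,payloadFile
  //   1234,payloads/round1/payloadA.json
  //   5678,payloads/round1/payloadB.json
  //   8352,payloads/round1/payloadC.json
  // the "queue" strategy hands out each record exactly once
  val round1Feeder = csv("data/round1.csv").queue

  val scn = scenario("Round 1")
    .feed(round1Feeder)
    .exec(
      http("POST order item")
        .post("/orders/00000/product/item")
        .queryParam("employeeId", "#{employeeId}")  // Gatling 3.7+ EL syntax
        .body(RawFileBody("#{payloadFile}"))        // payload path comes from the feeder record
        .check(status.is(200))
    )

  setUp(
    // one virtual user per feeder record, all started at once,
    // so every employeeId is posted exactly once and concurrently
    scn.inject(atOnceUsers(100))
  ).protocols(httpProtocol)
}

Round 2 could then reuse the same URIs and employeeId column with a second feeder file that points at the round-2 payload files, run either as a separate simulation execution or, in newer Gatling versions, chained after round 1 with andThen.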
I'm using the Apollo React Native client with a query whose request body has become too large to use (it's being rejected by our CDN for a request-too-large rule). So I'm hoping to split/chunk this request into smaller requests, and I'm particularly curious whether it's possible to do this in parallel.

I think this is better illustrated with an example, so imagine I'm building a WhatsApp challenger -- WhoseApp -- for which we want users to be able to see which of their contacts have a WhoseApp account upon signup.

For our implementation, we'll take all of the phone numbers stored on our user's device and send them to our GraphQL query GetPhoneNumberAccountStatus, which accepts an array of phone numbers and returns an Account for each number associated with an account (and nothing for those that are not).
If we send the contacts as one request, we'll have a request body that looks something like this:
[
  "+15558675309",
  "+15558675308",
  "+15558675307",
  "+15558675306",
  ...
  // 500+ numbers for some users
]
What's the correct way to split this request into multiple?
I'm curious about both:

What's the 'optimal' way to do this sequentially (e.g., send one group, wait for the response, send the next group), or
Is there a way to do this in parallel (e.g., send all groups at the beginning and then receive the responses as they arrive)?

I initially figured it might be possible to use useLazyQuery and send tranches of ~50 numbers at a time, firing each group and then awaiting the responses, but this GitHub thread for the library makes it clear that that's not the correct approach.
I think it's readable:

const promises = [];
const chunkSize = 50;
for (let i = 0; i < contacts.length; i += chunkSize) {
  // take the next slice of up to chunkSize contacts and query for just that slice
  const chunk = contacts.slice(i, i + chunkSize);
  const promise = apollo.query({...dataHere}); // pass `chunk` in the query variables here
  promises.push(promise);
}
await Promise.all(promises);
I am new to Salesforce Marketing Cloud and Journey Builder.
https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/creating-activities.html
We are building a Journey Builder custom activity that uses a data extension as the source; when the journey is invoked, it fetches a row and sends that data to our company's internal endpoint. The team got that part working. We are using postmonger.js.
I have a couple of questions:
Is there a way to retrieve the data from the data extension in bulk so that we can call our company's internal bulk endpoint? Calling the endpoint for each record in the data extension would not be efficient enough for our use case and won't work.

When the journey is invoked, an entry from the data extension is retrieved, and that data is sent to our internal endpoint. Is there a mechanism to mark this entry as already sent, so that the next time the journey runs it won't process the entries that have already been sent?

Here is a snippet of our customActivity.js, which populates one record (I changed some variable names). Is there a way to populate multiple records, so that when "execute" is called it passes a list of payloads as input to our internal endpoint?
function save() {
  try {
    var TemplateNameValue = $('#TemplateName').val();
    var TemplateIDValue = $('#TemplateID').val();
    let auth = "{{Contact.Attribute.Authorization.Value}}";

    payload['arguments'].execute.inArguments = [{
      "vendorTemplateId": TemplateIDValue,
      "field1": "{{Contact.Attribute.DD.field1}}",
      "eventType": TemplateNameValue,
      "field2": "{{Contact.Attribute.DD.field2}}",
      "field3": "{{Contact.Attribute.DD.field3}}",
      "field4": "{{Contact.Attribute.DD.field4}}",
      "field5": "{{Contact.Attribute.DD.field5}}",
      "field6": "{{Contact.Attribute.DD.field6}}",
      "field7": "{{Contact.Attribute.DD.field7}}",
      "messageMetadata": {}
    }];
    payload['arguments'].execute.headers = `{"Authorization":"${auth}"}`;
    payload['configurationArguments'].stop.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].validate.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].publish.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].save.headers = `{"Authorization":"default"}`;
    payload['metaData'].isConfigured = true;

    console.log(payload);
    connection.trigger('updateActivity', payload);
  } catch (err) {
    document.getElementById("error").style.display = "block";
    document.getElementById("error").innerHTML = err;
  }
  console.log("Template Name: " + JSON.stringify(TemplateNameValue));
  console.log("Template ID: " + JSON.stringify(TemplateIDValue));
}
});
Any advice or ideas are highly appreciated!
Thank you.
Grace
Firstly, I implore you not to proceed with the design pattern of fetching data from Marketing Cloud for each subscriber that gets sent through the custom activity. For argument's sake, I'll list two big issues.

You have no way of limiting the configuration of data extension columns or column names in SFMC (Salesforce Marketing Cloud). If a malicious user, or simple human error, were to delete a column or change a column name, your service would stop receiving that value.

Secondly, Marketing Cloud has two sets of API limitations, yearly and minute by minute. Depending on your licensing, you could run into the yearly limit.

The problem with the per-minute limitation (2,500 calls for REST and 2,000 for SOAP) is that each usage of the custom activity in Journey Builder would multiply the number of invocations per minute. Hitting this limit would cause issues for incremental data flows into SFMC.

I'd also suggest not retrieving any data from Marketing Cloud when a customer gets sent through a custom activity. Users should pick, in their segmentation, which corresponding rows/data should be sent to the custom activity.

The eventDefinitionKey can be picked up from postmonger by listening for requestedTriggerEventDefinition and reading it from the eventDefinitionModel. The eventDefinitionKey can then be used to programmatically populate SFMC's POST call with data from the Journey Data model, thus allowing marketers to select what data is sent with the subscriber.

The following code shows how it would work in your customActivity.js:
connection.on(
'requestedTriggerEventDefinition',
function (eventDefinitionModel) {
var eventKey = eventDefinitionModel['eventDefinitionKey'];
save(eventKey);
}
);
function save(eventKey) {
// subscriberKey fetched directly from Contact model
// columnName is populated from the Journey Data model
var params = {
subscriberKey: '{{Contact.key}}',
columnName: '{{Event.' + eventKey + '.columnName}}',
};
payload['arguments'].execute.inArguments = [params];
}
Hello, I have tweet IDs that I saved to a database earlier. But I saw that I could not save the created time properly (it is saved like 00:00:00). Therefore I wanted to update my tweets by tweet ID using the following code.
MyConnectionBuilder myConnection = new MyConnectionBuilder();
Twitter twitter = new TwitterFactory(myConnection.configuration.build()).getInstance();
Status status = twitter.showStatus(Long.parseLong(tweetId));
But it takes too much time to get the tweets. Is there a rate limit for this? If there is a rate limit, how can I make it faster?
Updating every single tweet via showStatus wastes your "credits" for a given timeframe (rate-limit).
For updating multiple tweets, you should use lookup with a maximum of 100 ids per request. This call will use the /statuses/lookup endpoint.
Rate-Limit and endpoint documentation can be found here
Code snippet for it:
Twitter twitter = twitterFactory.getInstance();
// ids is a Long[] holding at most 100 tweet ids per lookup call
ResponseList<Status> responseList = twitter.lookup(ArrayUtils.toPrimitive(ids));
if (responseList != null) {
    for (Status status : responseList) {
        // do what you need to do here
    }
}
I am using the deferred task queue library with GAE. Every day I need to send a piece of text to all users connected to a certain page in my app. My app has multiple pages connected, so for each page I want to go over all of its users and send them a daily message. I am using a cursor to iterate over the Users table in batches of 800. If there are more than 800 users, I want to remember where the cursor left off and start another task for the remaining users.

I just want to make sure that with my algorithm I am going to send every user exactly one message. I want to make sure I won't miss any users, and that no user will receive the same message twice.
Does this look like the proper algorithm to handle my situation?
def send_news(page_cursor=None, page_batch_size=1,
              user_cursor=None, user_batch_size=800):
    p_query = PageProfile.query(PageProfile.subscribed == True)
    all_pages, next_page_cursor, page_more = p_query.fetch_page(page_batch_size,
                                                                start_cursor=page_cursor)
    for page in all_pages:
        if page.page_news_url and page.subscribed:
            query = User.query(User.subscribed == True, User.page_id == page.page_id)
            all_users, next_user_cursor, user_more = query.fetch_page(user_batch_size,
                                                                      start_cursor=user_cursor)
            for user in all_users:
                user.sendNews()

            # If there are more users on this page, remember the cursor
            # and get the next 800 users on this same page
            if user_more:
                deferred.defer(send_news, page_cursor=page_cursor, user_cursor=next_user_cursor)

    # If there are more pages left, use another deferred queue to
    # send the daily news to users in that page
    if page_more:
        deferred.defer(send_news, page_cursor=next_page_cursor)

    return "OK"
You could wrap your user.sendNews() call in another deferred task with a specific name, which will ensure that it's created only once.
import time

from google.appengine.api import taskqueue
from google.appengine.ext import deferred

interval = int(time.time()) / (60 * 60 * 24)

args = ('positional_arguments_for_object',)
kwargs = {'param': 'value'}

task_name = '_'.join([
    'user_name',
    'page_name',
    str(interval)
])

# with the interval present in the name we are sure that the task name
# for the same page and same user will stay the same for 24 hours
try:
    deferred.defer(obj, _name=task_name, _queue='my-queue', _url='/_ah/queue/deferred', *args, **kwargs)
except taskqueue.TaskAlreadyExistsError:
    # a task with this name already exists, likely wasn't executed yet
    pass
except taskqueue.TombstonedTaskError:
    # a task with this name was created not long ago and the name isn't available to use;
    # this should reset once a week or so
    pass
Note that, as far as I remember, App Engine does not guarantee that a task will be executed only once; in some edge cases it could be executed twice or more, so ideally tasks should be idempotent. If such edge cases are important for you, you could transactionally read/write a flag entity in the datastore for each task, and before executing the task check whether that entity already exists and cancel the execution if it does.