Chunking a large GraphQL request into smaller requests - reactjs

I'm using the Apollo client in React Native with a query whose request body has grown too large to send (it's being rejected by our CDN under a request-too-large rule). So I'm hoping to split/chunk this request into smaller requests, and I'm particularly curious whether that can be done in parallel.
I think this is better illustrated with an example, so imagine I'm building a WhatsApp challenger -- WhoseApp -- where we want users to see which of their contacts have a WhoseApp account upon signup.
For our implementation, we'll take all of the phone numbers stored on our user's device and send them to our GraphQL query GetPhoneNumberAccountStatus, which accepts an array of phone numbers and returns an Account for each number that is associated with an account (and nothing for those that aren't).
If we send the contacts as one request, we'll have a request body that looks something like this:
[
"+15558675309",
"+15558675308",
"+15558675307"
"+15558675306"
...
// 500+ numbers for some users
]
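For reference, the query document itself might look roughly like this (the operation name comes from above; the field and variable names here are just illustrative):

import { gql } from '@apollo/client';

// Hypothetical query document: takes an array of phone numbers and
// returns an Account for each number that has one.
const GET_PHONE_NUMBER_ACCOUNT_STATUS = gql`
  query GetPhoneNumberAccountStatus($phoneNumbers: [String!]!) {
    accountsByPhoneNumber(phoneNumbers: $phoneNumbers) {
      id
      phoneNumber
    }
  }
`;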
What's the correct way to split this request into multiple?
I'm curious about both:
What's the 'optimal' way to do this sequentially (e.g., send one group, wait for the response, send the next group), and
Is there a way to do this in parallel (e.g., send all groups at the beginning and then receive responses as they arrive)?
I initially figured it might be possible to use useLazyQuery and send tranches of ~50 numbers at a time, firing each group and then awaiting the responses, but this GitHub thread for the library makes it clear that that's not the correct approach.

I think this reads well: chunk the numbers, fire one query per chunk, and collect the promises (the query and variable names below are illustrative):
const chunkSize = 50;
const promises = [];
for (let i = 0; i < contacts.length; i += chunkSize) {
  // Take the next tranche of up to chunkSize numbers.
  const chunk = contacts.slice(i, i + chunkSize);
  promises.push(
    apollo.query({
      query: GET_PHONE_NUMBER_ACCOUNT_STATUS,
      variables: { phoneNumbers: chunk },
    })
  );
}
// All chunks are in flight at once; Promise.all resolves when every chunk has.
const results = await Promise.all(promises);
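If you want the sequential flavor instead (send one group, wait, send the next), the same chunking works with await inside the loop. A minimal sketch, reusing the illustrative query and variable names from above:

// Sequential variant: only one chunk in flight at a time.
const accounts = [];
for (let i = 0; i < contacts.length; i += chunkSize) {
  const chunk = contacts.slice(i, i + chunkSize);
  const { data } = await apollo.query({
    query: GET_PHONE_NUMBER_ACCOUNT_STATUS,
    variables: { phoneNumbers: chunk },
  });
  accounts.push(...data.accountsByPhoneNumber);
}

One caveat on the parallel version: Promise.all rejects as soon as any chunk fails; if you want the successful chunks regardless, Promise.allSettled is the drop-in alternative.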

Related

Can I send an alert when a message is published to a pubsub topic?

We are using Pub/Sub and a Cloud Function to process a stream of incoming data. I am setting up a dead-letter topic to handle cases where a message cannot be processed, as described in Cloud Pub/Sub > Guides > Handling message failures.
I've configured a subscription on the dead-letter topic to retain messages for 7 days; we're doing this using Terraform:
resource "google_pubsub_subscription" "dead_letter_monitoring" {
project = var.project_id
name = "var.dead_letter_sub_name
topic = google_pubsub_topic.dead_letter.name
expiration_policy { ttl = "" }
message_retention_duration = "604800s" # 7 days
retain_acked_messages = true
ack_deadline_seconds = 600
}
We've tested our Cloud Function thoroughly, so our expectation is that messages will appear on this dead-letter topic very rarely, perhaps never. Nevertheless, we're putting it in place to make sure that we catch any anomalies.
Given how rarely we expect messages to appear on the dead-letter topic, we need to set up an alert that sends an email when such a message appears. Is it possible to do this? I've looked through the alerts one can create at https://console.cloud.google.com/monitoring/alerting/policies/create, but I didn't see anything that could accomplish this.
I know that I could write a cloud function to consume a message from the subscription and act upon it accordingly, but I'd rather not have to do that; a monitoring alert feels like a much more elegant way of achieving this.
Is this possible?
Yes, you can use Cloud Monitoring for that. Create a new alerting policy and configure it as follows:
Select the Pub/Sub Topic resource type and the published-message count metric. Observe the value every minute and count it (the aligner, under the advanced options). Then, in the condition, raise the alert when the most recent value is above 0.
To restrict this to only your topic, add a filter on topic_id with your topic's name.
Then configure the alert to send an email. It should work!
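If you'd rather keep this in code next to your Terraform, the same policy can be created with the Cloud Monitoring client library. A minimal TypeScript sketch, assuming the @google-cloud/monitoring package and a pre-created email notification channel; the metric type and topic name here are assumptions, so verify the exact "published messages" metric for your project in Metrics Explorer:

import { AlertPolicyServiceClient } from '@google-cloud/monitoring';

// Sketch: create the alerting policy described above programmatically.
async function createDeadLetterAlert(projectId: string, emailChannel: string) {
  const client = new AlertPolicyServiceClient();
  const [policy] = await client.createAlertPolicy({
    name: client.projectPath(projectId),
    alertPolicy: {
      displayName: 'Message published to dead-letter topic',
      combiner: 'OR',
      conditions: [
        {
          displayName: 'Published messages > 0',
          conditionThreshold: {
            // Assumed metric type and topic_id -- check Metrics Explorer.
            filter:
              'metric.type="pubsub.googleapis.com/topic/send_message_operation_count" ' +
              'AND resource.type="pubsub_topic" ' +
              'AND resource.label.topic_id="dead-letter"',
            comparison: 'COMPARISON_GT',
            thresholdValue: 0,
            duration: { seconds: 0 },             // alert on the most recent value
            aggregations: [
              {
                alignmentPeriod: { seconds: 60 }, // observe every minute
                perSeriesAligner: 'ALIGN_COUNT',  // count the published messages
              },
            ],
          },
        },
      ],
      // Full resource name of an email notification channel,
      // e.g. "projects/<project>/notificationChannels/<id>".
      notificationChannels: [emailChannel],
    },
  });
  console.log(`Created policy ${policy.name}`);
}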

Are JWT-signed prices secure enough for PayPal transactions client-side?

I'm using NextJS with Firebase, and PayPal is 100x easier to implement client-side. The only worry I have is somebody potentially manipulating the amount before the token is sent to PayPal. If I sign the prices as a JWT with a secret key, is that secure enough (within reason) to dissuade people from attempting to manipulate them?
I thought about writing a serverless function, but it would still have to pass the values to the client to finish the transaction (the prices are baked into a statically-generated site). I'm not sure whether PayPal's IPN listener is even still a thing, or the NVP (name-value pairs) API either. My options as I see them:
Verify the prices and do payment server-side (way more complex)
Sign the prices with a secret key at build time, and reference those prices when sending to PayPal (sketched after the pseudo-code below).
I should also mention that I'm completely open to ideas, and in no way think that these are the 'best' as it were.
Pseudo-code:
cart = {
  products: [ obj1, obj2, obj3 ], // where obj = { price, sale_price, etc. }
  total: cart.products.length
}
Create an order with PayPal using the cart array, mapping over its values:
cart.products.map( prod => prod.sale_price || prod.price )
Someone could easily modify the object to make the price '.01' instead of '99.99' (for example).
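To make option 2 concrete, here is a minimal sketch assuming the jsonwebtoken package; the payload shape, PRICE_SECRET variable, and function names are illustrative:

import jwt from 'jsonwebtoken';

// Build time (server side): bake a signed price table into the page.
// PRICE_SECRET must never be exposed to the client bundle.
export function signPrices(products: { id: string; price: number }[]): string {
  return jwt.sign({ products }, process.env.PRICE_SECRET!, { algorithm: 'HS256' });
}

// Order time (server side, e.g. a NextJS API route): verify the token the
// client sends back and use these prices, never client-supplied values.
export function verifyPrices(token: string) {
  const { products } = jwt.verify(token, process.env.PRICE_SECRET!) as {
    products: { id: string; price: number }[];
  };
  return products;
}

The catch is that the signature only helps if verification happens somewhere the user can't tamper with, i.e. on a server (an API route or serverless function); a JWT checked in the browser can be bypassed like any other client-side value.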

Correct place to audit query in Hot Chocolate graphql

I am wondering whether I should audit user queries in HttpRequestInterceptor or DiagnosticEventListener in Hot Chocolate v11. The problem with the latter is that if the audit fails to write to disk/DB, the user will "get away" with the query.
Ideally, if the audit fails, no operation should proceed, so in theory I should use HttpRequestInterceptor.
But how do I get IRequestContext from IRequestExecutor or IQueryRequestBuilder? I tried googling, but the documentation is limited.
Neither :)
The HttpRequestInterceptor is meant for enriching the GraphQL request with context data.
The DiagnosticEventListener, on the other hand, is meant for logging or other instrumentations.
If you want to write an audit log, you should instead go for a request middleware. A request middleware can be added like the following:
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseRequest(next => async context =>
    {
        // your audit logic goes here; call next to continue the pipeline
        await next(context);
    })
    .UseDefaultPipeline();
The tricky part here is to inspect the request at the right time. Instead of appending to the default pipeline, you can define your own pipeline like the following:
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseInstrumentations()
    .UseExceptions()
    .UseTimeout()
    .UseDocumentCache()
    .UseDocumentParser()
    .UseDocumentValidation()
    .UseRequest(next => async context =>
    {
        // write your audit log here and invoke next if the user is allowed to execute
        if (isNotAllowed)
        {
            // if the user is not allowed to proceed, create an error result.
            context.Result = QueryResultBuilder.CreateError(
                ErrorBuilder.New()
                    .SetMessage("Something is broken")
                    .SetCode("Some Error Code")
                    .Build());
        }
        else
        {
            await next(context);
        }
    })
    .UseOperationCache()
    .UseOperationResolver()
    .UseOperationVariableCoercion()
    .UseOperationExecution();
The pipeline is basically the default pipeline, but with your middleware added right after document validation. At this point, your GraphQL request is parsed and validated. This means we know it is a valid GraphQL request that can be processed, and we can use the context.Document property, which contains the parsed GraphQL request.
To serialize the document to a formatted string, use context.Document.ToString(indented: true).
The good thing is that in the middleware we are in an async context, meaning you can easily access a database and so on. In contrast, the diagnostic events are sync and not meant to carry a heavy workload.
The middleware can also be wrapped into a class instead of a delegate.
If you need more help, join us on Slack.
Click on "community support" to join the Slack channel:
https://github.com/ChilliCream/hotchocolate/issues/new/choose

Gatling test involving multiple clients

Is it possible to implement a Gatling test with multiple clients? Example: the first client gets a key, which is communicated to the second client. That key is then used until the second client is finished, at which point the first client proceeds and checks the result.
Cookies are the reason I'm having trouble implementing this as a single virtual user posing as two separate clients: the clients must have different sets of cookies.
Alternatively: can I hold and reinsert the cookies for the first client?
I was able to circumvent the problem by storing and restoring the whole cookie jar, like this:
val builder = scenario("Thingies")
  // ... do some first client stuff
  .exec(session => {
    // stash the first client's cookie jar under a key of our own
    session.set("first-session-cookies",
      session("gatling.http.cookies").as[CookieJar])
  })
  // ... do some second client stuff
  .exec(session => {
    // restore the first client's cookie jar before continuing
    session.set("gatling.http.cookies",
      session("first-session-cookies").as[CookieJar])
  })
  // ... back to first client stuff
Works like a charm :-)

Why does WebApp2 auth.get_user_by_session() change the token?

I am using WebApp2 with auth for user sessions. My client will occasionally make nearly simultaneous requests to the server. The first one will make a request with session data that looks like this:
{
'cache_ts': 1408106895,
'token': u'GXpsaVQh5ZWtqxJMUBpGTr',
'user_id': 5690665774088192L,
'remember': 1,
'token_ts': 1408034938
}
Then after a call to auth.get_user_by_session(), the session comes back like this:
{
'cache_ts': 1408124980,
'token': u'0IVduczdGR5PkrMqNhBvzW',
'user_id': 5690665774088192L,
'remember': 1,
'token_ts': 1408124980
}
As you can see the token has been changed, and the timestamps updated.
Nearly simultaneously, another request is made that contains the same initial session data:
{
'cache_ts': 1408106895,
'token': u'GXpsaVQh5ZWtqxJMUBpGTr',
'user_id': 5690665774088192L,
'remember': 1,
'token_ts': 1408034938
}
However, that token is now invalid, so the session data is set to None. This wipes the user's session and causes lots of problems. Is there some setting I should be using to extend the life of the UserToken? Is there a more appropriate method than get_user_by_session()? I would imagine that nearly simultaneous requests with the same session data shouldn't cause enormous issues. The ideal situation would be that if auth received invalid or expired tokens, it would just ignore them and throw an error.
Update 1
I hoped it was something simple, like passing False to get_user_by_session(). That, of course, killed the session immediately.
Update 2
I've found that I only really need the user_id field, and that comes for free with the cookie data. Implementing that reduces the frequency of the issue. However, the problem isn't actually fixed, and I'd love some input from anyone familiar with this library.
This is due to the token_new_age parameter, which defaults to one day, so every 24 hours the token will change.
This is a security measure: if someone hijacks that session, it will only work for up to 24 hours.
The 'token_max_age' parameter will also delete the token when its time is consumed.
