Firebase GET request orderBy 400 Bad Request - reactjs

For a GET request, I am trying to use orderBy as shown below, but I always get a 400 Bad Request. Multiple users can each have multiple blog posts, each with a unique id like b1 in the screenshot below. The long sequence of characters under blogs is the uid of a user; each user has their own uid.
https://assignment-c3557-default-rtdb.asia-southeast1.firebasedatabase.app/blogs.json?orderBy="createdAt"
I followed the documentation here
https://firebase.google.com/docs/database/rest/retrieve-data
All I am doing is issuing a simple GET request in React as follows:
const resp = await fetch(`https://assignment-c3557-default-rtdb.asia-southeast1.firebasedatabase.app/blogs.json?orderBy="createdAt"`)
const data = await resp.json()
if(!resp.ok) { ... }
Below is a single database entry for schema reference

As I said in my previous comment, this URL is invalid:
https://assignment-c3557-default-rtdb.asia-southeast1.firebasedatabase.app/blogs.json/orderBy="createdAt"
                                                                                     ^
The query portion of a URL starts with a ? character.
https://assignment-c3557-default-rtdb.asia-southeast1.firebasedatabase.app/blogs.json?orderBy="createdAt"
                                                                                     ^
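For reference, a minimal corrected request in fetch might look like the sketch below (URL and field name taken from the question). Note that, depending on your database rules, the REST API can also return 400 for orderBy if no ".indexOn": "createdAt" rule is defined under /blogs.
const url = 'https://assignment-c3557-default-rtdb.asia-southeast1.firebasedatabase.app/blogs.json?orderBy="createdAt"'
const resp = await fetch(url)
const data = await resp.json()
if (!resp.ok) {
  // Firebase returns { error: "..." } describing why the request was rejected
  throw new Error(data.error)
}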

Firebase Realtime Database - Adding query to Axios GET request?
I followed the resolution of the similar issue above to solve my problem.

Related

Firestore function to listen for updates

I am a beginner in React Native and Firestore, and I am using them to build a kind of social media app. I have a weird problem (I think I structured the db the wrong way). I want to have a feed with all posts, not following-based, nothing fancy. The first time, I structured my posts in the db like this: users (collection) -> user (doc) -> thisUserPosts (collection inside the doc), but I couldn't find a way to fetch all the thisUserPosts from every user (doc) and display them properly.
So I restructured the db like this:
2 main collections, posts and users, completely separate. The users collection only contains docs of users and their data (name, age, etc). The posts collection contains their posts (name, media, desc, AND userId, where userId == the person who created the post; the userId field in a posts doc should exist in the users collection). Roughly, the structure looks like the sketch below.
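A rough sketch of the restructured data (field names from above; the document IDs are illustrative):
// users (collection)
//   someUserId (doc): { name: "...", age: 25, ... }
// posts (collection)
//   somePostId (doc): { name: "...", media: "...", desc: "...", userId: "someUserId" }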
This second approach works just fine. In the feed, I only fetch posts. But the problem arises when I try to open a post (I need this feature). I need to display the name of the user in the react-navigation header, yet I only have the details of the post and its userId, which is of no use on its own.
So I came up with a solution: add a userName field to the posts docs, next to userId, and simply display that. Now here's the catch: I need to figure out a way (in Firestore, I think) to listen for updates to the users collection docs, in case a user updates his name/username (I don't want to show the old name). I don't know if that's possible inside Firestore, or how to do it. Or is it better to find a different db structure?
TL;DR: I need a way in Firestore to listen for updates from another collection, OR a restructuring of the db.
If you are fetching posts of a single user, then you can just set a listener on that user's document.
Make sure that document has no sensitive information that must not be shared with others and is limited to the owner only.
If you are fetching posts from multiple users, then you can use the in operator:
db.collection("users").where("userID", "in", ["user_id1", "user_id2"])
.onSnapshot((snapshot) => {
console.log(snapshot.docs.map(user => user.data()))
});
If I assume you will be updating the new name in all of the user's posts, then you could set the listener on the posts documents themselves, but that won't be nice when all 30 fetched posts are from the same user: it would cost 30 reads just to pick up the same name change.
Edit:
A simple example of reading a user's posts and listening for updates to the user's name:
const userID = "my_user_id"

// fetch 30 of the user's posts once
const postsRef = firebase.firestore().collection("posts").where("userID", "==", userID).limit(30)
const postsSnapshot = await postsRef.get()
const postsData = postsSnapshot.docs.map(post => post.data())
// postsData is an array of post data objects

// listen for changes to the user's document (e.g. the username)
firebase.firestore().collection("users").doc(userID)
  .onSnapshot((doc) => {
    console.log("data: ", doc.data());
    const newUsername = doc.data().username
    const updatedPostsData = postsData.map(post => {
      return ({ ...post, username: newUsername })
    })
  });
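A rough sketch of how this could be wired into a React component, assuming the usual useState/useEffect hooks, a globally initialized firebase as in the snippet above, and an initialPosts array fetched the same way (the hook name and setPosts are hypothetical):
import { useEffect, useState } from "react";

// hypothetical hook: keeps posts in state and patches the username whenever the user doc changes
function usePostsWithLiveUsername(userID, initialPosts) {
  const [posts, setPosts] = useState(initialPosts);

  useEffect(() => {
    const unsubscribe = firebase.firestore().collection("users").doc(userID)
      .onSnapshot((doc) => {
        const newUsername = doc.data().username;
        setPosts((prev) => prev.map((post) => ({ ...post, username: newUsername })));
      });
    return unsubscribe; // detach the listener on unmount
  }, [userID]);

  return posts;
}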

Correct place to audit query in Hot Chocolate graphql

I am wondering whether I should audit user queries in an HttpRequestInterceptor or in a DiagnosticEventListener for Hot Chocolate v11. The problem with the latter is that if the audit fails to write to disk/db, the user will "get away" with the query.
Ideally, if the audit fails, no operation should proceed. Therefore, in theory, I should use HttpRequestInterceptor.
But how do I get IRequestContext from IRequestExecutor or IQueryRequestBuilder? I tried googling, but documentation is limited.
Neither :)
The HttpRequestInterceptor is meant for enriching the GraphQL request with context data.
The DiagnosticEventListener, on the other hand, is meant for logging or other instrumentations.
If you want to write an audit log, you should instead go for a request middleware. A request middleware can be added like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseRequest(next => async context =>
    {
        // audit logic goes here, then pass the request on
        await next(context);
    })
    .UseDefaultPipeline();
The tricky part here is to inspect the request at the right time. Instead of appending to the default pipeline, you can define your own pipeline like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseInstrumentations()
    .UseExceptions()
    .UseTimeout()
    .UseDocumentCache()
    .UseDocumentParser()
    .UseDocumentValidation()
    .UseRequest(next => async context =>
    {
        // write your audit log here and invoke next if the user is allowed to execute
        if (isNotAllowed)
        {
            // if the user is not allowed to proceed, create an error result
            context.Result = QueryResultBuilder.CreateError(
                ErrorBuilder.New()
                    .SetMessage("Something is broken")
                    .SetCode("Some Error Code")
                    .Build());
        }
        else
        {
            await next(context);
        }
    })
    .UseOperationCache()
    .UseOperationResolver()
    .UseOperationVariableCoercion()
    .UseOperationExecution();
The pipeline is basically the default pipeline, but it adds your middleware right after the document validation. At this point, your GraphQL request is parsed and validated, so we know it is a valid GraphQL request that can be processed, and we can use the context.Document property, which contains the parsed GraphQL request.
In order to serialize the document to a formatted string, use context.Document.ToString(indented: true).
The good thing is that in the middleware we are in an async context, meaning you can easily access a database and so on. In contrast, the DiagnosticEvents are sync and not meant to carry a heavy workload.
The middleware can also be wrapped into a class instead of a delegate.
If you need more help, join us on Slack.
Click on "community support" to join the Slack channel:
https://github.com/ChilliCream/hotchocolate/issues/new/choose

Discord REST API: get multiple users

I'm trying to query multiple users with a single request using the Discord REST API with Node.js and the Unirest module.
unirest.get(`https://discord.com/api/v8/users/${user.id}`)
  .headers({ Authorization: `Bot ${botSecret}` })
  .then(response => {
    console.log(response);
  })
  .catch(error => {
    console.error(error);
  })
In order to get multiple users, I'm passing a list of ids in the user.id field in the following way:
"id1,id2,id3"
https://discord.com/api/v8/users/id1,id2,id3
However, I get a 'Bad Request' response.
What is the correct way to query multiple users with a single request?
I had a quick look around the Discord API and the discord.js code, and it seems these requests are made individually.
Discord's API doesn't seem to support getting multiple users at once. discord.js takes an array of user ids and iterates through them, checking its cache and making sure the requests don't violate the rate limit, etc.
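A minimal sketch of doing the same with the Unirest client from the question, fetching the users one by one (assumes botSecret is defined; rate-limit handling is omitted):
const unirest = require('unirest');

// fetch a single user by id
const getUser = (id) =>
  unirest
    .get(`https://discord.com/api/v8/users/${id}`)
    .headers({ Authorization: `Bot ${botSecret}` })
    .then((response) => response.body);

// issue the requests individually and collect the results
Promise.all(['id1', 'id2', 'id3'].map(getUser))
  .then((users) => console.log(users))
  .catch((error) => console.error(error));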

aws - should I integrate s3 upload and store s3 url in dynamodb in one single request?

I have a table called "Banner".
I have a banner upload function in my UI.
AWS API Gateway is used.
2 resources are created in API Gateway: /s3 and /banner.
I am using 2 separate requests to do this.
1. POST request, resource: /s3
This request runs the lambda function below to upload the banner image to S3.
UploadBannerToS3.js
...
const s3 = new AWS.S3();
...
const data = await s3.upload(params).promise();
...
This returns an S3 URL pointing to the banner image.
2. POST request, resource: /banner
This request takes the above S3 URL as a parameter and stores the banner information, including the URL, in DynamoDB.
The lambda function looks like this:
CreateBanner.js
...
const { url } = JSON.parse(event.body);
const params = {
  TableName: "Banner",
  Item: {
    id: id,
    url: url,
    createdAt: date,
  }
};
...
const data = await documentClient.put(params).promise();
...
My frontend code (I am using React) looks like this:
handleUploadBanner = async (banner) => {
  const image = await toBase64(banner);
  const payload = { "banner": image }
  try {
    // request 1
    const uploadResponse_S3 = await APIHandler.uploadBannerToS3(payload)
    const s3Url = uploadResponse_S3.data.Location;
    // request 2
    const response = await APIHandler.createBanners({
      url: s3Url,
    })
    console.log(response)
  } catch (error) {
    console.log(error)
  }
}
If only request 1 is sent successfully, while request 2 fails to return a successful status, would it be a mess for development?
Should I combine these 2 requests into one single lambda function?
What is the best practice here?
If the end-user (front-end) wants a "synchronized" response from the API, then we need to design the 2 APIs as synchronous ones. But that doesn't mean we need to merge them.
If the end-user only wants the first API's response and doesn't care about the second one, we can design the second API as asynchronous and use a pipeline like the following (a sketch of step a is shown after the list):
a. Lambda 1 -> performs its logic -> sends an SNS message and returns to the end-user
b. SNS -> SQS -> Lambda 2
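A rough sketch of step a, assuming the AWS SDK for JavaScript (v2) and a hypothetical BANNER_TOPIC_ARN environment variable; this would run inside Lambda 1's async handler after s3.upload(...) has resolved to data:
const AWS = require('aws-sdk');
const sns = new AWS.SNS();

// publish the uploaded object's URL so Lambda 2 can pick it up via SQS
await sns.publish({
  TopicArn: process.env.BANNER_TOPIC_ARN, // hypothetical topic
  Message: JSON.stringify({ url: data.Location }),
}).promise();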
The more we design the system around "single responsibility", the better it is for development and maintenance.
Thanks,
If only request 1 is successfully sent, while request 2 fail to return
successful status, would it be a mess for development?
Not necessarily. You could come up with a retry function in the front-end for simplicity. But it depends, because "mess" is a very abstract concept. What is the requirement? Is it of vital importance that the requests never fail? What do you want to do when they fail?
Should I combine these 2 request in one single lambda function to
handle it?
Either way, it is better to keep them small and short; that is how you work with AWS Lambdas.
But I think you want more control over the outcome, with a better fail-over approach.
SQS is one way of doing it, but it is complex for this case. I would configure a trigger from S3 to Lambda; that way you will only write to the database when the image has been uploaded successfully (see the sketch after the summary).
So in summary:
Call Lambda 1 -> upload to S3. Successful?
S3 triggers Lambda 2
Lambda 2 saves to DB
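A minimal sketch of such an S3-triggered Lambda 2, assuming an ObjectCreated trigger on the bucket and the Banner table from the question (the id and URL format here are illustrative):
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

// hypothetical Lambda 2: invoked by S3 ObjectCreated events, writes the object URL to DynamoDB
exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const url = `https://${bucket}.s3.amazonaws.com/${key}`;
    await documentClient.put({
      TableName: 'Banner',
      Item: {
        id: key, // illustrative: derive your own id
        url: url,
        createdAt: new Date().toISOString(),
      },
    }).promise();
  }
};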
I would prefer to process both in one Lambda, doing the S3 upload and the DB write together. It's simpler, and arguably more reliable, since the failure handling is abstracted in one place.
I mean, the app client reads the file item from DynamoDB, not from S3 directly. So whether the process succeeds or fails, we don't need to worry about the app getting a wrong link. Consider the scenarios:
upload succeeds, db write succeeds: the app client gets the correct link
upload succeeds, db write fails: the app client never gets the link (no item)
upload fails, db write fails: same as point #2

Gmail API: messages.list suddenly has no messages key despite a nextPageToken in the prior iteration

I have been interacting with the Gmail API since last year using the examples at https://developers.google.com/gmail/api/v1/reference/users/messages/list#try-it, but now these examples are failing: it seems there are more messages, but the next iteration comes back empty.
The problem is in this part of the code:
while 'nextPageToken' in response:
    page_token = response['nextPageToken']
    response = service.users().messages().list(userId=user_id, q=query,
                                               pageToken=page_token).execute()
    messages.extend(response['messages'])
The error is raised when trying to access response['messages'], as the only key in the response is 'resultSizeEstimate', and it is 0. It sounds like the page_token is pointing to an empty next page.
Is anyone else experiencing this issue?
If your last page happens to contain exactly the last email for that particular query, you will still get a nextPageToken, and it points to a page with a response like this:
{
"resultSizeEstimate": 0
}
The easiest way around this is to just add a check for whether messages is part of the response:
while 'nextPageToken' in response:
    page_token = response['nextPageToken']
    response = service.users().messages().list(userId=user_id, q=query, pageToken=page_token).execute()
    if 'messages' in response:
        messages.extend(response['messages'])
