I am querying my database using GraphQL, and I am stuck on writing the GraphQL resolvers to my database using Knex.js.
My problem is that I want a query or mutation to use only one database connection (correct me if this is wrong, but I really think this is true).
For instance, the call to the server
query {
  post {
    author
  }
}
should use 2 database calls for the post and author fields, done in a single connection to the database.
I think transactions are the way to go, and I implemented resolvers using transactions (here is a toy example):
const trxProvider = knex.transactionProvider();

const resolvers = {
  Query: {
    post: async () => {
      const trx = await trxProvider();
      let res = await trx('posts');
      return res;
    },
    author: async () => {
      const trx = await trxProvider();
      let res = await trx('authors');
      return res;
    }
  }
};
How do I properly resolve this transaction? For instance, how would I call trx.commit() when a query/mutation has completed so the connection does not idle?
Are transactions the correct approach? What knex functionality should I use so that a single database connection is used for a query + mutation?
Answering any of these questions is great. Thanks!
Connection pooling is the preferred approach. Transactions are best used to wrap multiple database writes so that all the writes can be committed or rolled back together. This avoids inconsistent writes. I've found no advantage to using transactions for reads.
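For reads like the post and author fields above, you can usually just lean on knex's built-in pool and let it hand a connection out per query. Here is a minimal sketch, assuming a Postgres client and placeholder connection details (the pool numbers are only illustrative):
const knex = require('knex')({
  client: 'pg',
  connection: {
    host: '127.0.0.1',
    user: 'dbuser',
    password: 'dbpassword',
    database: 'mydb'
  },
  pool: { min: 2, max: 10 } // knex pools connections for you (via tarn.js)
});

const resolvers = {
  Query: {
    // Each resolver uses the shared knex instance; the pool acquires and
    // releases a connection per query, so there is nothing to commit or close.
    post: async () => knex('posts'),
    author: async () => knex('authors')
  }
};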
I've published a tutorial that covers a lot of what you need to do with knex and GraphQL. Hopefully you'll find it helpful.
const Moralis = require('moralis').default
const { EvmChain } = require('@moralisweb3/evm-utils')

const runApp = async () => {
  // Without Moralis initialising - I want to skip these ------ (https://i.stack.imgur.com/u4cGM.jpg)
  await Moralis.start({
    apiKey: 'api_key_secret'
  })
  // ------------------

  const address = '0xbf820316675F3F96beb7a47Cec34c5aEdf07BD0e'
  const chain = EvmChain.GOERLI

  const response = await Moralis.EvmApi.token.getWalletTokenBalances({
    address,
    chain
  })

  console.log(response.toJSON())
}

runApp()
Since every detail of a smart contract is public, I don't want to use the API of a third party like Moralis, as it slows down the app.
Yes, you are right that all the smart contract data on the blockchain is public. But it is not always easy to read this data. To read data from the blockchain you would need to run your own local RPC node, or you may have to rely on a third-party node provider or API provider.
Moralis provides the data to users through its API and it is one of the fastest ways to read real-time blockchain data.
If you don't want to use any third-party providers for reading blockchain data, one option is to run your own full RPC node. This requires setting up a server and syncing the entire blockchain to your machine. It gives you the ability to read the data directly from the blockchain. This can be a good option if you have the technical expertise and the resources to set up and maintain a full node.
But this is neither an easy option nor the fastest one to choose if you are only looking to get ERC20 token wallet balances.
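For a rough idea of what reading directly from your own node looks like, here is a sketch using ethers.js (v6) with a hand-rolled ERC20 ABI fragment; the RPC URL and addresses are placeholders. Note that this reads one token at a time - discovering every token a wallet holds is exactly the part where an indexed API like Moralis saves you work.
// Sketch only: read an ERC20 balance straight from your own node's JSON-RPC endpoint.
const { ethers } = require('ethers');

const provider = new ethers.JsonRpcProvider('http://localhost:8545'); // your own node

// Just the fragments of the standard ERC20 ABI that we need
const erc20Abi = [
  'function balanceOf(address owner) view returns (uint256)',
  'function decimals() view returns (uint8)',
  'function symbol() view returns (string)'
];

const printBalance = async (tokenAddress, walletAddress) => {
  const token = new ethers.Contract(tokenAddress, erc20Abi, provider);
  const [raw, decimals, symbol] = await Promise.all([
    token.balanceOf(walletAddress),
    token.decimals(),
    token.symbol()
  ]);
  console.log(`${ethers.formatUnits(raw, decimals)} ${symbol}`);
};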
I've created an M0 Cluster Sandbox via MongoDB Atlas. It is working pretty nicely. But I want to use transactions with it, and I've read that to use transactions I need to have a replica set.
In Atlas it seems like my DB has a replica set already (I didn't do anything). So how can I connect to that replica set?
My current connection string is mongodb+srv://admin:password@de.xxx.mongodb.net/db?retryWrites=true&w=majority
Thanks in advance!
It should be enough to pass the connection string when you create your MongoClient object:
const { MongoClient, ServerApiVersion } = require('mongodb');

const uri = "your_string_connection";
const client = new MongoClient(uri, { options-you-need });

client.connect(err => {
  const collection = client.db("test").collection("devices");
  // perform actions on the collection object
  client.close();
});
This code was copy-pasted from atlas cluster instructions to connect to the cluster: Connect your applications -> check Include full driver code example.
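Once connected with that same mongodb+srv string, transactions work through a session; the Atlas cluster already provides the replica set they require. A rough sketch (collection and field names are made up, and it assumes the client is already connected):
// Sketch only: both writes commit or roll back together.
const deviceId = "device-1"; // placeholder
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    const devices = client.db("test").collection("devices");
    const logs = client.db("test").collection("logs");
    await devices.updateOne({ _id: deviceId }, { $set: { active: true } }, { session });
    await logs.insertOne({ deviceId, at: new Date() }, { session });
  });
} finally {
  await session.endSession();
}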
I have been asked to report an issue with connecting to Snowflake using the node connector here.
Issue: https://github.com/snowflakedb/snowflake-connector-nodejs/issues/113
The issue is that I can't find any documentation on how to reuse an existing token so that connecting to Snowflake doesn't take a long time.
Would appreciate any help.
EDIT
Here is the code I use:
// Tokens are retrieved from a DB
if (tokens) {
  connection.masterToken = tokens.masterToken;
  connection.masterTokenExpirationTime = tokens.masterTokenExpirationTime;
  connection.sessionToken = tokens.sessionToken;
  connection.sessionTokenExpirationTime = tokens.sessionTokenExpirationTime;
}

connection.connect(async function (err, conn) {
  if (err) {
    reject(err);
  } else {
    resolve();
  }
});
This might not be a full answer, but hopefully it helps you or someone else. I've had similar issues. For us the process is to get a JWT token via a web service. I haven't tested this, but I suspect the token could be re-used. The JSON response includes a "lease_duration" property. I'm guessing this is in seconds, but I don't know for sure, even though I tried to check. To give you an idea, one value I got for this was 2764800. You could calculate something like:
Long leaseDurationInMs = Long.parseLong(result.get("lease_duration"));
Date estimatedLeaseExpiration = new Date(leaseStartTime+leaseDurationInMs);
System.out.println("Estimated lease expiration timestamp (human readable): "+estimatedLeaseExpiration);
Long estimatedLeaseExpirationInMs = estimatedLeaseExpiration.getTime();
and then check this value each time you would otherwise have fetched the token, to see if you need to get another one.
Sorry for answering my own question but I ended up caching the data on my side to avoid connecting too often.
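Roughly, the idea is a small in-memory cache in front of the Snowflake query so most requests never open a connection at all. A sketch (TTL and names are arbitrary):
// Sketch only: keep recent query results in memory so most requests skip Snowflake.
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // 5 minutes - tune to how stale the data may be

const cachedQuery = async (sqlText, runQuery) => {
  const hit = cache.get(sqlText);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return hit.rows; // served from memory, no Snowflake connection needed
  }
  const rows = await runQuery(sqlText); // your existing Snowflake query function
  cache.set(sqlText, { rows, at: Date.now() });
  return rows;
};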
In my (greatly simplified) model I have users, accounts and account_types. Each user can have multiple accounts of each account_type. When an account of type TT is created, I update the "users" field of that account_type object so it keeps the users which have accounts of that type, and the number of such accounts they have.
users: {
  some fields
},
accounts: {
  userID: UU,
  type: TT
},
account_types: {
  users: { UU: 31 }
}
I use the onCreate and onDelete cloud triggers for accounts to update the account_type object. Since multiple accounts can be created simultaneously I have to use transactions:
exports.onCreateAccount = functions.firestore
  .document('accounts/{accountID}')
  .onCreate((account, context) => {
    const acc_user = account.data().userID;
    const acc_type = account.data().type;

    return admin.firestore().runTransaction(transaction => {
      // This code may get re-run multiple times if there are conflicts.
      const accountTypeRef = admin.firestore().doc("account_types/" + acc_type);
      return transaction.get(accountTypeRef).then(accTypeDoc => {
        var users = accTypeDoc.data().users;
        if (users === undefined) {
          users = {};
        }
        if (users[acc_user] === undefined) {
          users[acc_user] = 1;
        } else {
          users[acc_user]++;
        }
        transaction.update(accountTypeRef, { users: users });
        return;
      });
    })
    .catch(error => {
      console.log("AccountType create transaction failed. Error: " + error);
    });
  });
In my tests I'm first populating the database with some data so I'm also adding a user and 30 accounts of the same type. With the local emulator this works just fine and at the end of the addition I see that the account_type object contains the user with the counter at 30. But when deployed to Firebase and running the same functions the counter gets to less than 30. My suspicion is that since Firebase is much slower and transactions take longer, more of them are conflicted and fail and eventually don't execute at all. The transaction failure documentation (https://firebase.google.com/docs/firestore/manage-data/transactions) says:
"The transaction read a document that was modified outside of the transaction. In this case, the transaction automatically runs again. The transaction is retried a finite number of times."
So my questions:
What does "finite" mean?
Any way to control this number?
How can I make sure my transactions are executed at some point and don't get dropped like that so my data is consistent?
Any other idea as to why I'm not getting the correct results when deployed to the cloud?
What does "finite" mean?
It's the opposite of "unlimited". It will retry no more than a set number of times.
Any way to control this number?
Other than modifying the source code of the SDK, no. The SDK itself doesn't advertise a specific number, as it might change.
How can I make sure my transactions are executed at some point and don't get dropped like that so my data is consistent?
Detect the error and retry in your app. If you aren't seeing the transaction fail with an error, then nothing went wrong.
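For example, you could wrap the transaction in your own retry loop so that, when Firestore's internal retries are exhausted, the function tries again instead of dropping the update. A minimal sketch (attempt count and backoff are arbitrary):
// Sketch only: app-level retry around runTransaction.
const runWithRetry = async (updateFn, attempts = 5) => {
  for (let i = 0; i < attempts; i++) {
    try {
      return await admin.firestore().runTransaction(updateFn);
    } catch (error) {
      console.log(`Transaction attempt ${i + 1} failed: ${error}`);
      if (i === attempts - 1) throw error; // give up and surface the error
      await new Promise(resolve => setTimeout(resolve, 100 * 2 ** i)); // simple backoff
    }
  }
};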
Any other idea as to why I'm not getting the correct results when deployed to the cloud?
Since we can't see what exactly you're doing to trigger the function, and have no specific expected results to compare to, it's not really possible to say.
So I have a database that stores users. When someone logs in on my website, a user is stored in the Apollo cache as currentUser, and I only store their id.
So I made a query to get a user by passing their id:
query {
  user(id: "id") {
    id
    username
    avatar
  }
}
But every time I want to get data for that user I need to make two queries (the first one locally to get their id back from the cache, and a second one to the server).
const GET_CURRENT_USER = gql`
  query getCurrentUser {
    currentUser @client
  }
`;

const GET_USER_DATA = gql`
  query getUser($id: String!) {
    user(id: $id) {
      id
      username
      avatar
    }
  }
`;

const currentUserData = useQuery(GET_CURRENT_USER);
const { currentUser } = currentUserData.data;

const { data, loading } = useQuery(GET_USER_DATA, {
  variables: { id: currentUser.id },
  fetchPolicy: "cache-and-network"
});
Is there a way I can reduce that to only one query (the one to the server)?
The id value stored in the cache can be read using readQuery; you can also keep it in another global store/state, e.g. Redux.
If you're using the Apollo cache as a global store then using queries is a natural part of this process.
Using readQuery you can read the value without a query hook (while doing the same thing underneath). One query 'saved' ;)
Deeper integration (an additional query, a local resolver) is not a good thing - it creates unnecessary dependencies.
If you want to reuse this "unnecessary query", extract it to some module or create a custom hook (the id is read/used/saved once during initialization) - probably the best solution for this scenario, as shown in the sketch below.
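A rough sketch of such a hook, reusing the GET_CURRENT_USER and GET_USER_DATA documents from the question (the hook name and skip behaviour are just one way to do it):
import { useQuery, useApolloClient } from "@apollo/client";

const useCurrentUserData = () => {
  const client = useApolloClient();
  // Read the cached id synchronously - no extra query hook needed
  const cached = client.readQuery({ query: GET_CURRENT_USER });
  const id = cached?.currentUser?.id;

  // Only the server query actually runs; skip until an id is available
  return useQuery(GET_USER_DATA, {
    variables: { id },
    skip: !id,
    fetchPolicy: "cache-and-network"
  });
};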
Other solutions:
Make the login process provide the user data - for some inspiration take a look at apollo-universal-starter-kit - but this is for initial data only (login/avatar changing during a session??) - further user querying still needs an id parameter - it must be stored and read somewhere in the app.
Make id an optional parameter (for the getUser query - if you can change the backend) - if it is not provided, return data for the current user (id read from the session/token); see the sketch below.
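For that second option, the resolver could fall back to the authenticated user when no id argument is passed. A sketch, assuming your server already puts the logged-in user's id on the GraphQL context (names here are placeholders):
const resolvers = {
  Query: {
    user: async (_parent, { id }, context) => {
      // Fall back to the logged-in user when no id argument is provided
      const userId = id ?? context.currentUserId;
      if (!userId) throw new Error("Not authenticated");
      return getUserById(userId); // placeholder data-access function
    }
  }
};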