I've created an M0 Sandbox cluster via MongoDB Atlas, and it is working nicely. But I want to use transactions with it, and I've read that to use transactions I need a replica set.
In Atlas it seems like my DB already has a replica set (I didn't do anything). So how can I connect to that replica set?
My current connection string is mongodb+srv://admin:password@de.xxx.mongodb.net/db?retryWrites=true&w=majority
Thanks in advance!
It should be enough to pass the connection string when you create your MongoClient object:
const { MongoClient, ServerApiVersion } = require('mongodb');

// The mongodb+srv connection string from Atlas already targets the replica set
const uri = "your_connection_string";
const client = new MongoClient(uri, { serverApi: ServerApiVersion.v1 });

client.connect(err => {
  const collection = client.db("test").collection("devices");
  // perform actions on the collection object
  client.close();
});
This code was copied from the Atlas cluster instructions for connecting to the cluster: Connect your applications -> check Include full driver code example.
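Since the question is specifically about transactions: once you are connected with that string, transactions run on a session. Below is a minimal sketch (the database/collection names and the operations are placeholder assumptions, not part of the Atlas snippet):

const { MongoClient } = require('mongodb');

async function runTransaction(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const session = client.startSession();
  try {
    // withTransaction retries transient errors and commits when the callback resolves
    await session.withTransaction(async () => {
      const devices = client.db('test').collection('devices');
      await devices.insertOne({ name: 'sensor-1' }, { session });
      await devices.updateOne({ name: 'sensor-1' }, { $set: { active: true } }, { session });
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}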
I am querying my database using GraphQL. I am stuck on writing the GraphQL resolvers for my database, using Knex.js.
My problem is that I want a query or mutation to use only one database connection (correct me if this is wrong, but I really think this is true).
For instance, the call to the server
query {
  post {
    author
  }
}
should use two database calls, one each for the post and author fields, done over a single connection to the database.
I think transactions are the way to go, and I implemented resolvers using transactions (here is a toy example):
const trxProvider = knex.transactionProvider();

const resolvers = {
  Query: {
    post: async () => {
      // The first call starts the transaction; later calls reuse the same one
      const trx = await trxProvider();
      let res = await trx('posts');
      return res;
    },
    author: async () => {
      const trx = await trxProvider();
      let res = await trx('authors');
      return res;
    }
  }
};
How do I properly resolve this transaction? For instance, how would I call trx.commit() when a query/mutation has completed, so the connection does not idle?
Are transactions the correct approach, and what Knex functionality should I use so that a single database connection is used for a query + mutation?
Answering any of these questions is great. Thanks!
Connection pooling is the preferred approach. Transactions are best used to wrap multiple database writes so that all the writes can be committed or rolled back together. This avoids inconsistent writes. I've found no advantage to using transactions for reads.
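As a rough sketch of that split (the client, connection settings and table names here are assumptions for illustration): reads go straight to the pool, and only grouped writes get a transaction, which Knex commits when the callback resolves and rolls back when it throws:

const knex = require('knex')({
  client: 'pg', // assumed Postgres; use your own client and connection
  connection: process.env.DATABASE_URL,
  pool: { min: 2, max: 10 } // the pool hands each query a connection
});

async function example() {
  // Reads: no transaction; the pool manages connections per query
  const posts = await knex('posts').select();

  // Related writes: one transaction so they commit or roll back together
  await knex.transaction(async trx => {
    await trx('authors').insert({ id: 1, name: 'Ada' });
    await trx('posts').insert({ author_id: 1, title: 'Hello' });
  });

  return posts;
}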
I've published a tutorial that covers a lot of what you need to do with knex and GraphQL. Hopefully you'll find it helpful.
const Moralis = require('moralis').default;
const { EvmChain } = require('@moralisweb3/evm-utils');

const runApp = async () => {
  // Without Moralis initialising - I want to skip this step (screenshot: https://i.stack.imgur.com/u4cGM.jpg)
  await Moralis.start({
    apiKey: 'api_key_secret'
  });
  // ------------------

  const address = '0xbf820316675F3F96beb7a47Cec34c5aEdf07BD0e';
  const chain = EvmChain.GOERLI;

  const response = await Moralis.EvmApi.token.getWalletTokenBalances({
    address,
    chain
  });

  console.log(response.toJSON());
};

runApp();
Every detail of a smart contract is public, so I don't want to use the API of a third party like Moralis, as it slows the app.
Yes, you are right that all the smart contract data on the blockchain is public, but it is not always easy to read. To read data from the blockchain you need to run your own local RPC node, or rely on a third-party node provider or API provider.
Moralis provides this data through its API, and it is one of the fastest ways to read real-time blockchain data.
If you don't want to use any third-party providers for reading blockchain data, one option is to run your own full RPC node. This requires setting up a server and syncing the entire blockchain to your machine. It gives you the ability to read the data directly from the blockchain. This can be a good option if you have the technical expertise and the resources to set up and maintain a full node.
But this is neither an easy nor a fast option if you are only looking to get ERC-20 token wallet balances.
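If you do run your own node, reading a single token balance directly is straightforward. Here is a minimal sketch with ethers.js (v5 syntax; the library choice, RPC URL and addresses are assumptions, not part of the answer above). Note that balanceOf must be called once per token contract, which is why one indexed API call is faster for whole-wallet balances:

const { ethers } = require('ethers');

// Your own node instead of a third-party provider (placeholder URL)
const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');

// Minimal ERC-20 ABI: only the function we actually call
const erc20Abi = ['function balanceOf(address owner) view returns (uint256)'];

async function tokenBalance(tokenAddress, walletAddress) {
  const token = new ethers.Contract(tokenAddress, erc20Abi, provider);
  return token.balanceOf(walletAddress); // resolves to a BigNumber
}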
I'm learning MongoDB, and I'm sorry to bother you, but I'm getting this error:
MongoServerSelectionError: connection <monitor> to xx.xxx.xxx.xxx:27017 closed
at Timeout._onTimeout (C:\...\node_modules\mongodb\lib\sdam\topology.js:305:38)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7) {
reason: TopologyDescription {
type: 'ReplicaSetNoPrimary',
servers: Map(3) {
'ac-c9obg9r-shard-00-00.onq7cwz.mongodb.net:27017' => [ServerDescription],
'ac-c9obg9r-shard-00-02.onq7cwz.mongodb.net:27017' => [ServerDescription],
'ac-c9obg9r-shard-00-01.onq7cwz.mongodb.net:27017' => [ServerDescription]
},
stale: false,
compatible: true,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
setName: 'atlas-up12ch-shard-0',
logicalSessionTimeoutMinutes: undefined
},
code: undefined,
[Symbol(errorLabels)]: Set(0) {}
}
I have tried enabling port 27017 and resetting the IP in the Network Access tab (it was whitelisted already), but no luck; the error persists. I reinstalled the modules I used, and nothing.
My code was working yesterday, but after a Windows update I can't connect (that's why I thought it was the port).
The digits I replaced in xx.xxx.xxx.xxx:27017 are not my IP address; I don't know if that helps.
If you have any ideas, I appreciate your input.
MongoServerSelectionError: connection <monitor> to xx.xxx.xxx.xxx:27017 closed
at Timeout._onTimeout (C:\...\node_modules\mongodb\lib\sdam\topology.js:305:38)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
I had this same error while trying to connect to MongoDB with a new Node server.
See if you have changed your network connection to a different network than the one you whitelisted in MongoDB.
If so, whitelist the current IP address in MongoDB.
(PS: I see you have already tried resetting the IP address in the Network Access tab, but check again. This was how I fixed it.)
Also check that the .env file variables are correctly declared with the MongoDB link, and that you have replaced the placeholder password and database name.
If the issue is still not solved, I would suggest deleting the old cluster and creating a new one in MongoDB, and re-initialising Node and the packages.
Hope this solves your problem.
For me this error was occurring because my connection string was wrong. To be very specific: I copied the sample connection string from a course I was taking and just replaced the username and password with my credentials. So the credentials were right, but not the rest of the connection string.
Just for the sake of understanding, please see below:
mongodb+srv://myusername:mypassword@courseproject.h1mpg.mongodb.net/?retryWrites=true&w=majority
myusername and mypassword are correct, i.e. they belong to the cluster in my Atlas account, but the rest of the string is wrong, as I copied it from somewhere else instead of from my own MongoDB Atlas account.
So please make sure to double-check that your entire connection string is correct.
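One quick way to verify the whole string (a minimal sketch; the URI is your own, and the ping command just round-trips to the cluster):

const { MongoClient } = require('mongodb');

async function checkConnection(uri) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    await client.db('admin').command({ ping: 1 }); // throws if the cluster is unreachable
    console.log('Connection string works');
  } finally {
    await client.close();
  }
}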
import { MongoClient } from 'mongodb'

const uri = process.env.MONGODB_URI
const options = {
  useNewUrlParser: true,
  useUnifiedTopology: true,
}

const client = new MongoClient(uri, options)
const clientPromise = client.connect()

export default clientPromise
Inside your .env you could insert something like this:
MONGODB_URI=mongodb+srv://username:password@name-of-cluster.i43pl8d.mongodb.net/DatabaseName?retryWrites=true&w=majority
A code snippet for connecting to MongoDB is available at https://cloud.mongodb.com/: navigate to your cluster, click Connect, then click Connect your application.
Finally, copy whatever code snippet you get for your MongoDB driver version after choosing Include full driver code example, and implement it in your application.
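For completeness, a minimal sketch of how the exported clientPromise might be consumed (the import path and the database/collection names here are placeholder assumptions):

import clientPromise from './mongodb-client'

export async function listDevices() {
  // Awaiting the shared promise reuses one client (and its connection pool) across calls
  const client = await clientPromise
  return client.db('test').collection('devices').find({}).toArray()
}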
Solr has an Admin UI where we can check each and every collection deployed to SolrCloud. For example, I can see whether a slice/shard in a collection is up or not.
Our production environment doesn't provide access to this Admin UI for security reasons, so I need to provide an API that reports the status of each and every collection, its shards, and each shard's replicas. I am using the Solr APIs to do that:
http://lucene.apache.org/solr/4_7_2/solr-solrj/index.html
CloudSolrServer server = new CloudSolrServer(<zk quorum>);
ZkStateReader reader = server.getZkStateReader();
Collection<Slice> slices = reader.getClusterState().getSlices(collection);
Iterator<Slice> iter = slices.iterator();
while (iter.hasNext()) {
Slice slice = iter.next();
System.out.println(slice.getName());
System.out.println(slice.getState());
}
The above piece of code always returns Active as the state of the shard, even when its replica shows as down in the UI. I assume this returns only the state of the shard, not the state of the shard's leader or replicas.
How can I get the replicas' status through the Solr APIs? Is there an API for this?
And which API does the Solr Admin UI use to get the shards' replica/leader status?
Thanks
The code is not looking at replica status. Here is one that prints out replica status:
CloudSolrServer server = new CloudSolrServer(zknodesurlstring);
server.setDefaultCollection("mycollection");
server.connect();
ZkStateReader reader = server.getZkStateReader();
Collection<Slice> slices = reader.getClusterState().getSlices("mycollection");
Iterator<Slice> iter = slices.iterator();
while (iter.hasNext()) {
    Slice slice = iter.next();
    System.out.println(slice.getName());
    System.out.println(slice.getState());
    for (Replica replica : slice.getReplicas()) {
        System.out.println("replica state for " + replica.getStr("core") + " : " + replica.getStr("state"));
    }
}
Check http://{ipaddress}:{port}/solr/admin/info/system
Look at the Solr log while browsing the web interface. Since the web interface is purely a client-side application, you can see which endpoints on the Solr server it queries to retrieve information about the current state of the cluster.
The response format used to create the graph is probably pretty straightforward (since it's parsed in the web interface).
This also works for the other information displayed in the Admin interface.
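For example, the Collections API's CLUSTERSTATUS action (added in Solr 4.8, so slightly newer than the 4.7.2 docs linked above) returns per-replica state. A hedged Node sketch, where host, port and collection name are placeholders and fetch assumes Node 18+:

const url = 'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=mycollection&wt=json';

async function printReplicaStates() {
  const data = await (await fetch(url)).json();
  const shards = data.cluster.collections.mycollection.shards;
  for (const [shardName, shard] of Object.entries(shards)) {
    for (const [coreName, replica] of Object.entries(shard.replicas)) {
      console.log(shardName, coreName, replica.state); // "active", "down", "recovering", ...
    }
  }
}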
You can use Solr's Ping API to check health status of all replicas for a given collection.
Request format: http://localhost:8983/solr/Collection-Name/admin/ping?distrib=true&wt=xml
This command will ping all replicas of the given collection name.
In Java:
public boolean isActive(final SolrClient solrClient, final String collectionName)
        throws SolrServerException, IOException {
    SolrPing ping = new SolrPing();
    ping.getParams().add("distrib", "true"); // makes it a distributed request against the whole collection
    SolrPingResponse response = ping.process(solrClient, collectionName);
    return response.getStatus() == 0;
}
I have experienced some issues while setting up Slick 2.0.2. Any configuration that I do in one session is lost in the next. For example, in the first session, I create the table and add three people:
// H2 in-memory database
lazy val db = Database.forURL("jdbc:h2:mem:contacts", driver="org.h2.Driver")
// Contacts table
lazy val contacts = TableQuery[ContactsSchema]
// Initial session
db withSession { implicit session =>
  contacts.ddl.create

  // Inserts sample data
  contacts += Person("John", "123 Main street", 29)
  contacts += Person("Greg", "Neither here nor there", 40)
  contacts += Person("Michael", "Continental U.S.", 34)

  // Successfully retrieves data
  contacts foreach { person =>
    println(person)
  }
}
All is well up to this point. The output lists the three people I added. When I start a new session, I start to experience issues.
// New session in which the previous data is lost
db withSession { implicit session =>
  contacts foreach { person =>
    println(person)
  }
}
The above block throws an org.h2.jdbc.JdbcSQLException: Table "CONTACTS" not found exception. If I edit it as follows
db withSession { implicit session =>
  contacts.ddl.create
  contacts foreach { person =>
    println(person)
  }
}
then all the data is erased.
I see that the Scalatra guide to Slick uses a similar configuration to mine. What am I doing wrong? How should I get the data to persist between sessions? Does the fact that I am using an in-memory database have anything to do with it?
Two choices.
Either create a session and keep it open. That can be done with a withSession scope lower on the call stack, or with db.createSession.
Or add ;DB_CLOSE_DELAY=-1 to the database URL, i.e. jdbc:h2:mem:contacts;DB_CLOSE_DELAY=-1. By default H2 drops an in-memory database as soon as its last connection closes, which is exactly what happens at the end of each withSession block; this setting keeps the database alive as long as the JVM runs.
See http://www.h2database.com/html/features.html#in_memory_databases