We're currently building an NFT marketplace and want to work out the best architecture for serving multiple chains. I think there are two options, but I don't know the pros and cons of each:
one database with sharding per chain (Ethereum, Polygon, etc.)
one database per chain, meaning multiple databases.
The requirements would be:
querying data across multiple chains
search/filter queries to show the items a user would be interested in.
We're using Postgres. Does anyone have experience with this? Please shed some light!
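For context, if option 1 means Postgres declarative list partitioning, I picture something like this rough JDBC sketch (table, column, and connection details are just placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChainPartitionSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/marketplace", "app", "secret");
             Statement st = conn.createStatement()) {

            // Option 1: a single table, list-partitioned by chain.
            st.execute("CREATE TABLE IF NOT EXISTS nft_items ("
                    + "  chain    text NOT NULL,"
                    + "  contract text NOT NULL,"
                    + "  token_id text NOT NULL,"
                    + "  metadata jsonb,"
                    + "  PRIMARY KEY (chain, contract, token_id)"
                    + ") PARTITION BY LIST (chain)");
            st.execute("CREATE TABLE IF NOT EXISTS nft_items_ethereum "
                    + "PARTITION OF nft_items FOR VALUES IN ('ethereum')");
            st.execute("CREATE TABLE IF NOT EXISTS nft_items_polygon "
                    + "PARTITION OF nft_items FOR VALUES IN ('polygon')");

            // Cross-chain queries stay a single SQL statement,
            // which is the part option 2 would make harder.
            try (ResultSet rs = st.executeQuery(
                    "SELECT chain, count(*) FROM nft_items GROUP BY chain")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                }
            }
        }
    }
}
```

With option 2, the same cross-chain view would need something like postgres_fdw or application-side merging, which is the trade-off I'm trying to weigh.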
Thanks in advance!
Related
I'm currently building a multitenant application where each tenant has its own database. The problem I'm struggling with is that each database can have a vastly different structure; there are approximately 150 tenants, each with a different database. Is there a way to manage this cleanly?
I'm aware of how to handle a single application with multiple databases, but I'm lost on how to implement this over several different schemas without the code base becoming unmaintainable. An initial idea would be separating the data layer so each tenant has its own repositories and entities as an individual microservice. That isn't really scalable long term, but it would keep the core logic separated from the individual databases; it still feels a bit hacky.
Are there any patterns, advice, or examples I could be pointed to so I can read further on this? Or is this approach just completely unfeasible?
I'm not asking anybody to write code for me or to provide a solution. I would just like a bit of advice from someone who may have worked with this kind of situation and could potentially recommend steps forward.
I think creating a common data layer is your best bet. You absolutely don't have a scalable solution because you don't have a scalable problem. Hopefully they have enough in common that you can have a common mapping layer with plugins/overrides for particular entities that differ from that common core.
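For what that could look like, here's a minimal sketch of the idea: a shared mapper interface, a default implementation for the common core, and tenant-specific overrides registered only where a schema differs. All names here are hypothetical.

```java
import java.util.Map;
import java.util.Optional;

// Shared contract that the core logic codes against.
interface CustomerMapper {
    Optional<String> findEmail(String customerId);
}

// Mapping for the common schema shape most tenants share.
class DefaultCustomerMapper implements CustomerMapper {
    public Optional<String> findEmail(String customerId) {
        return Optional.empty(); // stubbed: would query the common schema
    }
}

// Override for a tenant whose tables differ from the common core.
class TenantXCustomerMapper implements CustomerMapper {
    public Optional<String> findEmail(String customerId) {
        return Optional.empty(); // stubbed: would query tenant X's layout
    }
}

// Looks up a tenant-specific mapper, falling back to the common one.
class MapperRegistry {
    private final Map<String, CustomerMapper> overrides;
    private final CustomerMapper common;

    MapperRegistry(Map<String, CustomerMapper> overrides, CustomerMapper common) {
        this.overrides = overrides;
        this.common = common;
    }

    CustomerMapper forTenant(String tenantId) {
        return overrides.getOrDefault(tenantId, common);
    }
}
```

The core logic only ever sees CustomerMapper, so the 150 databases become configuration rather than 150 code paths.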
I'm a student and I have a question about architecture.
Is it common to use multiple database connections in a Java application during the first stage of the development process?
Best regards,
Erik Student
Hello Erik and welcome to StackOverflow.
To answer your question:
That very much depends on the architecture/use cases of the application. A couple of examples that could motivate the use of multiple database connections are:
The needed data is stored/owned in different locations
Microservice architecture (https://smartbear.com/learn/api-design/what-are-microservices/)
Parts of the data are used by multiple applications (splitting into multiple databases for load distribution)
Do note that distributing data comes with some disadvantages, such as syncing data between databases (foreign keys can be hard to manage) and data mismatches between applications/application states.
Further, you can always start with a single database and split it later, as long as your data schema allows some flexibility between tables; for example, don't mash all the data into a single table.
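For illustration, a minimal plain-JDBC sketch of an application holding two connections; the hosts, database names, and credentials are all placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class TwoDatabasesSketch {
    public static void main(String[] args) throws Exception {
        // One database owns order data, another owns reporting data.
        try (Connection orders = DriverManager.getConnection(
                 "jdbc:postgresql://orders-host:5432/orders", "app", "secret");
             Connection reporting = DriverManager.getConnection(
                 "jdbc:postgresql://reporting-host:5432/reporting", "app", "secret")) {
            // Each part of the code talks to the connection that owns its data.
            // Note there are no cross-database foreign keys: keeping the two
            // stores consistent becomes the application's job.
            System.out.println(orders.isValid(2) && reporting.isValid(2));
        }
    }
}
```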
To give a definite answer to your question, we would need to know more about the environment/architecture of the application.
I hope this helps you somewhat :)
I have a question regarding some performance monitoring.
I am going to connect to the Postgres database, and what I want to do is extract the relevant information from the Tableau Server database into our own database.
I am following a document at the moment to perform the steps needed to retrieve the performance information from Postgres, but what I really need to do is set up a data model for our own database.
I'm not a strong DBA, so I may need help designing the data model, but the requirement is:
We want a model in place so that we can see how long workbooks take to load; if any of them takes, say, longer than 5 seconds, we are alerted so we can go in and investigate.
My current idea for the data model in very basic terms is having the following tables:
Users – Projects – Workbooks – Views – Performance
Essentially, we have users who access various projects that contain their own workbooks. The views table tracks workbook views, so we can see how many times a workbook has been viewed and when. Finally, the performance table holds the load times.
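To make that concrete, this is roughly what I picture for the two tables that carry the load-time requirement (a very rough JDBC sketch; all names and the connection details are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PerfModelSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/perfmodel", "app", "secret");
             Statement st = conn.createStatement()) {

            // Users, projects, and views would hang off workbooks the same way.
            st.execute("CREATE TABLE IF NOT EXISTS workbooks ("
                    + " workbook_id bigint PRIMARY KEY,"
                    + " project_id  bigint NOT NULL,"
                    + " name        text   NOT NULL)");
            st.execute("CREATE TABLE IF NOT EXISTS performance ("
                    + " workbook_id  bigint REFERENCES workbooks,"
                    + " viewed_at    timestamptz NOT NULL,"
                    + " load_seconds numeric     NOT NULL)");

            // The alerting query: any workbook load slower than 5 seconds.
            try (ResultSet rs = st.executeQuery(
                    "SELECT w.name, p.viewed_at, p.load_seconds "
                    + "FROM performance p JOIN workbooks w USING (workbook_id) "
                    + "WHERE p.load_seconds > 5")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " took "
                            + rs.getBigDecimal(3) + "s at " + rs.getTimestamp(2));
                }
            }
        }
    }
}
```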
This is a very basic description of what we require, but my question is simply: does anyone with knowledge of Tableau and data models have advice on designing a very basic model and schema for this? It will need to be scalable so that it can handle as many Tableau servers as possible.
Thank you very much,
I've found an article on the blog of a monitoring tool that I like, and maybe it can help you with your monitoring. I'm not an expert in PostgreSQL, but it's worth a look:
http://blog.pandorafms.org/how-to-monitor-postgress/
Hope this can help!
We have a system that enables users to create applications and store data in them. We want to separate the index of each application, so we create a core for each application and search only that application's core when a user makes a query. Since there isn't any relation between the applications, this could perform better than storing all the indexes together.
I have two questions related to this.
Is this a good solution? If not, could you please suggest a better one?
Is there a limit on the number of cores I can create in Solr? There will be thousands, maybe more, applications on the system.
Yes, it COULD be a good solution; as always, it depends on the specific use case.
Look at this Jira issue where Erick mentions a 10k-core system... so it seems it could work for you; you'd still need to assess the hardware, etc.
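For what it's worth, creating a core per application can be scripted against Solr's CoreAdmin HTTP API. A hedged sketch: the host, the shared configset name, and the core-naming scheme are all assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CorePerApplication {
    public static void main(String[] args) throws Exception {
        String appId = "app42"; // hypothetical application id
        // CoreAdmin CREATE call; assumes all cores share one configset,
        // so thousands of cores don't mean thousands of config copies.
        URI create = URI.create("http://localhost:8983/solr/admin/cores"
                + "?action=CREATE&name=app_" + appId
                + "&configSet=_default");
        HttpClient http = HttpClient.newHttpClient();
        HttpResponse<String> resp = http.send(
                HttpRequest.newBuilder(create).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}
```

Queries then go to /solr/app_<appId>/select, so each application only ever searches its own index.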
I am interested in knowing a little more about databases than I currently do. I know how to set up a database backend for any web app that I happen to be creating, but that is all. For example, if I were creating three different apps, I would simply create three different databases and then configure each one for the particular app. This is all simple knowledge, and I would now like a deeper understanding of how databases actually work.
Let's say that I developed an application that needed a lot of space and processing power. The database would then have to be spread over numerous machines. How exactly would a database be spread across numerous machines and still be able to write records and then retrieve them? Would each table get its own machine, and what software is needed to make sure that the different machines have all performed their transactions successfully?
As you can see, I am quite a database ignoramus, lol.
Any help in clearing this up would be greatly appreciated.
I don't know what RDBMS you're using but I have two book suggestions.
For theory (which should come first, in my opinion): Database in Depth: Relational Theory for Practitioners
For implementation: High Performance MySQL: Optimization, Backups, Replication, and More
I own both these books and they are both pretty great, especially the first one.
That's quite a broad topic... You might want to start with Multi-master replication, High-availability clustering and Massively parallel processing.
If you want to know how to keep databases running under ever-increasing load, then it's not a basic question. Several well-known web companies are struggling to find the right way to make their databases scalable.
Using memcached to cache database information is one way to decrease the load on your database if your application is read-intensive. If your application is write-intensive, then maybe you would want to consider using a NoSQL datastore like MongoDB or Redis.
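To illustrate the read-intensive case, here's the cache-aside pattern that the memcached suggestion implies, sketched with an in-process map standing in for memcached (a real setup would swap in a memcached client and add expiry):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside: check the cache first, fall back to the database on a miss,
// then populate the cache so later reads skip the database entirely.
class CacheAside {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loadFromDb;

    CacheAside(Function<String, String> loadFromDb) {
        this.loadFromDb = loadFromDb;
    }

    String get(String key) {
        String hit = cache.get(key);
        if (hit != null) return hit;          // cache hit: no database load
        String value = loadFromDb.apply(key); // cache miss: one database read
        cache.put(key, value);                // future reads come from cache
        return value;
    }
}
```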
Database Design for Mere Mortals
This is the best book on the subject if you don't have any experience with databases. It has historical background and practical examples. Most books skip the historical stuff because they assume you know what a DB is, or that it doesn't matter, and jump right to the practical; this book gives you the complete picture.