I have a single server at my disposal, and on this same server I need to run two separate Neo4j instances (a test instance and a production instance). I know I can have a single instance with several databases, as mentioned in this answer; however, I would like Chinese walls between my test and production databases (and they may not be configured exactly the same), so two separate instances are required.

I know I can use different ports for the two instances, so bolt://neo4j.mydomain.com:7687 maps to one instance and bolt://neo4j.mydomain.com:7688 maps to the other. This works fine - but is it possible to have the two instances on the same port but at different URIs? So, for example, bolt://neo4j-prod.mydomain.com:7687 maps to one instance and bolt://neo4j-test.mydomain.com:7687 maps to the other?

This is very common for web servers, which route by the requested hostname, so I would think it should be simple to do the same for other resources even if they do not use the HTTP protocol. What I don't like about the double-port solution is that it is very easy to forget which is test and which is production, and explicit is better than implicit.
I do not believe you can do this directly with Neo4j, but I'd be interested in hearing about it if it is possible.
It should be possible to put a server in front of Neo4j that makes it "appear to users" as you suggested, by remapping/redirecting incoming connections based on the requested hostname. (Note: Neo4j itself still runs on different ports on its server.)
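As a hedged sketch of that front server, here is what the remapping might look like with nginx's stream module, routing on the TLS SNI hostname. This assumes clients connect with encrypted Bolt (bolt+s://), because a plain bolt:// connection carries no hostname on the wire; the local backend addresses and ports are illustrative assumptions.

```nginx
# Hypothetical sketch: route Bolt-over-TLS by SNI on one public port.
# Assumes the prod instance listens on 127.0.0.1:17687 and the test
# instance on 127.0.0.1:17688.
stream {
    map $ssl_preread_server_name $neo4j_backend {
        neo4j-prod.mydomain.com  127.0.0.1:17687;
        neo4j-test.mydomain.com  127.0.0.1:17688;
        # no default: connections with an unknown hostname are refused
    }

    server {
        listen 7687;
        ssl_preread on;             # peek at the SNI without terminating TLS
        proxy_pass $neo4j_backend;  # forward the raw TCP stream
    }
}
```

From the client's point of view both instances then share port 7687, while Neo4j itself still listens on two distinct local ports.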
Related
I'm curious how you would handle the following database access scenario.
Let's say you have a computer which hosts your database as part of its server duties, and multiple client PCs, each running some client-side software that needs to get information from this database.
AFAIK there are two ways to do this:
each piece of client-side software connects directly to the database
each piece of client-side software connects to a server-side application which connects to the database, acting as a sort of data access layer.
So what I'd like to know is:
What are the pros and cons of each solution?
And are there other solutions out there which may be "better" for this work?
I would DEFINITELY go with suggestion number 2. No client application should talk to a datastore without a broker, i.e.:
ClientApp -> WebApi -> DatabaseBroker.class -> MySQL
This is the sound way to do it, as you separate concerns and define an organized gateway to the datastore (a minimal sketch follows the list of benefits below).
Some benefits are:
decouple the client from the database
you can centralize all upgrades, additions and operability in one location (DatabaseBroker.class) for all clients
it's very scalable
it's safe with regard to business logic
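As promised above, a minimal, hypothetical sketch of that broker layer in Python (Flask and mysql-connector-python are illustrative choices; the route, table, and credentials are invented, not from the original setup):

```python
# Hypothetical broker: clients call the web API; only this process
# talks to MySQL.
from flask import Flask, jsonify
import mysql.connector  # assumes the mysql-connector-python package

app = Flask(__name__)

def db():
    return mysql.connector.connect(
        host="db.internal", user="api_user", password="...", database="shop"
    )

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    conn = db()
    try:
        cur = conn.cursor(dictionary=True)
        # Clients never see SQL; the broker owns every query.
        cur.execute(
            "SELECT id, status, total FROM orders WHERE id = %s", (order_id,)
        )
        row = cur.fetchone()
        return jsonify(row) if row else (jsonify(error="not found"), 404)
    finally:
        conn.close()
```

Every client then depends only on the HTTP contract, so the database can change behind the broker without touching any client.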
Think of it with this layman's example:
Marines are not allowed to bring their own weapons to battle (client apps talking directly to the DB). Instead they check out a weapon from the armory (the API). The armory has control over all weapons, repairs and upgrades (the data in the database) and determines who gets what.
What you have described sounds like two different kinds of multitier architecture.
The first option matches a two-tier architecture, and the second one could be a three-tier.
AFAIK there are two ways to do this
You can divide your application into any number of physical tiers; therefore you will find more variants of this architecture (n-tier) than the two described above.
What are the pros and cons of each solution?
Usually the motivation for splitting your application into tiers is to achieve some kind of non-functional requirement (maintainability, availability, security, etc.). The problem is that when you add extra tiers you also add complexity, e.g. your app components need to communicate with each other, and this is more difficult when they are distributed among several machines.
And are there other solutions out there which may be "better" for this work?
I'm not sure what you mean by "work" here, but notice that you don't need to add extra tiers to access a database. If you have a desktop application installed on a few machines, a classic client/server (two-tier) model should be enough. A web-based application, however, needs an extra tier for interacting with the browser; in that case the database access is not the motivation for adding the extra tier.
I'm designing a service that will serve some business entities. Logically it will be divided into two parts:
Frontend - bells and whistles like a wiki, pricing, a landing page, and maybe account information (billing, account status, and so on).
The service itself, where a business entity's employees will do their work.
It is built on the Play 2.x framework, and I plan to host it on Heroku.
It is not clear to me yet how to decompose the instances and the DB.
Should I split the DB per client, i.e. one database per business entity? Or should I store all data in one database, but add to every table the id of the business entity that owns each row? What issues (performance, administrative, scaling) may come up with this decision?
If I choose to divide the databases, how can I do this? For that I would need to launch each app instance with the DB for the client that instance belongs to. Then we have non-uniform instances, which can be an obstacle to scaling, and as far as I know Heroku doesn't support non-uniform (web) instances.
Please help, I'm totally stuck here.
Expected stack:
Scala
Play 2.0
Anorm
JDBC
PostgreSQL
Heroku
All of these (except Scala, and maybe Play 2.0) are interchangeable.
This is a pretty classic problem. You have many clients and you wonder if you should create separate databases for each client - or if they should share a database.
I would recommend starting with one shared database and then using that until you outgrow it (a sketch of the shared, tenant-keyed layout follows the list below). Think of some of the disadvantages to having each client with their own database instance:
As you mention, the schema management can be tough. You'd need to write tools to maintain all databases across all servers.
If you tell clients you have structured your system this way, some of them might push you to fork the database. In other words they might argue, "I have my own database! I want a new table just for me."
It's a bit harder to run queries across servers/databases. If you wanted to count how many items all clients have, you'd have to think about that a bit.
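As promised above, a hedged sketch of the shared-database layout in Python with psycopg2 (the question's stack is Scala/Anorm on PostgreSQL, so treat this purely as an illustration; the table and column names are invented):

```python
# Hypothetical shared-schema multi-tenancy: every tenant-owned table
# carries a business_entity_id column, and every query filters on it.
import psycopg2

SCHEMA = """
CREATE TABLE IF NOT EXISTS invoices (
    id                 serial PRIMARY KEY,
    business_entity_id integer NOT NULL,  -- the owning tenant
    amount_cents       integer NOT NULL
);
CREATE INDEX IF NOT EXISTS invoices_tenant_idx
    ON invoices (business_entity_id);
"""

def create_schema(conn):
    with conn.cursor() as cur:
        cur.execute(SCHEMA)
    conn.commit()

def invoices_for(conn, tenant_id):
    with conn.cursor() as cur:
        # Always scope by tenant; forgetting this filter is the classic
        # bug in shared-schema multi-tenancy.
        cur.execute(
            "SELECT id, amount_cents FROM invoices"
            " WHERE business_entity_id = %s",
            (tenant_id,),
        )
        return cur.fetchall()
```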
But if you want to start by sharding based on client (http://en.wikipedia.org/wiki/Shard_(database_architecture)), you might consider:
As mentioned previously, you'll need some tools/scripts to launch a new database instance for a client. Often those tools will need to "seed" the database with configuration information - like populating a states table for addresses (a provisioning sketch follows this list).
You'll want to have a tool to monitor/maintain the databases. Start one, stop another, see if one has high CPU usage etc.
You'll need some kind of system to aggregate statistics across all clients.
You'll need a tool to roll out schema changes and a plan on how you can gracefully upgrade the database while their web application is running.
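As a hedged illustration of the launch/seed tooling from the first point (Python with psycopg2; every name here is invented):

```python
# Hypothetical provisioning script for the database-per-client route:
# create the client's database, then seed it with reference data.
import psycopg2

def provision_client_db(admin_conn, client_slug):
    # CREATE DATABASE cannot run inside a transaction block.
    admin_conn.autocommit = True
    with admin_conn.cursor() as cur:
        # client_slug must be validated/whitelisted first: identifiers
        # cannot be passed as query parameters.
        cur.execute(f'CREATE DATABASE "client_{client_slug}"')

def seed_reference_data(client_conn):
    with client_conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE states (code char(2) PRIMARY KEY, name text)"
        )
        cur.execute(
            "INSERT INTO states VALUES ('NY', 'New York'), ('CA', 'California')"
        )
    client_conn.commit()
```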
Overall I would advise starting small and simple, and only start worrying about scale when you get there.
I have made a Django web app (PostgreSQL backend) for internal use for one of my clients in New Zealand.
They have told me that they would also like it to be used by one of their branches in Malaysia (it will need to be connected to the same database). The problem is that apparently in Malaysia the internet is really unpredictable and there is a lot of downtime.
So here is the question, what would be the best way for keeping the Malaysian branch running when their internet is down and having their version of the database synchronised with the main database back here in NZ?
What you are trying to do is to synchronize your data and schema across multiple (two, in your case) PostgreSQL databases.
There are a variety of solutions to do that depending on exactly what you want to achieve. This is a good place to start - http://www.postgresql.org/docs/devel/static/high-availability.html
and the summary of the different solutions and each solution's pros and cons are listed here -
http://www.postgresql.org/docs/devel/static/different-replication-solutions.html#HIGH-AVAILABILITY-MATRIX
Please, I need help.
I have a project in which I need an application that communicates with a local DB server and simultaneously with a central remote DB server to complete some tasks (read stock quotas from the local server, create an order, and then write the order to the central orders DB, ...).
So, I don't know which architecture and technology would do this.
A web application, .NET WinForms client applications on each computer, or a web-services-based central application with client applications?
What are the general differences between these approaches?
Thanks
If you don't want to expose your database directly to the clients, I'd recommend having a web service layer in between. Depending on the sensitivity of your data and the security level of your network, I'd recommend either a web service approach (where you can manage the encryption of data yourself, without the need for expensive SSL certificates) or a web interface (which might be easier to construct, but with limitations in security).
I agree with Tomas that a web service layer might be good. However, when it comes to choosing between WebForms and WinForms, I don't think your question includes enough information to make the choice.
I'd say that if you want a powerful and feature-rich user interface and want to make development easy, WinForms is probably the way to go. But if you need it to be usable from a varied array of clients and want easier maintenance and deployment, a web app might be best.
First, focus on the exact relationship between these databases. What does "local" mean? Right there on the user's desktop? Shared between all the users in their office? Presumably the local quotes (you do mean stock quotes and not quotas?) could potentially be a little out of date relative to the central order server's view of the world. Does that matter? I place an order for 100 X at price 78.34, but the real price may be different. What is the intended behaviour?
My guess is that there is at least some business logic, and so we need to decide where that runs. One (thick-client) approach is to put that logic on the desktop; the desktop app then might write directly to the central DB. I don't tend to do this, for several reasons:
Every client desktop gets a database connection. Scaling is not good; eventually the database gets unhappy when the number of users gets very large.
If we need a slightly different app, perhaps exposed to a different set of users via the Web or whatever, we end up reproducing that business logic.
An alternative approach (thin, or browser-based) keeps the UI on the desktop but puts the logic on the server. The client can then invoke some kind of service. There are lots of possible ways of doing that; a simple Web Service or REST service will do the job (a client-side sketch follows below). I hope it's clear that this service-based approach addresses my two points above.
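As a hedged sketch of the desktop client's side of such a service call (Python with requests; the URL and payload shape are invented for illustration):

```python
# Hypothetical thin client: the desktop app never opens a database
# connection; it posts the order to a service that owns the business logic.
import requests

def place_order(symbol, quantity, limit_price):
    resp = requests.post(
        "https://orders.example.com/api/orders",
        json={"symbol": symbol, "quantity": quantity, "limit": limit_price},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"order_id": 123, "status": "accepted"}
```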
By symmetry I would treat the local databases in the same way and wrap them in services. However, it's possible that some more complex relationship between the databases exists, in which case you might need the local service layer to interact with the central service layer.
I'm touting the general principle of Do Not Repeat Yourself: implement each piece of business logic once.
I want to install a desktop application (on many stations - about 10-20) that should access the SQL Server directly, with no services and no server-side DALs.
The application will be installed on a local network of about 10 machines, one of which is a server.
When I install the program I will set the connection string, and the applications will talk directly to the SQL Server.
Is this a bad idea?
If yes, then how bad?
It is not necessarily a bad idea. If you won't need to scale then it's a valid approach.
What you are describing is often called a 2-tier client-server architecture.
You should probably encrypt the connection string in the config file (but this will only stop prying eyes, not someone intent on recovering your password). The other option is to use Windows authentication via a trusted connection (e.g. Integrated Security=SSPI in the connection string); you do lose the ability to pool connections, but that should not be an issue with 10-50 clients (ballpark).
Of course not.
What you're describing is classic client-server architecture, and around 50% of apps are still built this way, according to a survey I saw recently.
Bob.
I have built plenty of apps like that. I would suggest you build a DAL that lives in the app itself, so that if you ever need to separate the data access and presentation layers you can do so easily (plus there are other benefits, like standardization of code and a single place to change things) - a small sketch follows below.
I don't see an issue with it as long as you are consistent and follow best practices.
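A tiny hedged sketch of that in-app DAL shape (Python for brevity, though the question is about .NET; the names are invented):

```python
# Hypothetical in-app DAL: presentation code calls the repository, never
# SQL directly, so the layer can later move behind a service without
# touching the UI.
class OrderRepository:
    def __init__(self, conn):  # e.g. a psycopg2 connection
        self._conn = conn

    def orders_for_customer(self, customer_id):
        with self._conn.cursor() as cur:
            cur.execute(
                "SELECT id, total FROM orders WHERE customer_id = %s",
                (customer_id,),
            )
            return cur.fetchall()
```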
If it's on your local network, go for it.
If it's over the internet, I'd create a web service.
Also note that there are plenty of off-the-shelf DALs (NHibernate, Entity Framework), so you don't need to roll your own, and they work just as well in client-server architectures.