About Heroku DB Basic plan - database

I have a question about the "connections" limit, which is set to 20 on the Basic and Hobby Heroku DB plans. I'm new to the matter and I don't get whether the connections limit refers to connections from external clients, or whether it means that only 20 queries would be processed at a time. Some light on this would be really helpful.
And what if I'm using something like ClearDB instead?

Related

mysql - Too Many database connections on Amazon RDS

(screenshot: database connections over time)
What is the reason for this issue?
I see 3-4 connections steadily, but every 6 hours I see more than 100 database connections, which is making my RDS instance slow. Please let me know what the reasons behind this may be and what the solution could be.
It is impossible to tell the reason for this without looking through your code.
But there are a few things to look at when you investigate the issue:
Make sure that you use some sort of caching mechanism in front of your DB (Redis, Memcached, etc.); a rough sketch of this pattern follows this list.
Verify that you only make DB write operations when absolutely necessary; unneeded write operations can have a dramatic impact on your DB's performance.
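To make the caching suggestion concrete, here is a minimal cache-aside sketch in Python. It assumes a Redis instance on localhost and a hypothetical fetch_user_from_db() function standing in for whatever SQL query the application already runs; both are illustrative, not part of the original setup.

import json
import redis  # pip install redis

# Assumed: a Redis instance on localhost; host and port are illustrative.
cache = redis.Redis(host="localhost", port=6379, db=0)

CACHE_TTL_SECONDS = 300  # how long a cached row may be served before rereading the DB

def get_user(user_id, fetch_user_from_db):
    # fetch_user_from_db is a hypothetical stand-in for the existing SQL query.
    key = "user:%s" % user_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no database connection used
    row = fetch_user_from_db(user_id)      # cache miss: one short-lived query
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(row))
    return row

Every read served from the cache is one connection that never reaches RDS, which is what should flatten those periodic connection spikes if they come from read traffic.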

Microsoft Access database - queries run on server or client?

I have a Microsoft Access .accdb database on a company server. If someone opens the database over the network, and runs a query, where does the query run? Does it:
run on the server (as it should, and as I thought it did), and only the results are passed over to the client through the slow network connection
or run on the client, which means the full 1.5 GB database is loaded over the network to the client's machine, where the query runs, and produces the result
If it is the latter (which would be truly horrible and baffling), is there a way around this? The weak link is always the network; can I have queries run on the server somehow?
(Reason for asking is the database is unbelievably slow when used over network.)
The query is processed on the client, but that does not mean that the entire 1.5 GB database needs to be pulled over the network before a particular query can be processed. Even a given table will not necessarily be retrieved in its entirety if the query can use indexes to determine the relevant rows in that table.
For more information, see the answers to the related questions:
ODBC access over network to *.mdb
C# program querying an Access database in a network folder takes longer than querying a local copy
It is the latter: the 1.5 GB database is loaded over the network.
The "server" in your case is a server only in the sense that it serves the file, it is not a database engine.
You're in a bad spot:
The good thing about Access is that it's easy for people who are not developers to create forms, reports, and the like. The bad part is everything else about it. Two things in particular:
People wind up using it for small projects that grow and grow and grow, and end up in your shoes.
It sucks for multiple users, and it really sucks over a network once it gets big.
I always convert these to a web-based app backed by SQL Server or something similar, but I'm a developer. That costs money to do, but that's what happens when you use a tool that does not scale.

MSSQL Replication: If Publisher/Distributor server goes down

Background
To make a long story short, our company is facing the task of making our application redundant and more resilient to heavy loads by load balancing. The task is on my desk and I've been doing some research, as I've never done it before.
Fact
Today we host our application on one server, and the goal is to add another one to even out the server load with load balancing.
Issue
I've been doing some research and got stuck on how to set up the MSSQL replication. If one server goes down, the other one must be in sync, as users will be redirected there instead by the load balancer.
The ten-thousand-foot view of the solution goes something like: have a Publisher/Distributor on the same server, then add subscriber databases, and they will sync with each other.
Question
What happens if the Publisher/Distributor server goes down? Suddenly the system isn't redundant at all. Do we have to set up a Publisher/Distributor on each subscriber server to take over the role? I've been searching around and haven't found a good answer.
Just say so if the explanation is confusing and I'll fill in the blanks.
Thanks in advance!

Theory of how to sync (two) databases

What steps should I take if I'd like to synchronize two databases, e.g. every 15 minutes?
What practical advice can you give me if I'd like to sync a MySQL and an MSSQL database?
The theory behind true replication (something like MySQL to MySQL) is very complicated and difficult. I wouldn't recommend trying to implement something like that for MySQL to SQL Server.
Some things to look at:
Look at Mule ESB (http://www.mulesoft.org/). You can get off the ground pretty fast with JDBC connections to MySQL and SQL Server. Then it's just a matter of how often you want to poll one endpoint and push to another. (For example, poll MySQL every 15 minutes and write the results to SQL Server; a rough sketch of that polling loop follows this list.)
You can write your own syncing program. Maybe export data from one system every 15 minutes and write it to the file system, then have another program watch that directory and import anything it sees. (The disadvantage is that you have to touch the disk.)
To be really creative, you can write triggers in MySQL and SQL Server that fire an external process to send data. That way, when a record gets touched, it sends a message to the other database in near real time.
Try to make the schemas the same. MySQL and SQL Server share many of the same data types, so definitely try not to use data types that are specific to one of the two databases. (For example, I don't believe MySQL supports the "xml" data type. But maybe I'm wrong?)
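Here is the rough polling sketch mentioned in the first item above, written in Python: pull the rows that changed in MySQL since the last run and push them into SQL Server. Everything specific in it is an assumption for illustration only: the mysql-connector-python and pyodbc drivers, a customers table that exists on both sides, and a last_modified column used to detect changes.

import time
from datetime import datetime
import mysql.connector  # pip install mysql-connector-python
import pyodbc           # pip install pyodbc

POLL_SECONDS = 15 * 60  # the 15-minute interval from the question

def sync_once(last_seen):
    # Connection details are placeholders, not real servers or credentials.
    src = mysql.connector.connect(host="mysql-host", user="sync_user",
                                  password="secret", database="appdb")
    dst = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                         "SERVER=mssql-host;DATABASE=appdb;UID=sync_user;PWD=secret")
    try:
        cur = src.cursor()
        # Pull only the rows changed since the previous poll (assumes a last_modified column).
        cur.execute("SELECT id, name, last_modified FROM customers "
                    "WHERE last_modified > %s", (last_seen,))
        wcur = dst.cursor()
        for row_id, name, modified in cur.fetchall():
            # Naive upsert (delete, then insert) keeps the sketch short.
            wcur.execute("DELETE FROM customers WHERE id = ?", row_id)
            wcur.execute("INSERT INTO customers (id, name, last_modified) "
                         "VALUES (?, ?, ?)", row_id, name, modified)
            if modified > last_seen:
                last_seen = modified
        dst.commit()
    finally:
        src.close()
        dst.close()
    return last_seen

if __name__ == "__main__":
    watermark = datetime(1970, 1, 1)
    while True:
        watermark = sync_once(watermark)
        time.sleep(POLL_SECONDS)

A real version also has to deal with deletes and with rows edited on both sides between polls, which is exactly where this approach starts to hurt and the ESB or trigger options become more attractive.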

How many data sources can ColdFusion handle?

We have a ColdFusion Enterprise server with 2 instances. Each instance has 200+ data sources pointing to databases on one MSSQL server, and this number will keep growing. Now it seems that requests to a single data source are getting slower even though the database is small. Is it possible that requests get slower when CF has more data sources?
Are the data sources partitioned for a reason (e.g. different clients/customers)? If this is really just one big application with a bunch of databases, you may be able to reduce the number of DSNs by using cross-database queries through a single CF data source.
If the account CF is using to connect to SQL Server has read access to both databases on the server, you can do something like this:
SELECT field1, field2, field3...
FROM [databaseA].[dbo].Table1 T1
JOIN [databaseB].[dbo].Table2 T2 ON ...
I've done this with State and Country tables that are shared across multiple DBs. Set the permissions carefully to prevent damage or errant updates.
Of course it's possible. I doubt there are many people with this kind of experience, so we can only guess.
Personally, I'd never create that many databases in SQL Server, or that many data sources in CF. IMHO, using DB schemas would be a much better solution: easier to maintain, administer, and so on.
How's the situation with memory? It could be that the huge number of JDBC connections is choking the server. I'd check memory consumption first, then SQL stats to see data throughput, and maybe later even SQL Server's performance settings, CF's settings for the maximum concurrent JDBC connections, network settings, and so on.
Again, just guessing and trying to give you a hint where to look.
There's more to it than just ColdFusion. Each connection is about 4k, and each data source can use multiple connections, so 200 DSNs might equal 300 or 400 connections (or 800 or 1,000 when aggregated across both instances). The DB server itself uses "tempdb" as a workspace for handling requests. It expands this workspace to handle the traffic, but it is a shared resource in a way, so one DB can have an impact on another DB on the same server.
I would:
Check the total number of connections on the SQL Server (perfmon has some good counters for this; a query-based sketch follows this list).
Use server monitor to get a sense of the total number of connections on each instance.
Use network monitoring to determine what capacity the network connection on each server is using...
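For the first item in that list, one way to get the number without perfmon is to ask SQL Server itself. A minimal sketch, assuming the pyodbc driver and an account allowed to read the sys.dm_exec_sessions view; the connection string is only a placeholder:

import pyodbc  # pip install pyodbc

# Placeholder connection string; point it at the SQL Server the CF instances use.
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=mssql-host;Trusted_Connection=yes")

# Count user sessions grouped by the client host and program that opened them,
# so the connections coming from each ColdFusion instance show up as their own rows.
sql = """
SELECT host_name, program_name, COUNT(*) AS connections
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY host_name, program_name
ORDER BY connections DESC;
"""

for host, program, connections in conn.cursor().execute(sql):
    print(host, program, connections)

Run it while things are slow and you will see whether those 200+ DSNs really are ballooning into the hundreds of aggregated connections estimated above.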
Of course it goes without saying that your databases need to be fine-tuned to perform well (indexed and optimized, with a good schema and backstopped by good query code). Creating a scalable solution requires all of these things :)
PS - it goes without saying you can contact me for more "formal" help. I'll be glad to chat about your problem.
