In my Postgres database configuration, all roles have a value of -1 for the rolconnlimit column, which means unlimited connections are allowed for each role. But there is also a max_connections setting with the value 100.
Does this mean that the total number of connections from all users, whatever role they are bound to, may never exceed 100?
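For what it's worth, here is the rough sanity check I'm doing (just comparing the server-wide cap with what is currently in use):
SHOW max_connections;                  -- server-wide cap, 100 in my case
SELECT count(*) FROM pg_stat_activity; -- sessions currently open (newer versions also list background workers here)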
Is TDengine able to change the replica number of a super table (or child table) online? For example, change the replica number from 3 to 5. Will the data be copied to the new replicas automatically?
In TDengine you can change the number of replicas using the following command:
ALTER DATABASE db_name REPLICA X;
The "x" represent the number of replica. However the X is an integer among[1,3], beside you also need to ensure the replica must less or equal to the number of dnodes.
Using the command below to check the num of dnode in tdengine.
show dnodes;
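Putting the two together, a rough example (the database name power is just a placeholder; the target replica count must not exceed the number of dnodes reported by the previous command):
show dnodes;
ALTER DATABASE power REPLICA 3;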
I am running an MS Access database with a SQL Server back end, managed through SQL Server Management Studio 18. For some reason, when I created a new entry, 994 identity numbers were simply skipped. My last identity number was 19311, and then it suddenly jumped to 20305 when the record was captured. What can I do to let it run on from 19311 again?
This is pretty normal.
An identity seed is allocated before a query is committed. This means that if you run a query that inserts 100 records, but you press Cancel at the prompt asking whether you really want to add those 100 records, the identity seed is still incremented by 100. The same goes for copy-pasting records and many, many other operations.
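You can see this for yourself with a small throwaway table (the names below are made up purely for illustration):
CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1) PRIMARY KEY, Val INT);
BEGIN TRANSACTION;
INSERT INTO dbo.Demo (Val) VALUES (1);  -- consumes identity value 1
ROLLBACK TRANSACTION;                   -- the row is gone...
INSERT INTO dbo.Demo (Val) VALUES (2);  -- ...but this row still gets Id = 2
The seed is not rolled back together with the data, which is exactly how the gaps appear.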
You shouldn't need to prevent this from happening. Identity values are not meant to convey any meaning, and there shouldn't be any real need to change them. If you've set your identity column to an Int(8) or Long Integer, you still have plenty of numbers to use.
SQL Server explicitly blocks updating an identity column, and you also can't reseed a unique column below the initially set seed. This means that as soon as you've inserted number 20305, you can't reset the counter to anything lower than 20305.
You can work around that limitation by deleting all records with identity values of 20305 and higher, then running DBCC CHECKIDENT ( table_name ) on SQL Server with your table name to reset the seed to the highest remaining value, and finally re-adding the deleted records.
See this Q&A for more on reclaiming the lost numbers, though I certainly advise against it.
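If you do go down that road anyway, a hedged sketch of those steps (dbo.MyTable and Id are placeholders, and I'm using the explicit RESEED form rather than relying on the default behaviour):
-- copy the rows with Id >= 20305 somewhere safe first, then:
DELETE FROM dbo.MyTable WHERE Id >= 20305;
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 19311);  -- the next insert gets 19312
-- finally re-insert the copied rows; they will receive new identity values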
In Oracle GoldenGate, I'm unable to replicate a production sequence to the replica database: when the sequence increases by 1 in production, the sequence on the target increases by 2.
Let me elaborate. Suppose I have a sequence with currval 190, and assume that after the initial load the target sequence also has currval 190.
Now I booked a deal and the sequence got increased by 1 in production, so currval is 191, but when I checked the target DB, the sequence currval shows 192. This is creating an issue. Need help in resolving this...
Did you follow the procedure below for your Replicat?
1. Run sequence.sql in Oracle SQL*Plus.
2. ALTER TABLE sys.seq$ ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
There are a couple of scenarios in which this can happen.
Scenario 1: If the replication setup is bi-directional, sequences are kept at a value of sequence+1 on the target database. This is done so that, if a failover or switchover from the source to the target database ever has to happen, there is no need to reset the sequence to a higher value. Check with your GoldenGate DBA for more details on how the sequences are being maintained.
Scenario 2: In bi-directional replication with conflict detection and resolution, sequences are maintained so that they can be uniquely identified.
E.g.: the primary site will have sequences that are always odd, and the standby site will always have sequences that are even. By doing this you can clearly identify on which database a given sequence value was generated.
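A rough sketch of that odd/even convention (deal_seq is a made-up name):
-- on the primary site:
CREATE SEQUENCE deal_seq START WITH 1 INCREMENT BY 2;  -- 1, 3, 5, ...
-- on the standby site:
CREATE SEQUENCE deal_seq START WITH 2 INCREMENT BY 2;  -- 2, 4, 6, ...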
Here is my scenario and why I need a row lock across transactions:
change the column's value to 5 (in SQL Server)
change the column's value to 5 (in another resource; this could be a file, etc.)
Of course, that is how it goes when everything works. But if any problem occurs during the second change, I need to roll back the first one. Also, while the second change is in progress, nobody should be allowed to read or write this row in SQL Server.
So I need to do this:
lock the row
change the column's value to 5 (in SQL Server)
change the column's value to 5 (in the other resource)
if the above change succeeded
commit
else
rollback
unlock the row
And I also need something for the Murphy case: if I cannot reach the database after locking the row (in order to unlock it or roll back), the lock should be released automatically within a few seconds.
Is it possible to do something like this in SQL Server?
Read up on distributed transactions and compensating resource managers. Then you'll realize you can do all of that in ONE transaction, managed by your transaction coordinator.
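For the SQL Server half of that single transaction, a minimal sketch (dbo.MyTable, MyColumn and Id = 42 are placeholders; the external-resource update is whatever your application does between these statements):
SET XACT_ABORT ON;  -- any runtime error rolls the whole transaction back
BEGIN TRANSACTION;
UPDATE dbo.MyTable SET MyColumn = 5 WHERE Id = 42;  -- exclusive row lock held until commit/rollback
-- <application updates the file / other resource here>
COMMIT TRANSACTION;  -- or ROLLBACK TRANSACTION if the second step failed
If the client connection dies while the transaction is still open, SQL Server rolls it back and releases the lock once it notices the broken session, which also covers the Murphy case.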
I need to find out whether a connection limit has been set on a PostgreSQL database on a per-user basis.
I know you can set such a limit using:
ALTER USER johndoe WITH CONNECTION LIMIT 2;
Can you check this in the pg_users table?
Whilst connected to the database, you can get this information with:
SELECT rolname, rolconnlimit
FROM pg_roles
WHERE rolconnlimit <> -1;
More details are available at http://www.postgresql.org/docs/current/static/view-pg-roles.html
This information is available in the column rolconnlimit in the view pg_roles.
http://www.postgresql.org/docs/current/static/view-pg-roles.html
For roles that can log in, this sets maximum number of concurrent connections this role can make. -1 means no limit.
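For example, to check the limit for the user from the question above (johndoe):
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolname = 'johndoe';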