Hi there!
I've been trying to set the v$session machine field. The reason involves containers: we're working with a pod that opens Oracle DB connections, and each session records the pod's name in machine, but when the pod dies, the next session's machine value changes.
We apply a validation on every login to the DB. We use the machine field to decide whether the session may interact with tables, views, etc.
So we need to set machine to something like app-1.apps.openshift.com. That's the container's hostname, so if the pod dies, the machine field won't change and the flow continues quietly.
Does anyone know how to set the session's machine value?
I've tried from the Oracle drivers for Go and Python, but I don't get the results I want.
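For reference, this is the kind of thing I've been attempting: a rough sketch using python-oracledb in thin mode, which, as far as I can tell, accepts a machine connection parameter that is reported in v$session (host, service name, and credentials below are placeholders, not our real values):

    # Sketch only: python-oracledb (thin mode) lets the client supply the
    # "machine" value it reports to the server. Host, service name, and
    # credentials are placeholders.
    import oracledb

    conn = oracledb.connect(
        user="appuser",
        password="secret",
        dsn="dbhost:1521/orclpdb1",
        machine="app-1.apps.openshift.com",  # desired v$session.machine value
    )

    # Check what the server recorded (needs SELECT rights on v$session).
    with conn.cursor() as cur:
        cur.execute(
            "SELECT machine FROM v$session "
            "WHERE sid = sys_context('USERENV', 'SID')"
        )
        print(cur.fetchone())

I'm not sure whether the Go driver exposes anything equivalent.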
I hope you can help me.
With love, Bran. <3
MaxScale distributes requests to the MariaDB master/slave servers on which the database is located.
What I need is a script running as a cron job (or something similar) which verifies the GTIDs of the master and the slaves. If a slave's GTID differs from the master's GTID, I want to be informed/alarmed via email.
Unfortunately I have no idea whether this is possible, or how to do it.
You can enable gtid_strict_mode to automatically stop replication if GTIDs from the same domain conflict with what is already in the binlogs. If you are using MaxScale, it will automatically detect this and stop routing to that server.
Note that this will not prevent transactions from other GTID domains from causing problems with your data. This just means you'll have to pay some attention if you're using multi-domain replication.
If you want to be notified of this, you can use the script option in MaxScale to have a custom script launched whenever a server stops replicating. Alternatively, a cron-based check along the lines you describe is sketched below.
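If you would rather poll the servers directly from cron, a rough sketch of such a check might look like the following. It assumes PyMySQL and MariaDB's gtid_binlog_pos / gtid_slave_pos variables; all hosts, credentials, and mail addresses are placeholders to adapt:

    #!/usr/bin/env python3
    # Sketch of a cron job: compare the master's GTID position with each
    # slave's and send an email when they differ. Hosts, credentials, and
    # addresses are placeholders.
    import smtplib
    from email.message import EmailMessage

    import pymysql

    MASTER = {"host": "db-master.example.com", "user": "monitor", "password": "secret"}
    SLAVES = [
        {"host": "db-slave1.example.com", "user": "monitor", "password": "secret"},
        {"host": "db-slave2.example.com", "user": "monitor", "password": "secret"},
    ]

    def gtid_pos(cfg, variable):
        """Read one GTID position variable from one server."""
        conn = pymysql.connect(**cfg)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT @@" + variable)
                return cur.fetchone()[0]
        finally:
            conn.close()

    master_pos = gtid_pos(MASTER, "gtid_binlog_pos")
    lagging = []
    for slave in SLAVES:
        pos = gtid_pos(slave, "gtid_slave_pos")
        if pos != master_pos:
            lagging.append((slave["host"], pos))

    if lagging:
        msg = EmailMessage()
        msg["Subject"] = "GTID mismatch between master and slaves"
        msg["From"] = "monitor@example.com"
        msg["To"] = "dba@example.com"
        msg.set_content(
            "Master at " + master_pos + "\n"
            + "\n".join(h + " at " + p for h, p in lagging)
        )
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

Bear in mind that a momentary difference is just normal replication lag, so in practice you would probably only alert when the same gap persists across several runs.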
I am trying to install MediaWiki version 1.31 locally and I have run into some issues that I can't get past. Mainly, when I input the database connection information (I am trying to connect to a PostgreSQL database), it returns this error.
The thing is, the port I am trying to connect to is 5433, not 5432. Also, the names "template1" and "postgres" are not included in my input through the dialogue screen; I don't know where they came from. "test1" is the name of the database I am trying to connect to.
Any help or advice on how to get through this error would be greatly appreciated. Thank you.
The fact that the port you specify is not used while setting up the database schema is a long-standing known bug. One workaround is to run your database on the default port until you have the wiki set up, then change it back to the port you want.
In order to create a new database, you need to connect to an existing database in the same cluster. 'template1' and 'postgres' are pre-existing databases (usually created at the time the cluster was created) that are commonly connected to in order to create a new database. These names are "well-known", so you don't need to specify them.
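For illustration, this is what creating the new database by hand looks like; a sketch using psycopg2, with the port from the question and placeholder credentials:

    # Sketch: connect to the pre-existing "template1" database on the
    # non-default port, then create the target database from there.
    # Credentials are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="localhost", port=5433, dbname="template1",
        user="postgres", password="secret",
    )
    conn.autocommit = True  # CREATE DATABASE cannot run in a transaction block
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE test1")
    conn.close()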
I'm trying to set up mirroring between two SQL Server 2008 databases on different servers on my internal network, as a test run before doing the same thing with two live servers in different locations.
When I actually try to switch on mirroring on the target DB (with ALTER DATABASE testdb SET PARTNER = N'TCP://myNetworkAddress:5022'), I get an error telling me that the server network address cannot be reached or does not exist. A little research suggests this is a fairly unhelpful message that pops up for a number of possible reasons, some of which are not directly related to whether the server exists.
So far I've checked and tried the following to solve this problem:
On the target server, I've verified in SQL Server Configuration Manager that "Protocols for SQLEXPRESS" (my local installation is labelled SQLEXPRESS for some reason, even though querying SERVERPROPERTY('Edition') reveals that it's 64-bit Enterprise) and the client protocols for SQL Native Client 10 all have TCP/IP enabled.
I'm using a utility program called CurrPorts to verify that the TCP/IP port specified by the mirroring setup (5022) is open and listening on my machine. netstat verifies that both machines are listening on this port.
I've run SELECT type_desc, port FROM sys.tcp_endpoints; and SELECT state_desc, role FROM sys.database_mirroring_endpoints; to ensure that everything is set up as it should be. The only thing that confused me was that "role" returns 1; I'm not entirely sure what that means.
I've tried to prepare the DB correctly. I've taken backups of the database and the log from the master DB and restored them on the target server WITH NORECOVERY. I've tried turning mirroring on both while leaving them in the restoring state and after running an empty RESTORE ... neither seems to make much difference. Just as a test I also tried to mirror an inactive, nearly empty database that I created, but that didn't work either.
I've verified that neither server is behind a firewall (they're both on the same network, although on different machines).
I've no idea where to turn next. I've seen these two troubleshooting help pages:
http://msdn.microsoft.com/en-us/library/ms189127.aspx
http://msdn.microsoft.com/en-us/library/aa337361.aspx
And as far as I can tell I've run through all the points to no avail.
One other thing I'm unsure of is the service accounts box in the wizard. For both databases I've been putting in our high-level access account name which should have full admin permissions on the database - I assumed this was the right thing to do.
I'm not sure where to turn next to try and troubleshoot this problem. Suggestions gratefully received.
Cheers,
Matt
I think that SQL Express can only act as a witness server with this SQL feature; you might get better mileage on ServerFault, though.
Mike.
Your network settings might be OK. MS SQL gives quite uninformative error messages here; the problem might be an authorization issue while the server still says "the server network address can not be reached".
By the way, how is the authentication performed? The MSSQL service account on server1 must itself be a valid login on server2 (and vice versa), with CONNECT permission on the mirroring endpoint, in order to make mirroring work.
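For illustration, granting that permission might look like the following sketch (using pyodbc here; the server name, endpoint name, and service account are placeholders, and the same grant must be made on the other partner with the names swapped):

    # Sketch: on server1, create a login for server2's service account and
    # grant it CONNECT on the mirroring endpoint. All names are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=server1;Trusted_Connection=yes",
        autocommit=True,
    )
    cur = conn.cursor()
    cur.execute("CREATE LOGIN [DOMAIN\\sqlsvc2] FROM WINDOWS")
    cur.execute("GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\\sqlsvc2]")
    conn.close()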
I'm getting this error when running an SSIS package through SQL Agent:
Failed to acquire connection "ORACLE ADO.NET". Connection may not be configured correctly or you may not have the right permissions on this connection.
When I log on as the SQL Agent User and run the ssis package directly it is fine. When I then execute it through the SQL agent job, it fails.
I've read around extensively on this topic, and it seems a lot of the advice concerns how you are logged in, configuring proxy accounts, etc., none of which has been helpful.
I am logging onto an Oracle database with an ADO.NET connection. The connection string is as follows (data source, user id and password have been changed):
Data Source=DATASOURCE;User ID=userid;Password=password;Persist Security Info=True;Unicode=True;
I'm loading this from a registry setting using package configuration. To check that I am getting the correct string, I am writing it into a temporary log table. I am definitely getting the string I need from the correct registry setting.
I've tested the Oracle login credentials through PL/SQL Developer, and it lets me log in just fine.
As far as I can tell, since I'm using an explicit user name and password for the Oracle connection, it just shouldn't matter who the SSIS package runs as. The only point of failure I can see would be the reading of the information from the registry, but that seems fine.
I'm really quite baffled, I must confess, and would appreciate any help some of the splendid experts here can offer.
Many thanks,
James
Ok, tracked this one down after quite a lot of pain.
It was working fine in one environment, but not another, so I fired up Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) and ran a package through the SQL Agent job, comparing which system entities were hit in each environment.
On the failing environment, at the point of the bulk transfer operation, the package attempted to get the Oracle 11 client DLL, and then hung.
I knew that this was installed and, moreover, that the DLL path was a system environment setting. Further investigation revealed that the server had not been rebooted since the Oracle client install, so the SQL Server Agent process had not been recycled.
Yes, can you believe it, the old helpdesk fix "Can you reboot your computer?" worked.
Sigh!
We had issues at a client with packages connecting to Oracle that were stored on our SQL Server instance. The workaround we found was to change the package's protection level property to "Don't Save Sensitive Data" and, for security purposes, to encrypt the username and password in the package configuration, to be decrypted by a UDF in SQL Server. Of course, before you try the whole encryption part, I would recommend putting the username and password in the package configuration without encrypting the values, to see whether changing the protection level setting solves your specific problem. I hope this helps.
I was getting this error when the tnsnames.ora file did not have a valid entry for the environment.
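For reference, a valid tnsnames.ora entry has roughly this shape (the alias, host, and service name are placeholders for whatever your environment uses):

    MYDB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
      )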
Suppose the following:
I have a database set up on database.mywebsite.com, which resolves to IP 111.111.1.1, running from a local DNS server on our network.
I have countless ASP, ASP.NET and WinForms applications that use a connection string utilising database.mywebsite.com as the server name, all running from the internal network.
Then the box running the database dies, and I switch over to a new box with an IP of 222.222.2.2.
So, I update the DNS for database.mywebsite.com to point to 222.222.2.2.
Will all the applications and computers running them have cached the old resolved IP address?
I'm assuming they will have.
Any suggestions along the lines of "don't have your IP change each time you switch box" are not too welcome as I cannot control this aspect of the situation, unfortunately. We are currently using the machine name of the box, which changes every time it dies and all apps etc. have to be updated with the new machine name. It hurts.
Even if the DNS is not cached locally on the machine, it will likely be cached somewhere along the DNS chain between the machine and the name servers, at least for a short while. My understanding is that this situation would usually be handled with IP takeover, where you just make the new machine 111.111.1.1.
Probably a question for serverfault.
You're looking for the DNS TTL (Time To Live), I guess. In principle, applications should cache the IP for at most the value of the TTL. I'm afraid, however, that some applications/technologies might actually cache it longer (again, in my opinion, completely wrong).
Each machine will cache the IP address.
The length of time it is cached is the TTL (Time To Live). This is a setting on your DNS server; if you set it very low, say 5 minutes, then you should be up and running fairly quickly. A bit of a hack, but it should work.
Yes, the other comments are correct in that what controls this is the DNS TTL set for the hostname database.mywebsite.com.
You'll have to decide what the maximum amount of time is that you're willing to wait if your primary address (111.111.1.1) fails after you make the switch to the secondary address. Lower settings will give you a quicker recovery time, but will also increase the load and bandwidth on your DNS server, because clients will have to re-query it more often to refresh their cache.
You can use nslookup with the -d option from your cmd prompt to see the default TTL and remaining TTL for the DNS server you are querying.
%> nslookup -d google.com
You should assume that they are cached, for two reasons not clearly mentioned before:
1- Many "modern" versions of OS families do DNS caching.
2- Many applications do DNS caching or have poor error/failure detection on live connections and/or opening new connections. This would possibly include your database client.
Also, this behaviour is probably not well documented. I did some googling and found this for MySQL:
http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-connecting-connection-string.html#connector-net-programming-connecting-errors
It does not clearly explain its behavior in this regard.
I had a similar issue with a web site that disables the application pool recycling features and runs for weeks on end. Sometimes a clustered SQL Server box would restart and, for some reason, my SqlConnections were not reconnecting. I was getting the error:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
The server was there - and running - in fact, if I just recycled the app pool, the app would work fine - but I don't like recycling app pools!
The connections that were being held in the connection pool were somehow using old connection information, and that could have been old IP addresses. This is what seems so similar to the poster's question: it appears to be cached DNS information, because as soon as some sort of cache is cleared, the app works fine.
This is how I solved it - by forcing all of the connections in the pool to be re-created:
Try
    ' Example: SqlDependency, but this could also be any SqlConnection.Open call
    Dim result As Boolean = SqlClient.SqlDependency.Start(ConnStr)
Catch sqlex As SqlClient.SqlException
    ' Flush the pool so stale connections are discarded and re-created
    SqlClient.SqlConnection.ClearAllPools()
End Try
The code sample is just the boiled-down basics - it should be tweaked for your situation!
The DNS gets cached, but for any server that resolves to the wrong IP address, you can update the HOSTS file on that server, and the IP will be picked up immediately. This could be a solution if a limited number of servers access your database server.
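For example, on a Windows client the entry would look something like this (using the new address from the question):

    # C:\Windows\System32\drivers\etc\hosts
    222.222.2.2    database.mywebsite.com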