Hi all,
I tried posting on the Amazon forum but didn't get a response. TCPS is needed for my Oracle database server to be SSL enabled. It looks like there is no option to open port 2484, or any other port, for TCPS. Is this true on Amazon instances? Please confirm.
Thanks,
SR
Unless you're using EC2 security groups or you have a local firewall (iptables), the port should already be open. Try running iptables -L -v to check for local firewall rules that came with the AMI you are using.
As a point of fact, it's worth noting that by default, ports on any system are "open" until they are blocked by a firewall; "open" effectively means "not blocked." That doesn't mean they are in use; a system without a firewall can be quite secure if it has no programs bound to or listening on the network, although this is rarely practical. (The words 'bound' and 'listen' come from the system calls bind(2) and listen(2), which a program calls to start accepting connections on a given port.)
In short, if there's no firewall in the way, you may not have to do anything at all to "open" a port. Once Oracle has been configured to use TCPS, it will begin using the port automatically.
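For what it's worth, enabling TCPS is an Oracle Net configuration change rather than anything Amazon-specific. A rough sketch of the listener.ora side, assuming the Oracle wallet and server certificate are already set up (the hostname and wallet path below are placeholders):

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCPS)(HOST = dbhost.example.com)(PORT = 2484))
    )
  )

WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet))
  )

You will also still need to allow inbound TCP 2484 in the instance's EC2 security group (or any iptables rules) for remote clients to reach it.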
I am really a beginner; however, I learn fast, and I thought that PostgreSQL was interesting, so I am studying it, but it does not look like this question has been asked. The default database port seems to be 5432, but what does that number mean? Can I use any number?
Sorry if this is out of place, really new to this world!
Yes, you can specify any port you wish, subject to the limits of your host OS (Unix-oriented OSes restrict binding to ports under 1024 to privileged processes) and your sysadmins' rules. It's generally best to choose a port number not already claimed by an app that might be in use on your servers.
Postgres does not care about which port you choose. But you must communicate to Postgres if you want it to listen for incoming connections on any port other than the default 5432. Do so by changing its configuration settings.
The client app connecting to the Postgres server must also be configured to use the correct port if you are not using the default 5432.
If you build your own Postgres by compiling from source, you can specify a different default port as discussed here.
If you install more than one Postgres cluster on a machine, as happens when testing multiple versions of Postgres, then you must change the port number. Each cluster must be listening on its own port number.
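A minimal sketch of the change, assuming a stock install (the example port 5433 is arbitrary): set the port in postgresql.conf, restart the server, then pass the same port to the client.

# postgresql.conf
port = 5433

# restart Postgres, then connect with the matching port
psql -h localhost -p 5433 -d mydb

Clients using libpq can also set the PGPORT environment variable instead of passing -p every time.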
I have hosted my WebApp on server 1 and my database on server 2, but I'm getting the following error:
Communication with the underlying transaction manager has failed.
I googled and found a post which mentioned that it is an issue with DTC (the Distributed Transaction Coordinator).
I enabled DTC on server 2 (the DB server) and added an exception for it in the firewall.
But I still get the same error.
Here is the full stack trace:
Message: System.Transactions.TransactionManagerCommunicationException: Communication with the underlying transaction manager has failed. ---> System.Runtime.InteropServices.COMException: The MSDTC transaction manager was unable to pull the transaction from the source transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers. (Exception from HRESULT: 0x8004D02B)
at System.Transactions.Oletx.IDtcProxyShimFactory.ReceiveTransaction(UInt32 propgationTokenSize, Byte[] propgationToken, IntPtr managedIdentifier, Guid& transactionIdentifier, OletxTransactionIsolationLevel& isolationLevel, ITransactionShim& transactionShim)
at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken)
Kindly advise.
We had the exact same situation, and more than once. Each time, it was one of the following:
The IP address in the DNS for the server is outdated (as said in the error message: "two machines cannot find each other by their NetBIOS names"). You can check if this is the case by trying ping servername from one server to the other at the command prompt. If the ping by name fails and ping by IP succeeds (or ping by name returns the wrong IP), then you should talk to the System Admins to take a look at DNS/DHCP.
The servers were created as an image of a preconfigured server (for example, if you are working with virtual machines and, instead of doing a fresh install for each of the servers, you simply clone the image). This is a problem because DTC has an internal "Identifier", and in the case of image cloning both of your installations now have the same DTC ID and won't be able to communicate with each other. The solution is simply to uninstall and reinstall the DTC.
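For the cloned-image case, the reinstall is roughly this, from an elevated command prompt (sketch only; check your own environment first, and the MSDTC service may need to be restarted afterwards):

msdtc -uninstall
msdtc -install
net start msdtc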
Hope it helps.
Things to check:
Have you done this configuration on both servers?
Are both servers members of the same domain?
Have you checked the event log?
I had the same problem while connecting to a remote SQL Server.
The solution in my case was to add "enlist=false" to the connection string.
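For reference, that keyword goes directly into the SqlClient connection string; a sketch with placeholder server/database names:

Server=server2;Database=MyAppDb;Integrated Security=SSPI;Enlist=false

Note that this stops the connection from enlisting in the ambient distributed transaction at all, so it only fits if you don't actually need the transaction to span both servers.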
I was missing quite a lot of things:
No authentication (as the DB server and APP server are not within the same AD domain)
A Windows Firewall rule allowing msdtc.exe (see the sketch after this list)
A rule in the firewall between the DMZ and the internal zone for TCP 135 and 1024-65535 in both directions. The link tells you how to restrict the firewall policy to a few ports only.
Short and long server names in the hosts files, or a shared DNS server, e.g. 192.168.1.1 app1 as well as 192.168.1.1 app1.domain.local
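As an illustration of the msdtc.exe firewall rule above (sketch only; the rule name is arbitrary, and on pre-Vista versions of Windows the older "netsh firewall" syntax applies instead):

netsh advfirewall firewall add rule name="MSDTC" dir=in action=allow program="%windir%\system32\msdtc.exe" enable=yes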
On the other hand, based on this link, my setup doesn't require:
Allow Remote Clients
Allow Remote Administration
Enable XA Transactions (required prior to Windows Server 2003 SP1)
Solved after adding the remote IP and machine name to the hosts and lmhosts files on the server, in the folder C:\Windows\System32\drivers\etc.
One of our servers displayed this error after the Virtual Machine (VM) controlling our Domain Controller froze. Several related communication problems also started to pop up (like failed password resets). Resetting the frozen VM fixed the issue.
Lots of helpful answers already given.
One problem for me was the presence of invalid (Cyrillic) characters in the computer name.
And there is also a way to validate the connection between two servers (or between a server and a computer) using a small tool from Microsoft called DTCPing.
Is there any way to detect if I am connected to a VPN using standard Windows APIs in C?
Basically I have a client that has to sync with a server, but only if the VPN is connected. This could be a standard Windows VPN client or Citrix.
Is RAS helpful here?
Thank you; code is appreciated.
EDIT: to make it clearer.
This is a client that will run on our customers' computers, and they set up the VPN and server however they want. So I wanted to know if Windows keeps a setting somewhere that I can read via an API, the registry, WMI, or whatever, that can tell me VPN: yes or no, and if yes, the connection info.
With the VPN up, I suspect you are able to access resources that don't exist otherwise. So you could ping-test a server on the VPN network. ICMP is the protocol used by ping.
Here are some examples: http://www.alhem.net/project/ex10/index.html
Your IP space should be different depending on whether or not you're on the VPN - if the VPN is set up right, the server shouldn't even be accessible unless you're on the VPN. You could try to ping the server, and only perform the sync if you get a response?
I'm fairly certain that one of the selling points of VPN is that userland applications should be, on the whole, entirely unaware of its existence. Your best course of action is likely to query, using COM or some other form of IPC, known VPN provider services, or just see if they are alive and/or active, and infer the situation based on this evidence.
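On the "Is RAS helpful?" point in the question: for VPNs set up with the built-in Windows client, enumerating active RAS connections is one concrete way to see whether such a connection is alive. A minimal sketch, assuming a RAS-based VPN (it will not see Cisco, Nortel or Citrix clients):

/* Sketch: list active RAS connections; a device type of "VPN" indicates a
   Windows VPN connection is up. Link against rasapi32.lib. */
#include <windows.h>
#include <ras.h>
#include <stdio.h>
#pragma comment(lib, "rasapi32.lib")

int main(void)
{
    RASCONNA conns[8];
    DWORD cb = sizeof(conns);
    DWORD count = 0;
    DWORD i, rc;

    conns[0].dwSize = sizeof(RASCONNA);  /* the API requires dwSize on the first element */
    rc = RasEnumConnectionsA(conns, &cb, &count);
    if (rc != ERROR_SUCCESS) {
        printf("RasEnumConnectionsA failed: %lu\n", rc);
        return 1;
    }
    if (count == 0) {
        printf("no active RAS connections - probably not on a Windows VPN\n");
        return 0;
    }
    for (i = 0; i < count; i++) {
        printf("active: %s (device type: %s)\n",
               conns[i].szEntryName, conns[i].szDeviceType);
    }
    return 0;
}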
I have looked for vendor specific registry settings to determine if the tunnel is active. This works well with Nortel and Cisco VPN clients.
Can your app lookup the IP of a domain name that's only available through the VPN? If the name lookup fails, you're not on the VPN. If the general Internet can't query the DNS server on the VPN, this may be a workable solution (but maybe not generalized enough for your needs?). You can then try connecting to that IP -- something that will only succeed if you're on the VPN.
You could even have a public DNS server provide the IP address. Just use a special hostname that never resolves to a public IP. If the VPN isn't up, you won't be able to reach that address.
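A minimal sketch of that name-lookup probe in C with Winsock; vpn-only.internal.example is a placeholder for a hostname that only the VPN's DNS server can resolve:

/* Sketch: if the VPN-only name resolves, assume the tunnel is up.
   Link against ws2_32.lib. */
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    struct addrinfo hints;
    struct addrinfo *result = NULL;
    int rc;

    WSAStartup(MAKEWORD(2, 2), &wsa);

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    rc = getaddrinfo("vpn-only.internal.example", NULL, &hints, &result);
    if (rc == 0) {
        printf("name resolved - VPN looks up\n");
        freeaddrinfo(result);
    } else {
        printf("name did not resolve (error %d) - VPN looks down\n", rc);
    }

    WSACleanup();
    return 0;
}

As noted above, you would normally follow a successful lookup with an actual connection attempt before trusting the result.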
Suppose the following:
I have a database set up on database.mywebsite.com, which resolves to IP 111.111.1.1, running from a local DNS server on our network.
I have countless ASP, ASP.NET and WinForms applications that use a connection string utilising database.mywebsite.com as the server name, all running from the internal network.
Then the box running the database dies, and I switch over to a new box with an IP of 222.222.2.2.
So, I update the DNS for database.mywebsite.com to point to 222.222.2.2.
Will all the applications and computers running them have cached the old resolved IP address?
I'm assuming they will have.
Any suggestions along the lines of "don't have your IP change each time you switch box" are not too welcome as I cannot control this aspect of the situation, unfortunately. We are currently using the machine name of the box, which changes every time it dies and all apps etc. have to be updated with the new machine name. It hurts.
Even if the DNS entry is not cached locally on the machine, it will likely be cached somewhere along the DNS chain between the machine and the name servers, at least for a short while. My understanding is this situation would usually be handled with IP takeover, where you just make the new machine 111.111.1.1.
Probably a question for serverfault.
You're looking for the DNS TTL (Time To Live), I guess. In my opinion, applications may cache the IP for at most the value of the TTL. I'm afraid, however, that some applications/technologies might actually cache it longer (again, in my opinion, completely wrong).
Each machine will cache the IP address.
The length of time it is cached is the TTL (Time To Live). This is a setting on your DNS server; if you set it very low, say 5 minutes, then you should be up and running fairly quickly. A bit of a hack, but it should work.
Yes, the other comments are correct in that what controls this is the DNS TTL set for the hostname database.mywebsite.com.
You'll have to decide the maximum amount of time you're willing to wait, after a failure on your primary address (111.111.1.1), for clients to pick up the switch to the secondary address. Lower settings will give you a quicker recovery time, but will also increase the load and bandwidth on your DNS server, because clients will have to re-query it to refresh their cache more often.
You can run nslookup with the -d option from your command prompt to see what the default and remaining TTL times are for the DNS server you are querying.
%> nslookup -d google.com
You should assume that they are cached, for two reasons not clearly mentioned before:
1- Many "modern" versions of OS families do DNS caching.
2- Many applications do DNS caching or have poor error/failure detection on live connections and/or opening new connections. This would possibly include your database client.
Also, this is probably not well documented. I did some googling, and found this for MySQL:
http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-connecting-connection-string.html#connector-net-programming-connecting-errors
It does not clearly explain its behavior in this regard.
I had a similar issue with a web site that disables the application pool recycling features and runs for weeks on end. Sometimes a clustered SQL Server box would restart, and for some reason my SqlConnections were not reconnecting. I was getting the error:
A network-related or instance-specific
error occurred while establishing a
connection to SQL Server. The server
was not found or was not accessible.
Verify that the instance name is
correct and that SQL Server is
configured to allow remote
connections. (provider: Named Pipes
Provider, error: 40 - Could not open a
connection to SQL Server)
The server was there - and running - in fact, if I just recycled the app pool, the app would work fine - but I don't like recycling app pools!
The connections that were being held in the connection pool were somehow using old connection information, which could have been old IP addresses. This is what seems so similar to the poster's question: it appears to be cached DNS information, because as soon as some sort of cache is cleared, the app works fine.
This is how I solved it - by forcing all of the connections in the pool to be re-created:
Try
    ' Example: SqlDependency.Start, but this could be any SqlConnection.Open call
    Dim result As Boolean = SqlClient.SqlDependency.Start(ConnStr)
Catch sqlex As SqlClient.SqlException
    ' Throw away every pooled connection so the next open builds a fresh one
    SqlClient.SqlConnection.ClearAllPools()
End Try
The code sample is just the boiled-down basics - it should be tweaked for your situation!
The DNS gets cached, but for any server that resolves to the wrong IP address, you can update the HOSTS file of that server and the IP will be picked up immediately. This could be a solution if you have a limited number of servers accessing your database server.
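For example, an entry like this in C:\Windows\System32\drivers\etc\hosts on the affected machine (using the new address from the question) bypasses DNS for that name immediately:

222.222.2.2    database.mywebsite.com

Just remember to remove the entry later, or it will silently override any future DNS change.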
I've got a SQL Server 2000 box that I'd like to put on "the Internet" so that developers could connect remotely without VPN access.
What's the safest way to do so? It might be temporary, e.g. every once in a while, but it's definitely necessary.
Thanks,
Rob
Short answer - don't do this.
Long answer:
Install a good firewall on the box.
Install and run an ssh server on it.
Open only the ssh port.
Your devs can use PuTTY or any other ssh client to "tunnel" the SQL port over the ssh connection.
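A rough sketch of such a tunnel with a command-line ssh client (the user and hostname are placeholders); the developer then points their SQL tools at localhost, port 1433:

ssh -L 1433:localhost:1433 devuser@dbserver.example.com

PuTTY exposes the same thing under Connection > SSH > Tunnels.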
The SAFE thing to do is put it behind a VPN.
Seriously, why would you even consider such a risk?
Read DannySmurf's answer. If the security threat is not your highest concern, then at least try LogMeIn.
First option: I agree, "don't".
Second option: create a web front end on the exposed box and leave SQL non-exposed.
Third option: if you must expose the SQL box, then mandate asymmetric key encryption with all clients, deny all other connections, log clients, and review connectivity logs with alerts for clients not matching the allowed connection specs (stored in an encrypted table on an internally non-exposed server). Be prepared for some enlightening hacker techniques sure to surprise.
-Alek
I accidentally left a SQL Server (port 1433) open on the net for a while, and once I realized it, I was getting something like 100,000 hits per hour from some sort of automated programs (coming from an army of IPs, I believe) trying to break into the server.
Luckily I used very long and complicated passwords... and I don't believe I was ever compromised.