Is there a simple way to determine what database is in use behind a website from an external HTTP request? i.e., I make an HTTP request and get back whatever data the webserver returns; can I inspect any of that and reliably determine the DB in use? I am thinking not, but figured I would ask this group.
No. The same answer could come from a static file, a SQL database, or a Martian telepath.
No, and for a good reason: if there were, it would be a security hole, unless it is part of the application's functionality.
For most websites the answer is no; however, you may find security holes which reveal this information. For example, if the site isn't coded against SQL injection attacks, you may be able to extract the database version. Try entering the following as your user name:
'; select version();
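To see why this works: if, hypothetically, the login page builds its query by naive string concatenation, such as SELECT * FROM users WHERE name = '<input>', the input above turns the statement into:
SELECT * FROM users WHERE name = ''; select version();
On MySQL and PostgreSQL, version() returns the server's version string, which may then leak through an error message or the rendered page. (Whether the second statement actually executes depends on the driver allowing multiple statements per query.)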
On shared hosting systems, there often isn't a firewall protecting the database from external connections.
Try the following:
telnet localhost 3306
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
5
5.0.51a—Bjb-W
This tells you that the server is running MySQL version 5.0.51a (the junk after the version string is binary handshake data). MSSQL and Sybase also identify their version number before the client attempts to log in.
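If reading a raw banner over telnet is awkward, a service scan automates the same fingerprinting; a sketch, assuming nmap is installed and 3306 is the exposed port:
nmap -sV -p 3306 example.com
The -sV option probes open ports and reports the detected service and version, e.g. mysql MySQL 5.0.51a.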
Probably the easiest way is just to ask the webmaster. If you're not a hacker, and the site isn't a bank, they will likely tell you.
I'm trying to diagnose some performance issues, so I have the Datomic transactor running locally, backed by a local instance of DynamoDB. What I can't figure out is how to populate it from a backup of our primary Datomic environment. I know the basic command is:
>datomic restore-db s3://<BUCKET> datomic:ddb://<REGION>/<DB-NAME>
but how do I tell Datomic to use the local DynamoDB? It seems to only accept the valid AWS regions for REGION. I've also tried using datomic:ddb-local as the protocol, but no luck there either.
How do I form the target URI? Or is this even possible?
You should be able to use a ddb-local URI as indicated here: http://docs.datomic.com/storage.html#dynamodb-local
It will be something like: datomic:ddb-local://localhost:8000/my-table/my-db-name?aws_access_key_id=ABC&aws_secret_key=DEF, assuming you're running ddb-local at localhost on port 8000.
Note that the ddb-local protocol does require an access key and secret, even though they are ignored.
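Putting that together with the restore command from the question, the full invocation would look something like this (the bucket, table, and db names are placeholders, and the key/secret are the ignored dummy values mentioned above):
datomic restore-db s3://<BUCKET> "datomic:ddb-local://localhost:8000/my-table/my-db-name?aws_access_key_id=ABC&aws_secret_key=DEF"
Quoting the target URI keeps the shell from interpreting the & in the query string.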
Best,
Marshall
I've been tasked with implementing a Single Sign-On solution in an environment which uses Kerberos with an Active Directory server for the actual storing of the users and their groups. I understand that Kerberos does not support privileges/groups, and that this is why it must be backed by a directory such as LDAP or Active Directory. This is all fine and clear, but what I don't quite understand is why you would still use Kerberos when you could simply connect to LDAP or Active Directory directly and drop the whole overhead of yet another server.
What am I missing here...? Please advise! Many thanks in advance!
There is no overhead for another server: Active Directory combines all the necessary services in one product.
Kerberos has tremendous benefits:
One login for all systems
Transparent subsequent login
Encrypted ticket exchange; even full transport encryption is possible
Delegation of credentials is supported out of the box
Implemented and well documented in Unix and Windows for almost two decades
I have used Kerberos via AD for years, in Java and C, on Unix and Windows, with great success. I wouldn't use anything else in a corporate environment.
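For what it's worth, the single-login behavior is easy to see with the standard MIT Kerberos client tools (the principal and realm below are placeholders):
kinit jdoe@EXAMPLE.COM
klist
kinit prompts for the password once and caches a ticket-granting ticket; klist shows the cached tickets. From then on, Kerberos-enabled services (SSH, SPNEGO-protected web apps, and so on) authenticate you from the ticket cache with no further password prompts.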
I am trying to deploy my WPF application to some users who are outside of our corporate network. Everything works great on our LAN, but I can't get the updates working when I turn on security, because the user is never prompted for their login details.
Does anyone know of a way to secure my ClickOnce files so that only my users can access it? I am not allowed to put this software up without it being secure.
Any help much appreciated.
There is no way to secure your files, as the ClickOnce runtime will blindly return to its deployment point and never keeps hold of the user's original credentials. I have heard of ways of getting round this using various techniques, but it's a fair bit of work.
This might be of use: www.clickoncerevolution.com.
You could also always consider an MSI installer but you won't get the automatic updates.
Marty
Internally, you can restrict access to the files on the webserver. Externally, there's not much you can do easily.
We handle this by having our customers log in when they run the application, and we verify their credentials against backend services (running on Azure). So they can't run it unless they can log in.
If you don't want to do that, I'll share this article with you. It shows how to serve up your ClickOnce files from a SQL Server database by intercepting the requests to the webserver and responding to them. If you're smarter with web applications than I am (not a high bar, mind you), maybe you can figure out how to intercept the request and ask for authentication credentials at that point.
And here's an article from CodeProject where they show one solution for what you're trying to do.
I've got a SOLR instance running behind a firewall. I'm about to put up another instance which will not be firewalled. However, SOLR appears to only support pull replication, not push replication.
What are my options with regard to maintaining the same level of security? I'd rather not open too many ports in the firewall. Would HTTP over an SSH tunnel be the best option? Would it also be possible to just replicate the index files using plain old rsync (not using any SOLR-specific features), or would this break something?
Would it also be possible to just replicate the index files using plain old rsync
Solr actually supports this kind of distribution with its snappuller mechanism, documented here: http://wiki.apache.org/solr/CollectionDistribution
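If you do go the plain-rsync route instead, a one-off copy might look like this (host and paths are placeholders):
rsync -av --delete hostA:/var/solr/data/index/ /var/solr/data/index/
The caveat is that the index must not change mid-copy; the snappuller scripts handle this by pulling from a hard-link snapshot created by snapshooter, so rsyncing a live index directory directly risks an inconsistent copy.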
I would open a port and specify the IP address of the slave, and just use ordinary HTTP-based replication; that would be quite secure, I think, and easier to maintain probably. I know it's not exactly where you were angling, but it's what I'd recommend.
I'm answering my own question, as the solution I went for is different from what the two other answers suggested. I ended up using an SSH tunnel for the HTTP traffic: SSH redirects all traffic hitting port 8080 on host A to port 8080 on host B through the tunnel.
The solution appears to be working fine. I'm using a script which validates the tunnel every 5 minutes or so.
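For reference, the tunnel itself is a one-liner; a sketch with placeholder hostnames, forwarding port 8080 on host A to port 8080 on host B:
ssh -N -L 8080:localhost:8080 user@hostB
-L binds the local port and forwards each connection through the tunnel to the given address as resolved on host B; -N skips running a remote command, since the connection exists only for forwarding. A keep-alive setting such as ServerAliveInterval, or a wrapper like autossh, can stand in for the validation script.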
You could use HTTP basic authentication (see https://wiki.apache.org/solr/SolrReplication#Slave) but since the password will be passed in plain text, an SSH tunnel or secure VPN would also be required in order to deter more determined attackers.
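To see why the extra layer matters, note that Basic credentials are merely base64-encoded, never encrypted. With placeholder credentials, and assuming the replication handler is mounted at /solr/replication:
curl -v -u solradmin:secret http://master:8080/solr/replication?command=indexversion
the verbose output includes an Authorization: Basic header whose value decodes straight back to solradmin:secret for anyone sniffing the connection.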
I'll be going for a VPN solution to start with and consider an SSH tunnel before moving to production if we feel we are unable to place sufficient trust in our internal networks.
I've got a website that runs on a shared hosting environment, using ASP.net 2.0 (C#) and MS SQL Server 2005. I've recently been asked if I can integrate my website with a piece of third party desktop software that uses the Access runtime as its database (transparent to the end user).
Primarily I want to be able to offer users of my website the option of exporting their data into the Access database on their local machine. The data schemas match sufficiently; the question is how to actually do this, in the simplest way possible for the user.
Simply having a webpage update the local Access database isn't possible due to the obvious security restrictions. I've considered asking them to upload the Access database to the server, so I can migrate the data and then allow them to download it again; however, the competency of the users of this software is such that even locating the Access database, let alone uploading and downloading it from the website, might be too complicated.
I've also considered whether Adobe AIR or Silverlight could help here, but I don't know them well enough to be sure. Similarly, I'm assuming another exe could be written to perform this task that the user could simply download and run; however, my experience is in web development, not program development, so this isn't a 100% certainty for me, or an ideal development option.
So, can this be done? If so, what technique can achieve it, with the stated aims being ease of use for the end user, followed by ease of development for someone with web development as their main skill? Many thanks!
You may find this answer of interest: Best way to stream files in ASP.NET
It is about transferring a file from the server. You could save the data as Excel or CSV and use that to update Access.
Instead of trying to do this in a web page, you might just expose some views from your SQL Server to some client-specific logins.
Then, within the Access application, allow them to link to your SQL Server. You might even provide an Access application for getting the data from your site and stuffing it into their local Access database.
At my work we have done something similar, transparent to the user, by creating an ActiveX control. The problem is that you are limiting the users to Internet Explorer.
I think the best way to achieve what you are trying to do is by installing a service on the client's computer. If creating a service is beyond your experience, you can post a project on a site like oDesk and find somebody who can help with the development for the money you are willing to pay to complete your project.
Good Luck.