SOLR/Lucene Server Location

I am pondering the question of proper location of the SOLR server.
This is usually what we have today:
Server Side:
Node or RoR or IIS
Client:
Single Page App or pages rendered by the server.
DB:
RDBMS - MS SQL, PostgreSQL, MySQL, or some other relational database.
Thinking about where to put a SOLR server: I am positive it should not be accessible from the internet, let alone from the client. I think it should sit behind the main server, and the main server should send queries to SOLR and return the results to the client. Additionally, place SOLR behind the firewall and whitelist the main server.
Is this good thinking, or is there something else entirely that I am not seeing?

As the docs say:
First and foremost, Solr does not concern itself with security either at the document level or the communication level.
You are right: you should never have a publicly visible Solr server, for this reason. In our setup at work we have it firewalled so that only our main webserver can access it (i.e. using whitelisting). As part of our API, requests for data therefore must go through the webserver, allowing us to authenticate users as well as not give them free rein to execute whatever they want.
If you want to use the web client, you can always temporarily whitelist your IP and remove it afterwards. While it is possible for an attacker to spoof your IP and thus gain access, they would have to be very determined and explicitly targeting your application, would have to know both the whitelisted IP and the Solr IP, and would have to know all this during the short window it is whitelisted. Such a setup is therefore secure enough for your needs.
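As a concrete illustration of that pass-through, here is a minimal sketch assuming an Express webserver in front of Solr; the internal hostname, core name, and route are all assumptions:

```typescript
import express from "express";

const app = express();

// Solr is firewalled so that only this host can reach it; clients never
// talk to it directly. Hostname and core name are placeholders.
const SOLR_URL = "http://solr.internal:8983/solr/products/select";

app.get("/api/search", async (req, res) => {
  const q = String(req.query.q ?? "").trim();
  if (!q) return res.status(400).json({ error: "missing query" });

  // Build the Solr query server-side, so clients can only supply the text
  // of the search, not arbitrary Solr parameters (fq, fl, shards, ...).
  const params = new URLSearchParams({ q, rows: "10", wt: "json" });
  const solrRes = await fetch(`${SOLR_URL}?${params}`);
  if (!solrRes.ok) return res.status(502).json({ error: "search backend error" });

  const data = await solrRes.json();
  res.json(data.response); // expose only the documents, not Solr internals
});

app.listen(3000);
```

User authentication would slot in as middleware before the route, which is exactly the point of funnelling everything through the webserver.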

Related

Is it possible to hijack the result of a query from an app coming from a public network?

In short, we made an app that interacts with a server to fetch some data.
But now we are thinking about security, and here is our question:
Can a man-in-the-middle attack happen? Can someone use something like Burp Suite or Wireshark to analyze the queries that come and go?
Any suggestions will be greatly appreciated, thanks.
It depends on who you mean.
If the app uses https to communicate with the server, a 3rd party ("somebody else" other than the user and the server) will not be able to see or modify traffic in a MitM attack. That's why you need https.
However, the user themselves (i.e. anybody with administrative access to the client) will be able to do so. If the question is about protecting the app from its legitimate users by hiding the traffic in some way, that's not possible. Even with https, the user of the client device can choose to trust any server certificate, for example the one presented by a proxy like Burp, so they will be able to see and analyze their own traffic.
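To illustrate the third-party case, a minimal Node/TypeScript sketch; the URL is a placeholder. With certificate validation left at its default, a MitM proxy presenting an untrusted certificate makes the request fail instead of silently exposing the traffic:

```typescript
import https from "https";

// rejectUnauthorized: true is the default; it is shown explicitly here.
// Never set it to false in production -- that reopens the MitM hole.
https
  .get("https://api.example.com/data", { rejectUnauthorized: true }, (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => console.log(res.statusCode, body.length, "bytes"));
  })
  .on("error", (err) => {
    // A MitM with a forged certificate surfaces here as a TLS error,
    // e.g. SELF_SIGNED_CERT_IN_CHAIN or UNABLE_TO_VERIFY_LEAF_SIGNATURE.
    console.error("TLS validation failed:", err.message);
  });
```

As the answer notes, this protects against third parties only: a user who controls the client can install Burp's CA certificate as trusted, and validation will then pass.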

Limit the usage of Kerberos TGTs

I'm pretty new to Kerberos. I'm testing the Single Sign-On feature using Kerberos. The environment: Windows clients (with Active Directory authentication) connecting to an Apache server running on a Linux machine. The called CGI script (in Perl) connects to a DB server using the forwarded user TGT. Everything works fine (I have the principals, the keytab files, the config files, and the result from the DB server :) ). So, if as win_usr_a on the Windows side I launch my CGI request, the CGI script connects to the remote DB, queries select user from dual, and gets back win_usr_a@EXAMPLE.COM.
I have only one issue I'd like to solve. Currently the credential cache is stored as FILE:.... On the intermediate Apache server, the user running Apache gets the forwarded TGTs of all authenticated users (as it can see all the credential caches), and while those TGTs have not expired it can request tickets for any service principal on behalf of those users.
I know that hosts are considered trusted in Kerberos by definition, but I would be happy if I could limit the usability of the forwarded TGTs. For example, can I configure Active Directory so that a forwarded TGT is valid only for requesting a given service principal? And/or is there a way to make a forwarded TGT usable only once, i.e. after requesting any service principal it becomes invalid? Or is there a way the CGI script could detect whether the forwarded TGT was used by someone else (maybe by checking a usage counter)?
For now I have only one mitigation: I can set the lifetime of the forwarded TGT to 2 seconds and initiate a kdestroy in the CGI script once the DB connection is established (the apache user can execute the CGI script but cannot modify its code). Can I do a bit more?
The credential caches should be hidden somehow. I think defining the credential cache as API: would be nice, but this is only defined for Windows. On Linux, maybe KEYRING:process:name or MEMORY: would be a better solution, as those are local to the current process and destroyed when the process exits. As far as I know, Apache creates a new process for each connection, so this may work. Maybe KEYRING:thread:name is the solution? But - according to the thread-keyring(7) man page - it is not inherited by clone and is cleared by the execve syscall. So if, e.g., Perl is invoked via execve, it will not get the credential cache. Maybe mod_perl + KEYRING:thread:name?
Any idea would be appreciated! Thanks in advance!
The short answer is that Kerberos itself does not provide any mechanism to limit the scope of who can use it if the client happens to have all the necessary bits at a given point in time. Once you have a usable TGT, you have a usable TGT, and can do with it what you like. This is a fundamentally flawed design as far as security concerns go.
Windows refers to this as unconstrained delegation, and specifically has a solution for this through a Kerberos extension called [MS-SFU] which is more broadly referred to as Constrained Delegation.
The gist of the protocol is that you send a regular service ticket (without an attached TGT) to the server (Apache), and the server is enlightened enough to know that it can exchange that service ticket to itself for a service ticket to a delegated server (the DB) from Active Directory. The server then uses the new service ticket to authenticate to the DB, and the DB sees a service ticket for win_usr_a despite its being sent by Apache.
The trick, of course, is that enlightenment bit. Without knowing more about the specifics of how authentication happens in your CGI, it's impossible to say whether what you're doing supports [MS-SFU].
Quoting a previous answer of mine (to a different question, focused on "race conditions" when updating the cache):
If multiple processes create tickets independently, then they have no reason to use the same credentials cache. In the worst case they would even use different principals, and the side effects would be... interesting.
Solution: change the environment of each process so that KRB5CCNAME points to a specific file -- and preferably, in an application-specific directory.
If your focus is on securing the credentials, then go one step further and don't use a cache at all. Modify your client app so that it creates the TGT and service tickets on the fly and keeps them private.
Note that Java never publishes anything to the Kerberos cache; it may either read from the cache or bypass it altogether, depending on the JAAS config. Too bad the Java implementation of Kerberos is limited and rather brittle, cf. https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jdk_versions.html and https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jaas.html
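A rough sketch of that isolation idea in Node/TypeScript, assuming a worker that runs one delegated job per request; the db-client binary and its flags are hypothetical stand-ins for whatever consumes the forwarded TGT:

```typescript
import { spawnSync } from "child_process";
import { mkdtempSync, rmSync } from "fs";
import { tmpdir } from "os";
import { join } from "path";

// Give each request a private credential cache in a fresh mode-0700
// directory instead of a shared FILE: cache readable by the apache user.
const dir = mkdtempSync(join(tmpdir(), "krb5-"));
const cache = `FILE:${join(dir, "ccache")}`;

try {
  // The job only sees its own cache via KRB5CCNAME.
  // "db-client" is a placeholder for the real consumer of the TGT.
  spawnSync("db-client", ["--query", "select user from dual"], {
    env: { ...process.env, KRB5CCNAME: cache },
    stdio: "inherit",
  });
} finally {
  // Mirror the kdestroy-after-connect workaround from the question:
  // wipe the tickets as soon as the work is done.
  spawnSync("kdestroy", ["-c", cache]);
  rmSync(dir, { recursive: true, force: true });
}
```

This narrows the exposure window but does not change the underlying trust model: anyone who can read the cache while it exists can still use the TGT.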

How to make MongoDB (local database) also accessible via the internet at the same time

My planned setup is to make the local database accessible to the local computers, because the heavy manipulation of data needs a fast response, but at the same time I want to access it via the internet when I'm away from the local network. Is this possible?
Currently using the MERN stack.
I tried mLab, but the data response is pretty slow.
Thank you in advance.
It looks like you're going down a dangerous path. By default, MongoDB binds to localhost and uses the standard port 27017. To make it reachable online you would need to:
run without authentication (note that it is not enabled by default)
change or remove the bindIp option
ensure the port is not blocked by the firewall
This can all be done in the config file; a sketch follows below.
https://docs.mongodb.com/manual/reference/configuration-options/
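For reference, a minimal mongod.conf sketch (YAML) going the safe direction instead: keep the bind address restricted and authorization on. The second bindIp address is an assumption standing in for a trusted LAN interface:

```yaml
# mongod.conf -- illustrative values only
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.1.10   # localhost plus one trusted LAN interface, never 0.0.0.0
security:
  authorization: enabled            # require users and roles even on the LAN
```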
However, what you really want to do is probably to create routes within Express so that users can communicate with your Mongo in a structured and safe manner; a minimal sketch follows below. More information here: http://mean.io/2017/10/31/getting-started-mean-io/
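A rough sketch of such a route, assuming the official mongodb Node driver and a hypothetical items collection; the database and collection names are placeholders:

```typescript
import express from "express";
import { MongoClient, ObjectId } from "mongodb";

const app = express();

// mongod stays bound to localhost; only this Express app talks to it,
// so internet clients never reach the database directly.
const client = new MongoClient("mongodb://127.0.0.1:27017");
const items = client.db("mydb").collection("items");

// A curated read route: clients get exactly this query, not raw DB access.
app.get("/api/items/:id", async (req, res) => {
  let id: ObjectId;
  try {
    id = new ObjectId(req.params.id); // reject malformed ids up front
  } catch {
    return res.status(400).json({ error: "invalid id" });
  }
  const doc = await items.findOne({ _id: id });
  if (!doc) return res.status(404).json({ error: "not found" });
  res.json(doc);
});

client.connect().then(() => app.listen(3000));
```

Authentication, validation, and write routes would follow the same pattern: every operation the outside world may perform is an explicit Express route.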

expose local webserver behind dynamic IP

I have a simple webserver bound to 0.0.0.0:3000 on my machine, which works as intended on the local network. By local network I mean that if my mobile or any other device is on the same network, it can access the local webserver by going to the IP assigned to my machine plus port 3000, e.g. 192.168.1.4:3000.
Now I have to expose it to the internet, but not through some sort of third-party application like ngrok, localtunnel, or browserSync. I know these applications work perfectly, but since this is my own pet project for controlling home appliances, I don't want to rely on the availability of third-party services. So the current state is that I cannot control it through the internet. Keep in mind I don't have a static IP; otherwise this would have been easier.
I already have a VPS and a domain name assigned to it. I can send my currently allocated IP address (since it is dynamic) to my server using getifaddrs and keep track of it. But how do I expose my local server to the internet through it? Those third-party applications assign some sort of subdomain to each exposed server, and I'll be able to assign subdomains too, but I still don't see a way to expose the local webserver. Any help would be appreciated, thank you :)
Step one, you need to expose your webserver at your internet access router.
Typically this requires you to configure port forwarding for (in your case) port 3000.
With this done, any client could access your service via (current external dynamic ip):3000
Step two, you need to dynamically map a fixed DNS name to your current dynamic IP. There are of course third-party services (such as DynDNS) that would help you map yourfavoritename.dyndns.org to that ever-changing IP address.
If you want to do the latter without a third party, you need a static (web) server somewhere and could proceed as follows:
Clients visit http://www.yourstaticserver.example/ and that server redirects them to (current dynamic ip):3000.
Of course, for this to happen, your static server needs to know the dynamic IP and needs to find out about changes to it.
To this end, you could have your internal server contact the static server at a regular interval (say, once a minute), e.g. have it access http://www.yourstaticserver.example/some-secret-special-page, and have the static server always store the REMOTE_ADDR of such a request (preferably with some authorization!) for its future redirections.
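A rough sketch of both halves in Node/TypeScript; the hostnames, the secret path, and the shared token are placeholders, and a real deployment should use proper authentication rather than a query-string secret:

```typescript
import express from "express";

// --- On the static VPS: remember the last reported IP, redirect visitors.
const app = express();
let lastKnownIp: string | null = null;

app.get("/some-secret-special-page", (req, res) => {
  if (req.query.token !== "shared-secret") return res.sendStatus(403);
  lastKnownIp = req.ip ?? null; // REMOTE_ADDR of the home server's heartbeat
  res.sendStatus(204);
});

app.get("/", (req, res) => {
  if (!lastKnownIp) return res.sendStatus(503);
  res.redirect(`http://${lastKnownIp}:3000/`);
});

app.listen(80);

// --- On the home server: heartbeat once a minute so the VPS learns the
// current dynamic IP from the source address of the request itself.
setInterval(() => {
  fetch("http://www.yourstaticserver.example/some-secret-special-page?token=shared-secret")
    .catch(() => { /* ignore transient network errors */ });
}, 60_000);
```

Note that the redirect exposes the raw dynamic IP to visitors; a reverse proxy on the VPS would hide it, at the cost of routing all traffic through the VPS.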
Actually, there is a step zero before step one: Be aware that exposing your server to the Internet means that you expose your server to the Internet. So I hope you have invested enough thought into security.

silverlight accept invalid certificate

I'm doing https web requests in Silverlight using the "WebRequest"/"WebResponse" framework classes.
The problem is: I make a request to a URL like https://12.34.56.78
and receive back a VeriSign-signed certificate whose subject is a domain name like www.mydomain.com.
Hence this results in a remote certificate mismatch error.
First question: can I somehow accept the invalid certificate and get the WebResponse content? (Even if it involves using other libraries, I'm open to it.)
Additional details (for those interested in why I need this scenario):
I'm trying to give a client access to a Silverlight app deployed on a test server.
The client accesses the Silverlight app at: www.mydomain.com/app
Then I make some REST requests to: https://xx.mydomain.com
The problem is I don't want to make requests to https://xx.mydomain.com, since that is our production server. For this reason I use https://12.34.56.78 instead of https://xx.mydomain.com.
The client has some firewalls/proxies, and if I simply change his hosts file to map xx.mydomain.com to 12.34.56.78, web requests don't resolve to the mapped IP.
I say this because on his network web requests fail if I try that, while on my network the hosts-file change works without problems.
UPDATE: Fixed the problem by deploying test releases to an alternative address, https://yy.domain.com, and allowing the user to configure, for test purposes, the base URL for my requests to be https://yy.domain.com.
Using a certificate that contained the IP in the subject or a subject alternative name would probably have worked too, but it would have cost money to get issued by a certified provider, and would not be as good because IPs might change.
After doing more research, it looks like Microsoft won't add this feature any time soon, unless there's a scenario beyond testing/debugging uses.
See: http://connect.microsoft.com/VisualStudio/feedback/details/368047/add-system-net-servicepointmanager-servercertificatevalidationcallback-property
