Restrict number of requests from an IP - database

I am writing an application where it is a requirement to restrict the number of logins a user can have from a single IP address (as a way to stop spam).
We can't use a captcha, for various reasons!
The only two ways I could think of to make this work were to either store, in the database, the number of requests coming in from each IP,
OR
to store a tracking cookie that holds the same information.
Now, the downside of the first approach is that there would be too much DB traffic - the application is going to be used by a ton of people.
The downside of storing this info in a cookie is that users can clear their cookies and start fresh again.
I'm looking for suggestions: is there a way to handle both the high DB traffic and the weak guarantees of cookie-based tracking?

You're talking about "logins" and a web application, so you have some sort of session persisted somewhere. When creating those sessions, you need to keep track of the number of active sessions per IP and stop allocating new sessions once that threshold is reached.
Without more specific information about your framework / environment, that's about the best answer anyone can provide.
Also be aware that this approach fails in numerous ways because of NAT (network address translation). For example, our office has exactly one public IP address for X hundred people. The internal network is on private IP space.
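In rough terms, the check at session-creation time looks like this (a minimal Python sketch; the threshold, the in-memory counter and the function names are all illustrative - in practice the counts would live wherever your sessions do, e.g. the database or a cache):

    # Hypothetical per-IP session accounting; in a real app the counter would
    # live in the session store (DB, Redis, ...) rather than in process memory.
    from collections import defaultdict
    from threading import Lock

    MAX_SESSIONS_PER_IP = 5          # illustrative threshold

    _active = defaultdict(int)
    _lock = Lock()

    def try_create_session(ip):
        """Return True if a new session may be allocated for this IP."""
        with _lock:
            if _active[ip] >= MAX_SESSIONS_PER_IP:
                return False
            _active[ip] += 1
            return True

    def end_session(ip):
        """Call this when a session expires or the user logs out."""
        with _lock:
            if _active[ip] > 0:
                _active[ip] -= 1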

If you want to get the IP and store it somewhere, you could use $_SERVER['REMOTE_ADDR'] to get the user's IP, add a field like "ip" to your database, and run a SQL query to check whether that IP has already been used.
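The check itself is just a counting query; here is the same idea sketched in Python with sqlite3 for illustration (the table name, column and limit are placeholders, and the original answer would do the equivalent in PHP):

    # Hypothetical schema: a "logins" table with one row per login attempt.
    import sqlite3

    LIMIT = 3  # illustrative cap on logins per IP

    conn = sqlite3.connect("app.db")
    conn.execute("CREATE TABLE IF NOT EXISTS logins (ip TEXT)")

    def allow_login(ip):
        """Record the attempt and return True while the IP is under the limit."""
        count = conn.execute(
            "SELECT COUNT(*) FROM logins WHERE ip = ?", (ip,)
        ).fetchone()[0]
        if count >= LIMIT:
            return False
        conn.execute("INSERT INTO logins (ip) VALUES (?)", (ip,))
        conn.commit()
        return True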
There are also other ways of tracking, like Flash cookies; most people don't even know they exist, so they wouldn't know how to clear them.

Related

Fastest way to store tiny data on the server

I'm looking for a better, faster way to store data on my web server, at the best speed possible.
My idea is to log the IP address of every incoming request to any website on the server, and if an IP reaches a certain number of requests within a set time, its users are redirected to a page where they need to enter a code to regain access.
I created an Apache module that does just that. It attempts to create files on a ramdisk; however, I constantly run into permission problems, since another module switches users before my module has a chance to run.
Using a physical disk is an option, but it's too slow.
So my only options are as follows:
1. Create folders on the ramdrive for each website so IP addresses can be logged independently.
2. Somehow figure out how to make my Apache module execute its functionality before all other modules.
3. Allocate a huge amount of RAM and store everything in it.
If I choose option #2, then I'll just keep beating around the bush, as I have already attempted that.
If I choose option #1, then I might need a lot more RAM, since tons of duplicate IP addresses would end up stored across several folders on the ramdrive.
If I choose option #3, then the Apache module will have to constantly search through the allocated RAM to find an IP address, and searching takes time.
People say that memory access is faster than file access, but I'm not sure whether direct memory access via malloc is actually faster than storing data on a RAM drive.
The reason I expect to collect a lot of IP addresses is that I want to block script kiddies who constantly hit my server at a very high rate.
So what I'm asking is: what is the best way to store my data, and why?
You can use a hashmap instead of a huge block of raw RAM. It will be pretty fast.
Maybe use a separate hashmap for each website, or use a composite key like string_hash(website_name) + (int_hash(ip) << 32).
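To illustrate the shape of that data structure (Python here purely for brevity - inside an Apache module it would be a plain in-process hash table in C; the window and threshold are placeholders):

    # One hashmap keyed by (website, ip) - same spirit as
    # string_hash(website_name) + (int_hash(ip) << 32).
    from collections import defaultdict
    import time

    WINDOW_SECONDS = 60     # illustrative time window
    MAX_HITS = 100          # illustrative request threshold

    hits = defaultdict(list)    # (website, ip) -> request timestamps

    def should_challenge(website, ip):
        """Record a hit; return True if this IP should see the code page."""
        now = time.time()
        key = (website, ip)
        hits[key] = [t for t in hits[key] if now - t < WINDOW_SECONDS]
        hits[key].append(now)
        return len(hits[key]) > MAX_HITS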
If the problem is with permissions, why not solve it at that level? Use a common user account or group. Or make everything on the RAM disk world readable/writable.
If you want to solve it at the Apache level, you might want to look into mod_security and mod_evasive.
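If mod_evasive fits, the per-IP thresholds are set with a handful of directives, roughly like this (the numbers are illustrative; check the README shipped with your version for the exact directive names):

    <IfModule mod_evasive20.c>
        # Max requests for the same page per DOSPageInterval seconds.
        DOSPageCount        5
        DOSPageInterval     1
        # Max requests for the whole site per DOSSiteInterval seconds.
        DOSSiteCount        50
        DOSSiteInterval     1
        # How long (in seconds) an offending IP stays blocked.
        DOSBlockingPeriod   60
    </IfModule>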

Warn certain users about host down, but not service states

My boss is starting to complain that he gets too many emails from my Icinga/Nagios instance (and so do I, to be honest), and he doesn't need to know everything; neither does development.
So what I want to do is limit the amount of email that gets sent out.
I started by removing contact_groups from hosts and instead applying them to the individual services people care about - all well and good.
But for certain hosts I would like notifications to be sent out when the host goes down, but not for the services defined on it. How would I go about doing that?
TL;DR: how do I make Nagios email a user about a host being down, but not about the services on that host?
For those users who don't need all of the details, look at the service_notifications_enabled contact config option. Disabling it will give them host notifications only. You can do all kinds of things to fine-tune your notifications with Nagios, so don't waste them on people who don't need them; otherwise they'll just get filtered into a mailbox that no one ever reads.
http://nagios.sourceforge.net/docs/3_0/objectdefinitions.html#contact
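For example, a contact defined roughly like this (a sketch - the notification commands and time periods are the stock sample-config ones and may differ in your setup) will get host alerts only:

    define contact{
        contact_name                    boss
        alias                           The Boss
        email                           boss@example.com
        host_notifications_enabled      1
        service_notifications_enabled   0
        host_notification_period        24x7
        service_notification_period     24x7
        host_notification_options       d,u,r
        service_notification_options    n
        host_notification_commands      notify-host-by-email
        service_notification_commands   notify-service-by-email
        }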

Approaches to scan and fetch all DNS entries

I'd like to conduct an experiment, and I need a full database of all DNS entries on the Internet.
Is it practical to scan the Internet and fetch all DNS entries?
What is the limitation: storage, time or network bandwidth?
Any good approaches to start with?
(I can always brute-force scan the IP space and do reverse DNS lookups, but I guess that's not the most efficient way to do it.)
Downloading databases like RIPE's or ARIN's will not get you the reverse DNS entries you want. In fact, you'll only get the Autonomous Systems and the DNS servers resolving these ranges. Nothing else. Check this one: ftp://ftp.ripe.net/ripe/dbase/ripe.db.gz
Reverse DNS queries will get you only a fraction of all the DNS entries. In fact, no one can have them all, as most domains don't accept AXFR requests, and AXFR could even be considered illegal in some countries. To get access to the complete list of .com/.net/.org domain names you'd have to be ICANN, or maybe an ICANN reseller, but you'll never get the other TLDs that aren't publicly available (several countries).
So the best possible approach would be to brute-force reverse IP resolution for every address + become an Internet giant like Google so you can set up your own public DNS servers + try to perform an AXFR request on every domain name you're able to detect.
Mixing all these options is the only way to get a significant portion of all the DNS entries, but never 100% - probably not more than 5 to 10%. Forget about brute-forcing whois servers to get the list of domain names; it's forbidden by their terms and conditions.
We're brute-forcing reverse IPv4 resolution right now, because it's the only legal way to do it without being Google. We started two weeks ago.
After two weeks of tuning, we've covered 20% of the Internet. We've developed a Python script that launches thousands of threads scanning /24 ranges in parallel, from several different nodes.
It's way faster than nmap -sL, but not as reliable, so we'll need a "second pass" to fill in the gaps (around 85% of the IPs got resolved on the first attempt). Regular rescanning must be performed to obtain a complete and consistent database.
Right now we have several servers, each pushing 2 Mbps of DNS queries (from 300 to 4,000 queries/second per node, mostly depending on the RTT between our servers and the remote DNS servers).
We expect to complete the first pass over the whole IPv4 space in around 30 days.
The text files where we store the preliminary results average 3M entries per class A range (e.g. 111.0.0.0/8). These files are just "IP\tname\n" lines, and we only store IPs that resolved.
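For reference, the core of each scanning worker is tiny - something like this stripped-down sketch (Python; the thread count, error handling and file naming are illustrative and nowhere near the tuned setup described above):

    # Reverse-DNS sweep of one /24, writing "IP\tname" lines for addresses
    # that have a PTR record. Thread count is illustrative.
    import socket
    from concurrent.futures import ThreadPoolExecutor

    def resolve(ip):
        try:
            return "%s\t%s" % (ip, socket.gethostbyaddr(ip)[0])
        except OSError:            # no PTR record, refusal, timeout, ...
            return None

    def scan_24(prefix, out_file):          # e.g. prefix = "111.0.0"
        ips = ["%s.%d" % (prefix, i) for i in range(256)]
        with ThreadPoolExecutor(max_workers=64) as pool:
            for line in pool.map(resolve, ips):
                if line:
                    out_file.write(line + "\n")

    with open("111.txt", "a") as out:
        scan_24("111.0.0", out)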
We needed to run a DNS server on every node, because we were hammering our provider's DNS service and it blocked us. In fact, we did a bit of benchmarking on different DNS servers: forget about BIND, it's too heavy and you'll hardly get more than 300 resolutions/second.
Once we finish the scan we'll publish an article and share the database :)
Follow me on Twitter: #kaperuzito
One conclusion we've already reached is that people should think twice about the names they put in their DNS PTR records. You shouldn't name an IP "payroll", "ldap", "intranet", "test", "sql", "VPN" and so on... and there are millions of those :(

Is it a good idea to use Database Mail as an email relay server?

One of our problems is that our outbound email server sucks sometimes. Users will trigger an email in our application, and the application can take on the order of 30 seconds to actually send it. Let's make it even worse and admit that we're not even doing this on a background thread, so the user is completely blocked during this time.
SQL Server Database Mail has been proposed as a solution to this problem, since it basically implements a message queue and is physically closer and far more responsive than our third-party email host. It's also admittedly really easy for us to implement, since it's just replacing one call to SmtpClient.Send with the execution of a stored procedure.
Most of our application email contains PDFs, XLSs, and so forth, and I've seen the size of these attachments reach as high as 20 MB.
Using Database Mail to handle all of our application email smells bad to me, but I'm having a hard time talking anyone out of it given the extremely low cost of implementation. Our production database server is way too powerful, so I'm not sure that it couldn't handle the load, either. Any ideas or safer alternatives?
All you have to do is run it through an SMTP server. If you're planning on sending out large amounts of mail, then you'll have to not only load-balance the servers (and the DNS servers, if you're planning on sending out 100K+ mails at a time) but also make sure your outbound email servers have the proper A records registered in DNS to prevent bounce-backs.
It's a cheap solution (minus the load balancer costs).
Yes, dual-home the server for your internal LAN and the Internet, and make sure it's an outbound-only server. Start out with one SMTP server, and if you hit bottlenecks right off the bat, look to see whether they're memory, disk, network, or load related. If it's load related, it may be time to look at load balancing. If it's memory related, throw more memory at it. If it's disk related, throw a RAID 0+1 array at it. If it's network related, use a bigger pipe.

Encrypting mail addresses - a funny design problem

In my web project, I am storing mail addresses. These addresses may be used by the system to send mail to the recipients. It is also important to say that these mail addresses have an expiration time.
But the critical point is trust: for this particular service, people must be sure that the mail addresses will not be given to somebody else (especially to the authorities, for example).
To summarize:
the system has to "know" the mail address;
the webmaster (or anybody else) has to be unable to recover the true mail addresses.
That way, the webmaster will not be able to give the information away (even under duress :)).
Intermediate solution: I already know how to achieve this once the information has expired. E.g. the mail address is encrypted with GnuPG (GPG/PGP algorithms). The system (or anybody) can decrypt it as long as it has the password. But as soon as the mail address has expired, we revoke the secret key -> nobody can decrypt the mail address anymore.
But this raises a performance problem (creating the private keys is expensive)...
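Here is roughly what I have in mind, sketched with the python-gnupg wrapper (the key parameters, paths and passphrases are placeholders; deleting the secret key is the "expiration"):

    import gnupg

    gpg = gnupg.GPG(gnupghome="/var/app/gnupg")     # illustrative path

    # One key pair per expiration batch; generating it is the expensive part.
    key = gpg.gen_key(gpg.gen_key_input(
        name_email="batch@example.invalid",
        passphrase="batch-secret"))

    # Store only the ciphertext of the address.
    blob = str(gpg.encrypt("user@example.com", key.fingerprint))

    # While the secret key exists, the system can still recover the address.
    address = str(gpg.decrypt(blob, passphrase="batch-secret"))

    # On expiration, destroy the secret key (e.g. gpg --delete-secret-keys
    # <fingerprint>); after that the stored blob cannot be decrypted anymore.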
Any help would be most appreciated!
What you're asking for is impossible. Even supposing you could devise a system whereby the system can send emails without being able to reveal them to an administrator (and you can't), an attacker could simply start a mail run and capture the outgoing emails and extract the addresses before they're sent.
If you want to 'expire' email addresses, you should simply delete the records, then (if you're paranoid), compact the database and erase the free space on the disk.

Resources