How to control a web application through email? Or how to run a PHP script by sending an email? - cakephp

I want to run a web application on PHP and MySQL, using the CakePHP framework. And to keep the threshold for using the site as low as possible, I don't want to use the standard login with username/password. (And I don't want to hassle my users with something like OpenID either; that comes down to the type of user I'm targeting.)
So I'm thinking that users shall be able to log in by sending an email to login@domain.com, with no subject or content required. In reply, they will get an email with a link that will log them in (it will contain a hash). I will also let users perform some actions without even visiting the site at all: just send an email to command@domain.com and the command will be carried out. I will assume that the users and their email providers take care of their email account security, and as such there is no need for it on my site.
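(For concreteness, here's a rough sketch of how I imagine the login link being generated; the table name, the $db PDO handle, $senderEmail, and the URL are just placeholders, not part of the concept itself:)
<?php
// Generate an unguessable one-time token for the sender's account.
$hash = bin2hex(random_bytes(16));

// Store it with an expiry so the link only works briefly.
$stmt = $db->prepare(
    'INSERT INTO login_tokens (user_email, hash, expires)
     VALUES (?, ?, DATE_ADD(NOW(), INTERVAL 15 MINUTE))');
$stmt->execute([$senderEmail, $hash]);

// This is the link that gets mailed back to the user.
$link = 'http://domain.com/users/login/' . $hash;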
Now, how do I go from an email being sent to an account that no human reads, to some script being fired off (basically a "dummy browser client" calls a URL, and CakePHP will take care of the rest)?
I have never used a cron job before, but I do think I understand their purpose and how they generally work. I cannot have the script be called by random people visiting the site, as that solution won't work for several reasons. I would like to hear more about the possibility of having the script run in response to an email coming in, if anyone has any input on that at all. If it's run as a cron job, it would only check every X minutes and users would see a lag in the response (if I understand it correctly).
Since there will be different email addresses for different commands, like login@domain.com, and I know what to do and how to do it based on the sender's email, I don't even need the content, subject or any other headers from the email.
There is a lot of worry about the security of this application. I understand the issues, but without giving away my concept, I don't think it is a big issue for what I am doing. As for the usability issue, there really isn't any: it's just going to be a login to make changes to a user's profile if/when they need that, plus one other command. And that one is the main email address, which is very easy to remember, and the starting point of this whole concept.

I have used the PHP pop3 class with great success (there is also a PEAR POP3 module).
Using the pop3 class looks something like this:
require('pop3.php');

$pop3 = new pop3_class();
$pop3->hostname = MAILHOST;

// Connect and authenticate against the POP3 server.
$pop3->Open();
$pop3->Login('myemailaddress@mydomain.com', 'mypassword');

// Walk every message currently in the mailbox.
foreach ($pop3->ListMessages("", "") as $msgidx => $msgsize) {
    $headers = "";
    $body = "";
    // -1 retrieves the complete message body.
    $pop3->RetrieveMessage($msgidx, $headers, $body, -1);
}
I use it to monitor a POP3 mailbox which feeds into a database.
It gets called by a cron job which uses wget to fetch the URL of my PHP script.
*/5 * * * * wget -q -O - --http-user=me --http-passwd=pass 'http://mydomain.com/mail.php' > /dev/null 2>&1
Edit
I've been thinking about your need to have users send certain site commands by email.
Wouldn't it be easier to have a single address that multiple commands can be sent to rather than having multiple addresses?
I think the security concerns are pretty valid too. Unless the commands are non-destructive or aren't doing anything user-specific, the system will be wide open to anyone who knows how to spoof an email address (which would be everyone :) ).

You'll need some sort of cron job/timer service that checks the mailbox regularly and then acts on it. Alternatively, check whether your mail server can run a script when a mail arrives (i.e. see if it's possible to put a spam-filter script in and "abuse" that functionality to call your script instead).
With pure PHP, you're mostly out of luck, as something needs to trigger the script. On a page with a LOT of traffic, you could have your index.php or whatever do the check, but when no one visits your site for quite some time, the mail will not be processed, and you have to be careful of "race conditions" when multiple people are accessing the script at the same time.
Edit: Just keep one usability flaw in mind: people with multiple PCs and without an email client on every one. For example, I use 4 PCs, but only 1 (my main one) has a mail client installed, and I use webmail to check the others. Now, logging in and sending a mail through webmail is not the greatest usability: in order to use YOUR site, I first have to log in to ANOTHER site, compose a mail through the crappy interface most webmail tools have, and wait for the answer. I could as well use OpenID there :-)

If your server allows it, you can use a .forward file or Procmail to start a process (PHP or anything else) when a mail arrives at a certain address.
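As a minimal sketch of the .forward route (assuming a sendmail-compatible MTA and a made-up handler path), the user's ~/.forward file contains a single pipe entry; the MTA then runs the script once per incoming message, with the full message on stdin. The script needs a shebang line and the execute bit set:
"|/home/appuser/bin/handle-mail.php"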

You don't want to hassle users with OpenID, but you do want them to deal with this email scheme. Firstly, email can take a long time to go through; there isn't any guaranteed delivery time, and it's not even guaranteed that the email will get there at all. I know things are usually quick, but it's not uncommon for a round trip to take up to 10 minutes. Also, unless you're encrypting the email, the link you are sending back is sent in the open. That means anybody who intercepts it can use that link to log in. Depending on how secure you want to be, this may or may not be an issue, but it's definitely something to think about. Using a non-standard login method like this is going to be a lot more work than it is probably worth, and I can't really see any advantages to the whole process.

I was also thinking of using procmail to start some script. There is also formail, which might come in handy to change or extract headers. If you have admin access to the mail server, you could also use /etc/aliases and just pipe to your script.
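A minimal sketch of the /etc/aliases route (the alias name and script path are just examples; run newaliases after editing):
login: "|/usr/local/bin/handle-login.php"
Inside such a script, formail can pull single headers out of the message on stdin; for example, formail -xFrom: prints the contents of the From header.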
Besides usability issues, you should really think about security - it's actually quite simple to send email with a fake sender address, so I would not rely on it for anything critical.

I agree with all the security concerns. Your assumption that "the users and their email providers take care of their email account security" is not correct when it comes to the sender's e-mail address.
But since you specifically asked "how do I go from an email being sent to an account that no human reads, to some script being fired off", I recommend using procmail to deliver the incoming e-mail to a script you write.
I would not call a URL. I would have the script perform the work itself, reading the message from stdin. That way, the script is not accessible to anyone via the web site.
To set this up, the e-mail address you provide to your users will have to be associated with a real user on the system. In that user's home directory, create a file called ".procmailrc" containing these two lines:
# hand headers and body to the program, using a lockfile
:0 hb:
| /path/to/program
where /path/to/program is the full path to the script or program that handles the incoming message. Then create the script with code something like this:
#!/usr/bin/php
<?php
// The full message (headers and body) arrives on stdin.
$fp = fopen('php://stdin', 'r');
while ($line = fgets($fp)) {
    // do something with each $line of input here
}
fclose($fp);
?>
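Building on that skeleton, and since your scheme dispatches on the address the mail was sent to while identifying the user by the sender, the handler might look something like this sketch (the header parsing is standard, but the dispatch table and the send_login_link() helper are placeholders, not real functions):
#!/usr/bin/php
<?php
// Read the whole message, then split headers from body at the first blank line.
$message = str_replace("\r\n", "\n", file_get_contents('php://stdin'));
list($rawHeaders) = explode("\n\n", $message, 2);

// Pull the sender address and the local part of the command address.
// (In practice, Delivered-To or X-Original-To may be more reliable than To:.)
preg_match('/^From:.*?([\w.+-]+@[\w.-]+)/mi', $rawHeaders, $from);
preg_match('/^To:.*?([\w.+-]+)@/mi', $rawHeaders, $to);

$sender  = $from[1] ?? null;
$command = strtolower($to[1] ?? ''); // e.g. "login" from login@domain.com

switch ($command) {
    case 'login':
        // Placeholder helper: look up $sender, store a one-time hash,
        // and mail back the login link.
        // send_login_link($sender);
        break;
    default:
        // Unknown command address: log and ignore.
        break;
}
?>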
The e-mail message will not remain in the mailbox, so if you want to save or log it, have the script do it.
--
Bruce

I would seriously reconsider this approach. E-mail doesn't have very high reliability. There are all kinds of spam filters that might intercept e-mails with links, thereby leaving the "command" half-finished, not to mention the security risks.
It's very easy to spoof the sender address on an e-mail. You are basically opening up your system to anyone.
Also, instead of a username/password combination you're suddenly requiring the users to remember a list of commands to put in front of an email address. It would be better to provide them with a username/password and then give them access to a help page.
In other words, the usability and security of this scheme score very low.
I can't really find any advantages to this approach that even come close to outweighing the massive disadvantages.

One solution to prevent spam: make sure the first line, the last line, or some specific line contains a certain string, almost like a password, but a full sentence is better.
Only you have the word or words, so it's reasonably secure; just remember to delete the mails after use, including those that do not have the secret line.
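A minimal sketch of that check, assuming the message body has already been read into $body and using a made-up sentence as the shared secret:
<?php
// The shared sentence; only you and your users know it.
$secret = 'the quick brown fox prefers green tea';

$lines = array_map('trim', explode("\n", $body));

// Act on the mail only if the very first line is the secret sentence;
// everything else is treated as spam and deleted unprocessed.
if ($lines[0] === $secret) {
    // carry out the command here
}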

Apart from the security and usability issues, email delivery can be another problem. Depending on the user's email provider, delivery can be delayed by anything from a few minutes to a few hours.

There is a really nice educational story on thedailywtf.com about designing software. The posed question should be solved by a proper design, not by techno-hoopla.
Alexander, please read the linked story and think gloves, not email-driven web page browsing.
PHP is not a hammer.

Related

How can a hacker perform an XSS attack if he does not have access to the user's computer? [duplicate]

I am reading some articles about security in React applications. I use localStorage to store the user's info, and I've seen that an XSS attack could easily allow a hacker to steal it.
However, I understand that in React, an XSS attack can only be performed through a dangerouslySetInnerHTML prop that displays content written in an input. That way, an attacker can steal a user's info, session cookies, etc., and send them to his own website.
But a hacker could only do this if he has the chance to run his script on the user's computer, right? So, if I don't use any dangerouslySetInnerHTML prop, is localStorage safe in a React app? If not, how could a hacker run such an attack on the website?
If the user uses a public computer, it might be possible.
If you have some functionality which allows external users to post content on your site, for example comments or reactions, then someone might write a script which sends localStorage data to a hacker.
There are a lot of ways to exploit this; check OWASP for a more detailed explanation.
https://owasp.org/www-project-top-ten/
Developers must accept what attackers can do:
They can retheme an entire site.
They can also write "bot" scripts to automate tasks, in other words flood your server if that was the goal.
All limits defined in JS/HTML can and will be bypassed (e.g. character lengths in forms).
The entire page can be rewritten to no longer talk to your server correctly, in other words crashing it, and more, if not handled/detected.
The list goes on, but accept that pretty much everything is off the table if someone wants to pry hard enough.
There's not a whole lot you can do to prevent this. To explain: you can add an external script from randomxyxsite.com, and though trusted, that site could undergo an attack where the script now runs "loggers or some type of analytics-grabbing bot". In my opinion this is easily avoided by not adding external scripts if you can.
Though I said what I said originally, here's where you're stuck...
Any user can open the console, build extensions, or use a third-party loader like Tampermonkey and other alternatives and execute scripts at will. This too can become "shared" and comparable to botnet behavior.
So what can you do to stop clients from misbehaving or "super-modding" their content for malicious server use?
Some ways to safeguard:
Server-side requests should pass through some form of check/sanitization to ensure that whatever any client passes is absolutely safe to absorb (see the sketch after this list).
Never let users tell you who they are beyond login; define them by session ID, know them by their session, and when user<>user, get between them and follow the above point.
Keep as much as possible private. Public variables/classes/functions are easily rewritten at run-time, leaving features you intended to rely on to fall apart:
window.PayFeature = function(){};
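For the check/sanitization point above, a minimal server-side sketch in PHP (the field name and length limit are assumptions; the principle, validate on input and encode on output, is the point):
<?php
// Validate: reject anything that isn't the shape you expect.
$comment = $_POST['comment'] ?? '';
if (!is_string($comment) || strlen($comment) > 2000) {
    http_response_code(400); // refuse to absorb it
    exit;
}

// Encode on output so stored text can never execute as script.
echo htmlspecialchars($comment, ENT_QUOTES, 'UTF-8');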
On "allowing" XSS: if it is feared, a developer should study it more. As much as users can distort/change their end, it's only an issue if the traffic changes or the data received from them starts to look like an attack. So as a developer your best bet is to rate-limit, set rules, and so on, so that abuse is detected and stopped. As long as you do that, you shouldn't fear it: once the server is secured, it becomes a matter of spam (a potential botnet).

Will Gatling actually perform the operation or will it only check the URLs' response time?

I have a Gatling test for an application that answers a survey. Upon answering this survey, the application identifies possible answers that may pose a risk and creates what we call riskareas. These riskareas are normally created in the background as soon as the survey answering is finished. My question: I have a Gatling test with ten users who will answer the survey and log out, and I used the recorder to record the test. Now, after these ten users are finished, I do not see any riskareas being created in the application. Am I missing something? Should the survey really be answered by the Gatling user (like it is in Selenium), or is it just the URLs that the Gatling test will touch?
I am new to Gatling, please help.
Gatling should be indistinguishable from a user in a web browser (or Selenium) as far as the server is concerned, so the end result should be exactly the same as if you'd gone through the process yourself. However, writing a Gatling script is a little more work than writing a Selenium script.
For performance reasons, Gatling operates at a lower level than Selenium. Gatling works with the actual data that is sent and received from the server (i.e, the actual GETs and POSTs sent to the server), rather than with user-level interactions (such as clicking links and filling forms).
The recorder will generally produce a relatively "dumb" script. It records the exact data that was sent to the server and makes no attempt to account for things that may change from run to run. For example, the web application you are testing might have hidden form fields that contain session information, or the link addresses might contain a unique identifier or a session id.
This means that your script may not be doing what you think it's doing.
To debug the script, the first thing to do is to add checks on each of the requests, to validate that you are getting the response you expect (for example, check that when you submit page 1 of the survey, you are taken to page 2 - check for something that you'd only expect to find on page 2, like a specific question).
Once you know which requests are failing, look at what data was sent with the request, and try to figure out where it came from. You will probably find that there are session ids, view state, or similar, that must be extracted from the previous page.
It will help to enable request and response logging, as per the documentation.
To simplify testing of web apps, we wrote some helper functions to allow tests to be written in a more Selenium-like way. Once you understand what your application is doing, you may find that it simplifies scripting for you too. However, understanding why your current script doesn't work the way you expect should be your first step.

SQL Server Data has script tags in a Memo Field with Intranet IP Address

We have a ColdFusion page where admins can insert/update some real estate records after logging in. We are noticing that in one table's Memo field, called 'description', there are occasionally script tags with hacking/junk info. I have introduced a captcha. The page is password protected and not linked from any pages, so it is not accessible by search engines unless someone gave out the URL accidentally. We are now also tracking the IP address of the person who does the inserts/updates. But still, we just saw that all the data in the description field had the [junk text] appended to the end of the valid text, with an internal IP address of 192.168.0.101. This IP is someone's personal computer. One of our theories is that the person's computer is compromised, but what kind of virus would do that? Also, what I would like to do is have a field called 'approved' which is 'no' by default, so that any time an insert/update happens it goes back to 'no' and triggers an email to admins about the change. What could be the syntax for that inside SSMS? Thanks!
The most likely cause is SQL injection. It could be that your internal PC is compromised by malware or a virus that is attacking your site using one of many dozen attacks. The most common of them do exactly what you are describing: append content to the end of text or character fields in the DB. Here's a description of one common attack that does just that.
I would also check the following
Make sure handler scripts are "locked down" too, not just root URLs. Sometimes a script you include is accessible via URL and is used in hacking attempts.
Look for old code elsewhere in your site that might not be password protected. If you have a legacy code base, chances are there's some old code lying about that needs clean-up :)
Look in the web logs for URL params with values that begin with EXEC( - this is a common approach to injection.
Scan the PC in question rigorously. Install Charles or Wireshark and watch the HTTP traffic to see what's going on.
Finally, check all your code for vulnerability to SQLi. Make sure all your variables use cfqueryparam and that you have other controls in place. Passwords are not the only level of protection you need :)

Reading mail spool in C

I am designing a program that will run on a mailserver. It is intended to monitor email sent to a particular username and act on input received through the email messages.
My idea is to run this program from a cron job every X minutes, check for new email, act on the email if it's present, and delete the email.
Of course, I could easily open and read /var/spool/mail/username directly as a regular text file, then truncate the file once I've read through it. But what's the proper way to deal with the situation without stepping on sendmail? Another email might show up for that user while I'm still reading the file or while I'm truncating it.
Generally, what you're trying to do is better accomplished through server-side filtering as the mail arrives, rather than trying to search through a mailbox every so often. It's complex, and if you get it wrong, you end up losing mail.
Instead, look to server side filtering like procmail or similar to accomplish what you want.

How to parse emails and transfer to DB

I have seen some web apps that allow me to email stuff to a special ID and it magically turns up in my account. How exactly do they do this?
Without you giving an example of the specific service you're thinking of, it's hard to know exactly, but one way could be:
you give your email address to one of these sites, e.g. magic-mail.com
they insert this into their DB, and take the DB id value for this entry (12345)
they give you an address using this id (12345@magic-mail.com)
when mail is received by magic-mail.com, they look up the part before the @, pull out the associated email address for that ID, and relay the message on to the address you gave initially (toby@example.com)
There are many other ways of doing this, likely simpler than the above, but again, without examples it's hard to tell you exactly how the site you're thinking of is operating.
Edit
On reading the question, I assumed "my account" meant your mail account. If you meant an account you have on this company's system, then the process would be the same as the above, but changing the last step to:
when mail is received by magic-mail.com, they look up the part before the @, pull out the associated email address for that ID, and copy the contents of the message to the account associated with that id.
You can write a simple script in Python/PHP or any language you know. Make it receive the POP mail for the account 'myName@mySite.com' and put the content into the DB.
It is quite easy.
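A rough sketch of that in PHP, using the imap extension (which speaks POP3 too) and PDO for the insert; the hostname, credentials, and table are made up:
<?php
// Connect to the POP3 mailbox.
$mbox = imap_open('{mail.example.com:110/pop3/notls}INBOX',
                  'myName@mySite.com', 'password');

$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $db->prepare(
    'INSERT INTO inbound_mail (sender, subject, body) VALUES (?, ?, ?)');

$count = imap_num_msg($mbox);
for ($i = 1; $i <= $count; $i++) {
    $header = imap_headerinfo($mbox, $i);
    $body   = imap_fetchbody($mbox, $i, '1'); // first MIME part

    $stmt->execute([
        $header->fromaddress,
        $header->subject ?? '',
        $body,
    ]);

    imap_delete($mbox, $i); // mark for deletion once stored
}

imap_close($mbox, CL_EXPUNGE);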
You would need a way to monitor the mail account for new messages, read the message format, parse out the parts that are important to you and then perform the insert.
Monitoring the mail account would require having a script running on the server in a specified interval, otherwise the only other way would be to access a certain URL manually which would access the mail account and do all the necessary processing.
Depending on your hosting provider and the amount of freedom they give you, this may or may not be possible.
Short way:
You need to write a script that will connect to the mail server, fetch the mails, parse them, and then put them into the database.
Run this in a cron job and you're set.
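For example, a crontab entry along these lines (the script path is just a placeholder) would run the fetch every five minutes:
*/5 * * * * /usr/bin/php /path/to/fetch_mail.php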
