I'm writing a small C program which connects to the Google API via OAuth2.
To do that I need to send a client secret to Google.
I currently store this secret in my code, which I want to push to GitHub, so how can I avoid showing my client secret to everybody who looks at my code?
Use a configuration file where you'll store the API key. You have many options here: the simplest is writing the key directly into the file; more sophisticated ones use some kind of serialization format (JSON, XML, INI file, etc.). The right option is up to you (usually, you'll want a structured format if you store several options in the file).
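A minimal sketch of the config-file approach, shown in Python for brevity (the file and key names are hypothetical; the same pattern applies in C with any JSON or INI parser), and remember to add the file to .gitignore:

import json

# config.json contains e.g. {"client_secret": "..."} and is never committed.
with open("config.json") as f:
    config = json.load(f)

client_secret = config["client_secret"]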
You can also pass the key as a program argument, if you don't mind it being visible in the process list of your host.
And be sure not to push your already existing git history to GitHub; create a new repository instead, or all your previous commits (with the key) will be public ;)
Storing secrets (and, ideally, any such string literals) in code is wrong: store them in a resource (text) file and don't push it to Git.
If you are looking for where to find the Client Secret for your Google Drive app, follow these steps.
Go to your project.
Click Credentials.
There you will find all the details about your project, like the client ID, redirect URI, etc. Click the "Download JSON" button; the downloaded file contains your CLIENT SECRET.
Sorry if this is a bit of a trivial question, but I want to be sure and couldn't find a definitive answer online.
I am writing a small app that uses Mapbox, and I am using react-map-gl for it. They require the access token on the client side, so they suggest using an environment variable. My question is would it be okay to simply create a .env file in the front-end folder and put the variable there?
Thanks!
You can't get away from revealing API keys on the front end. If someone wants to dig around in your source code, they will find them.
However, you should always configure any API key that is visible on the Internet to be restricted to specific referrers, i.e. the domain of your website.
Usually this is done during creation of an API key through your provider's dashboard.
For Mapbox, you can read the documentation on restricting API tokens here. It states:
You can make your access tokens for web maps more secure by adding URL restrictions. When you add a URL restriction to a token, that token will only work for requests that originate from the URLs you specify. Tokens without restrictions will work for requests originating from any URL.
(emphasis my own)
They require the access token on the client side, so they suggest using an environment variable. My question is would it be okay to simply create a .env file in the front-end folder and put the variable there?
There are two reasons one uses environment variables in front-end development:
As a convenience, to keep environment-specific configuration removed from source code.
To keep sensitive information out of source code. You shouldn't commit API tokens or other similarly sensitive details to your version control.
Using environment variables in front-end code will not keep their values secret from the end user: whatever the value of an environment variable is at build time will be visible in the compiled output.
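For example, if you're using Create React App (which only exposes variables prefixed with REACT_APP_), the .env file might look like this, with a hypothetical token value:

# .env -- keep this file out of version control; note the token will still
# appear verbatim in the built JS bundle.
REACT_APP_MAPBOX_TOKEN=pk.your-token-here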
I'm trying to scheme how I'm going to accomplish this and so far I have the following:
I grab a file on the front end and, on submit, send the file name and type to the back end, where a presigned URL is generated. I send that URL to the FE, and the front end then uploads the file.
The issue here is that when I generate the presigned URL, I want to record the UUID filename going to S3 in my database via the back end. I don't know whether the front end will successfully complete the upload. I can think of some janky ways to garbage-collect this, but I'm wondering: is there a typically prescribed way to do this that doesn't introduce the possibility of failures the BE isn't aware of?
Yes, there's an alternative way. You can configure your bucket so that it sends an event whenever an object is created or updated. You can send this event either to an SNS topic or to AWS Lambda.
From there you can make a request to your Phoenix app webhook, that can insert it into the database.
The advantage is that the event will come only when the file has been created.
For more info, you can read the following: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
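Here's a minimal sketch of the Lambda side in Python (the webhook URL and payload shape are hypothetical; your Phoenix endpoint would do the actual database insert):

import json
import urllib.request

WEBHOOK_URL = "https://example.com/api/s3-webhook"  # hypothetical endpoint

def handler(event, context):
    # S3 "object created" events arrive as a list of records.
    for record in event.get("Records", []):
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],  # your UUID filename
        }
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # notify the app so it can insert the row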
The way I'm currently handling this is as such:
Compress the image client side.
Send the image to the backend application server.
Create a UUID on the backend.
Send the image from the backend to S3, using the UUID as the key.
On success, put the UUID into the database.
Respond to the client with the UUID so it can display the image.
By following these steps, you don't introduce error into your database.
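As a rough sketch, steps 3 through 6 might look like this in Python with boto3 and SQLite (the bucket name, table schema, and framework glue are hypothetical placeholders):

import sqlite3
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-image-bucket"  # hypothetical

def store_image(image_bytes, content_type):
    key = str(uuid.uuid4())  # create the UUID on the backend
    # Upload to S3 first; if this raises, nothing is written to the database.
    s3.put_object(Bucket=BUCKET, Key=key, Body=image_bytes,
                  ContentType=content_type)
    # Only record the key after a successful upload, so the DB never
    # references an object that doesn't exist.
    with sqlite3.connect("app.db") as db:
        db.execute("INSERT INTO images (uuid) VALUES (?)", (key,))
    return key  # respond to the client with the UUID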
My GAE app publishes some APIs in GCP and uses the following structure:
# Replace the following lines with client IDs obtained from the APIs
# Console or Cloud Console.
WEB_CLIENT_ID = '????????????.apps.googleusercontent.com'
ALLOWED_CLIENT_IDS = [WEB_CLIENT_ID, endpoints.API_EXPLORER_CLIENT_ID]
SCOPES = [endpoints.EMAIL_SCOPE]

@endpoints.api(name=API_NAME,
               version=API_VERSION,
               description='An API to manage languages',
               allowed_client_ids=ALLOWED_CLIENT_IDS,
               scopes=SCOPES)
My concern is that if someone obtains this source code from my machine or from my GitHub project, he or she can access the APIs using the discovered web client ID.
What’s the best practice in this case?
I acknowledge that the client can expose the ID and someone could get access to it. But I believe that is another matter.
There are many ways you can do this. One way is to always check in a default value for the client ID, so that when people check out your code, they have to modify it to deploy it. You can also move the client ID to its own module and not check it in at all, and make the expectation that they create their own module with their own client ID. This avoids having a modified state for a checked in file all of the time.
The client ID itself is not sufficient information to generate a valid token. The cryptography involved will prevent such a person from accessing your API.
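A minimal sketch of the second approach (the module name client_id.py is a hypothetical choice; you would list it in .gitignore):

import endpoints

# client_id.py is NOT checked in; each developer/deployment supplies its own.
try:
    from client_id import WEB_CLIENT_ID
except ImportError:
    # Obvious placeholder so a fresh checkout still imports; replace it
    # before deploying.
    WEB_CLIENT_ID = 'REPLACE-ME.apps.googleusercontent.com'

ALLOWED_CLIENT_IDS = [WEB_CLIENT_ID, endpoints.API_EXPLORER_CLIENT_ID]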
This seems like something that should be easy to find, but I've tried every combination of search terms I could think of and all I could find were answers that were "close but no cigar". After spending over a half an hour looking, I finally decided to ask.
What I am trying to do, explicitly worded, is to ensure that the files my users upload to or download from my web pages are encrypted during the transfer. I am not satisfied with just throwing https:// onto the beginnings of the files' links, because these files need to be password protected. In order to password protect them, of course, I have set the directory permissions so that the files inside cannot be accessed via URLs at all. I am using a PHP script to manage the uploads and downloads.
I have tried checking the php.net pages on topics like header() and mcrypt_encrypt() and have come up empty-handed. The page on header() appears to apply to HTTP only and doesn't tell me how to use an encrypted protocol for a file download (if that's even the way one does it). I also can't use mcrypt_encrypt() on the assumption that mcrypt_decrypt() can simply be run later to make the files usable, because obviously mcrypt_decrypt() can't be run client-side after a download (nor can mcrypt_encrypt() be run client-side before an upload). So I am left wondering what method I would use to ensure that the users' browsers will be able to encrypt and decrypt these files in a way that requires no action from the user, the same way everything else is encrypted and decrypted.
I'd like to assume that the fact that I am enforcing https on these web page URLs will automatically take care of it the way it takes care of the web page output. However, I do observe that files with separate file paths like images and CSS are not automatically encrypted, and that the code I'm using to trigger those file download boxes contains header information, implying that it's a separate transaction, and perhaps not encrypted.
I have really, really thought about this from a whole bunch of angles and I'm just not seeing the solution. Anyone want to help me?
Use HTTPS for secure (encrypted) delivery of data. Store the files in each user's folder as you're doing, and only allow access after authentication (over HTTPS).
The reason you're having a hard time finding another solution is because HTTPS is the solution.
If you want to store the files encrypted on disk, you can encrypt them with a symmetric cipher as they're uploaded and decrypt them as they're downloaded. You could use a secret key that's unique per user as the symmetric key.
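As a sketch of that idea (shown in Python with the cryptography package; in PHP you'd reach for openssl_encrypt()/openssl_decrypt() instead, and per-user key storage is up to you):

from cryptography.fernet import Fernet

def encrypt_upload(plaintext: bytes, user_key: bytes) -> bytes:
    return Fernet(user_key).encrypt(plaintext)   # store this on disk

def decrypt_download(ciphertext: bytes, user_key: bytes) -> bytes:
    return Fernet(user_key).decrypt(ciphertext)  # serve this over HTTPS

# The per-user key is generated once and kept server-side:
# user_key = Fernet.generate_key()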
I'm working on a side project right now for an email client. I'm using a library to handle the retrieval of the messages from the server. However, I have a question on caching.
I don't want to fetch the entire list of headers every time I load the client. Ideally, what I'd like to do is cache them and then update the list with what is on the server.
What's the best way to go about this? Should I store all the header information (including the server's message ID #) in a database, load the headers from that DB, and then sync up with the server as a background task?
Or is there a better way?
Look at the webmail sample of this open source project, which uses local caching:
http://mailsystem.codeplex.com/
If I remember correctly, it uses a combination of local RFC822 plain-text email storage, with the message ID as the filename, and an index file with the high-level data.
The messages themselves may be zipped to save disk space.
That's just a sample for the library, so don't expect code art there, but it's a start.
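As a rough sketch of the caching idea (in Python with imaplib and SQLite; the server, credentials, and schema are placeholders):

import imaplib
import sqlite3

db = sqlite3.connect("headers.db")
db.execute("CREATE TABLE IF NOT EXISTS headers (uid INTEGER PRIMARY KEY, raw BLOB)")

imap = imaplib.IMAP4_SSL("imap.example.com")
imap.login("user", "password")
imap.select("INBOX")

# Only ask the server for messages newer than the highest UID we've cached.
last_uid = db.execute("SELECT COALESCE(MAX(uid), 0) FROM headers").fetchone()[0]
_, data = imap.uid("SEARCH", None, f"UID {last_uid + 1}:*")
for uid in data[0].split():
    uid = uid.decode()
    _, msg = imap.uid("FETCH", uid, "(BODY.PEEK[HEADER])")
    db.execute("INSERT OR IGNORE INTO headers VALUES (?, ?)", (int(uid), msg[0][1]))
db.commit()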