I have an App Engine app which imports a jar. In that jar I am using
GoogleClientSecrets.load()
to load a client_secrets.json file for authentication with BigQuery. Apparently, App Engine does not let me read a file from an arbitrary location on disk when I deploy the app on localhost. I assume that if I put the credentials in the WEB-INF folder it will work (I haven't tested it), but then it would be easy for anyone to access the file. Where is the best place to put credentials, and how would one access them from an App Engine app?
Thank you for your help!
The suggestions helped solve the problem of reading a file. What about writing to a file? I am using FileCredentialStore, which stores a credential file.
I believe this line is causing a problem:
FileCredentialStore variantStoreCredentialManager = new FileCredentialStore(expectedClientFile,jsonFactory);
and the error is
java.security.AccessControlException: access denied ("java.io.FilePermission" file path "write")
public Bigquery createAuthorizedClient() throws IOException {
    Credential authorization = new GoogleCredential();
    if ( clientID == null ) {
        authorization = createWebAuthenticatedClientCredential();
    } else {
        String expectedFileLocation = CREDENTIAL_FILE_PATH;
        File expectedClientFile = new File(expectedFileLocation);
        if ( ! expectedClientFile.exists() ) {
            // this is a known issue, the credential store will blow up if the file doesn't exist. So create it with an
            // empty json ( { } )
            createClientFile(expectedClientFile);
        }
        FileCredentialStore variantStoreCredentialManager = new FileCredentialStore(expectedClientFile, jsonFactory);
        GoogleCredential.Builder credentialBuilder = new GoogleCredential.Builder();
        credentialBuilder.setJsonFactory(jsonFactory);
        credentialBuilder.setClientSecrets(clientSecrets);
        credentialBuilder.setTransport(transport);
        authorization = credentialBuilder.build();
        boolean loadedSuccessfully = variantStoreCredentialManager.load(clientID, authorization);
        if ( ! loadedSuccessfully ) {
            authorization = createWebAuthenticatedClientCredential();
            variantStoreCredentialManager.store(clientID, authorization);
        }
    }
    return new Bigquery(transport, jsonFactory, authorization);
}
No, the contents of the /WEB-INF folder are private to the application code and are not accessible via HTTP (i.e. the servlet container does not honour requests that try to access data in the WEB-INF folder).
Use this snippet to read the contents of a file inside the /WEB-INF folder:
InputStream is = getServletContext().getResourceAsStream("/WEB-INF/"+filename);
Then read the stream using one of the usual methods for reading an InputStream.
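For example, a minimal sketch that feeds that stream straight into GoogleClientSecrets.load() (assuming a recent google-api-client where load() accepts a JsonFactory and a Reader, and that the file is packaged as /WEB-INF/client_secrets.json) could look like this:
// Sketch only: load client_secrets.json from inside the WAR, from servlet code.
// Assumes jsonFactory is an existing com.google.api.client.json.JsonFactory instance.
InputStream is = getServletContext().getResourceAsStream("/WEB-INF/client_secrets.json");
if (is == null) {
    throw new FileNotFoundException("client_secrets.json is not packaged under /WEB-INF");
}
GoogleClientSecrets clientSecrets =
        GoogleClientSecrets.load(jsonFactory, new InputStreamReader(is, "UTF-8"));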
How can I save the output file from "Run query and list results" in the .parquet file format?
This is my current workflow.
My Logic App is working, but the .parquet file it creates is not valid whenever I view it in Apache Parquet Viewer.
Can someone help me with this matter? Thank you!
Output:
I see that you are trying to append .parquet to the CSV file you are receiving, but that is not how it gets converted to a Parquet file.
One workaround you can try is to get the CSV file, add an Azure Function that converts it into a Parquet file, and then call that Azure Function from the Logic App.
Here is the function that worked for me:
BlobServiceClient blobServiceClient = new BlobServiceClient("<YOUR CONNECTION STRING>");
BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient("<YOUR CONTAINER NAME>");
BlobClient blobClient = containerClient.GetBlobClient("sample.csv");

// Download the blob
Stream file = File.OpenWrite(@"C:\Users\<USER NAME>\source\repos\ParquetConsoleApp\ParquetConsoleApp\bin\Debug\netcoreapp3.1\" + blobClient.Name);
await blobClient.DownloadToAsync(file);
Console.WriteLine("Download completed!");
file.Close();

// Read the downloaded blob
Stream file1 = new FileStream(blobClient.Name, FileMode.Open);
using (var reader = new StreamReader(file1))
{
    Console.WriteLine(reader.ReadToEnd());
}

// Convert to Parquet
ChoParquetRecordConfiguration csv = new ChoParquetRecordConfiguration();
using (var r = new ChoCSVReader(@"C:\Users\<USER NAME>\source\repos\ParquetConsoleApp\ParquetConsoleApp\bin\Debug\netcoreapp3.1\" + blobClient.Name))
{
    using (var w = new ChoParquetWriter(@"C:\Users\<USER NAME>\source\repos\ParquetConsoleApp\ParquetConsoleApp\bin\Debug\netcoreapp3.1\convertedParquet.parquet"))
    {
        w.Write(r);
        w.Close();
    }
}
After this step, you can publish your Azure Function and add the Azure Function connector to your Logic App.
You can skip the first two steps (i.e. reading and downloading the blob), get the blob content directly from the Logic App, send it to your Azure Function, and follow the same method as above (a sketch of such a function follows the output path below). The generated Parquet file will be at this path:
C:\Users\<USERNAME>\source\repos\ParquetConsoleApp\ParquetConsoleApp\bin\Debug\netcoreapp3.1\convertedParquet.parquet
Here, convertedParquet.parquet is the name of the Parquet file. Now you can read the converted Parquet file in an Apache Parquet reader.
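If you go with that variant, a rough sketch of such an HTTP-triggered function is shown below. It reuses the same ChoETL classes as the code above, but the function name, the temporary file paths, and returning the Parquet bytes in the HTTP response are illustrative assumptions rather than part of the original setup.
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using ChoETL;

public static class ConvertCsvToParquet
{
    // Sketch only: the Logic App posts the raw CSV content to this function,
    // which converts it with ChoETL and returns the Parquet file.
    [FunctionName("ConvertCsvToParquet")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // Read the CSV body sent by the Logic App
        string csvText = await new StreamReader(req.Body).ReadToEndAsync();

        // Work in the function's temp folder (illustrative paths)
        string csvPath = Path.Combine(Path.GetTempPath(), "input.csv");
        string parquetPath = Path.Combine(Path.GetTempPath(), "convertedParquet.parquet");
        File.WriteAllText(csvPath, csvText);

        // Convert CSV to Parquet, same as in the code above
        using (var r = new ChoCSVReader(csvPath))
        using (var w = new ChoParquetWriter(parquetPath))
        {
            w.Write(r);
        }

        // Return the Parquet file to the caller
        byte[] parquetBytes = File.ReadAllBytes(parquetPath);
        return new FileContentResult(parquetBytes, "application/octet-stream")
        {
            FileDownloadName = "convertedParquet.parquet"
        };
    }
}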
Here is the output
I want to access a folder other than the root folders in UWP. The user selects a file with a file picker, and this file is then used for a software update.
I am able to provide file selection as shown below, but when I read the file I get an error saying that I do not have permission to access it.
var picker = new FileOpenPicker();
picker.ViewMode = PickerViewMode.Thumbnail;
picker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
picker.FileTypeFilter.Add(".bin");
StorageFile file = await picker.PickSingleFileAsync();
StorageFolder storageFolder = await StorageFolder.GetFolderFromPathAsync(file.Path);
StorageFile storageFile = await storageFolder.GetFileAsync(file.Name);
var fileStream = System.IO.File.Open(storageFile.Path, FileMode.Open);
If I manually copy the file into the root folder, it works fine with the code below.
StorageFolder storageFolder = ApplicationData.Current.LocalFolder;
StorageFile storageFile = await storageFolder.GetFileAsync(file.Name);
var fileStream = System.IO.File.Open(storageFile.Path, FileMode.Open);
There are two possible solutions for me: the first is to copy the file from outside into the root folder, and the second is to eliminate the access-permission problem. Since it is a constantly changing file, I cannot include it in the project.
UWP has strong file-access restrictions. File.Open is not a UWP API; it can only access two locations, the application install directory and the application data locations, so your code works when you copy the file to the application data location. Therefore, I suggest you use the StorageFile.OpenAsync method instead of File.Open() to get the stream.
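For example, a minimal sketch of that approach (variable names are illustrative, and it assumes the rest of your picker code stays the same) could be:
// Sketch only: read the picked .bin file through the StorageFile handle instead of
// System.IO.File.Open, so access is brokered even outside the app's own folders.
// AsStreamForRead() comes from System.IO (System.Runtime.WindowsRuntime).
StorageFile file = await picker.PickSingleFileAsync();
if (file != null)
{
    using (IRandomAccessStream winRtStream = await file.OpenAsync(FileAccessMode.Read))
    using (Stream stream = winRtStream.AsStreamForRead())
    {
        // consume the firmware image from the stream here
        byte[] buffer = new byte[stream.Length];
        await stream.ReadAsync(buffer, 0, buffer.Length);
    }
}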
The front end lets people upload their photos. Initially I was sending the base64 to the server and working with it there, but a firewall blocks any request that contains base64. As an alternative, I am trying to upload the image to Azure Blob Storage, get the file name, and then send that to the server for processing, where I generate a SAS token for blob validation and processing.
This works perfectly fine when I work locally: the front end connects with @azure/storage-blob and uploadBrowserData() when I send the ArrayBuffer as the parameter.
export const uploadSelfieToBlob = async arrayBuffer => {
  try {
    const blobURL = `https://${accountName}.blob.core.windows.net${sasString}`;
    const blobServiceClient = new BlobServiceClient(blobURL, anonymousCredential);
    const containerClient = blobServiceClient.getContainerClient(containerName);
    let randomString = Math.random().toString(36).substring(7);
    const blobName = `${randomString}_${new Date().getTime()}.jpg`;
    const blockBlobClient = containerClient.getBlockBlobClient(blobName);
    const uploadBlobResponse = await blockBlobClient.uploadBrowserData(arrayBuffer);
    return { blobName, blobId: uploadBlobResponse.requestId };
  } catch (error) {
    console.log('error when uploading to blob', error);
    throw new Error('Error Uploading the selfie to blob');
  }
};
When I deploy, this is not working; the front end is deployed in the East US 2 location, while the local development location is different.
I thought the sasString generated for anonymous access had a timezone option, so I generated two different ones, one for local and one for the hosted server, with the same location selected.
Failed to send request to https://xxxx.blob.core.windows.net/contanainer-name/26pcie_1582087489288.jpg?sv=2019-02-02&ss=b&srt=c&sp=rwdlac&se=2023-09-11T07:57:29Z&st=2020-02-18T00:57:29Z&spr=https&sig=9IWhXo5i%2B951%2F8%2BTDqIY5MRXbumQasOnY4%2Bju%2BqF3gw%3D
What am I missing? Any lead would be helpful, thanks.
First, as mentioned in the comments, there was an issue with the CORS settings, which is why you were getting the initial error.
AuthorizationResourceTypeMismatch
This request is not authorized to perform this operation using this resource type.
RequestId:7ec96c83-101e-0001-4ef1-e63864000000
Time:2020-02-19T06:57:31.2867563Z
I looked up this error code here and then closely looked at your SAS URL.
One thing I noticed in your SAS URL is that you have set the signed resource type (srt) to c (container) while trying to upload a blob. If you look at the description of the kinds of operations you can perform with srt=c here, you will notice that blob-related operations are not supported.
In order to perform blob-related operations (like blob upload), you need to set the signed resource type value to o (for object).
Please regenerate your SAS token to include the signed resource type object (you can include container and/or service as well), and then your request should work. Essentially, the srt in your SAS URL should be something like srt=o, srt=co, or srt=sco.
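For reference, a minimal sketch of generating such an account SAS server-side with @azure/storage-blob (the account name, account key, and expiry window below are placeholders, not values from your setup) might look like:
// Sketch only: build an account SAS whose resource types include "object".
import {
  StorageSharedKeyCredential,
  generateAccountSASQueryParameters,
  AccountSASPermissions,
  AccountSASServices,
  AccountSASResourceTypes,
} from '@azure/storage-blob';

const credential = new StorageSharedKeyCredential(accountName, accountKey);
const sas = generateAccountSASQueryParameters(
  {
    services: AccountSASServices.parse('b').toString(),            // blob service
    resourceTypes: AccountSASResourceTypes.parse('co').toString(), // container + object
    permissions: AccountSASPermissions.parse('rwdlac'),
    startsOn: new Date(),
    expiresOn: new Date(Date.now() + 60 * 60 * 1000),              // e.g. one hour
  },
  credential
);
const sasString = `?${sas.toString()}`; // matches how sasString is appended to the account URL above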
I couldn't see anything wrong with the code you mentioned, but I have been using a different method to upload files to Azure Blob Storage with React; the method is exactly the same as in this blog article, which works perfectly for me.
https://medium.com/@stuarttottle/upload-to-azure-blob-storage-with-react-34f37805fdfc
I have a Flask app where a user can upload an image, and the image is saved to a static folder on the filesystem.
Currently, I'm using Google App Engine for hosting and found that it's not possible to save to the static folder in the standard environment. Here is the code:
def save_picture(form_picture, name):
    picture_fn = name + '.jpg'
    picture_path = os.path.join(app.instance_path, 'static/image/' + picture_fn)
    output_size = (1000, 1000)
    i = Image.open(form_picture)
    i.thumbnail(output_size)
    i.save(picture_path)
    return picture_path

@app.route('/image/add', methods=['GET', 'POST'])
def addimage():
    form = Form()
    if form.validate_on_submit():
        name = 'randomname'
        try:
            picture_file = save_picture(form.image.data, name)
            return redirect(url_for('addimage'))
        except:
            flash("unsuccess")
            return redirect(url_for('addimage'))
My question is: if I change from the standard to the flexible environment, would it be possible to save to a static folder? If not, what other hosting options should I consider? Do you have any suggestions?
Thanks in advance.
Following your advice, I'm changing to use Cloud Storage. I'm wondering which I should use: upload_from_file(), upload_from_filename(), or upload_from_string(). The source_file takes its data from form.photo.data from Flask-WTForms. I'm not successfully saving to Cloud Storage yet. This is my code:
def upload_blob(bucket_name, source_file, destination_blob_name):
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file)
    return destination_blob_name

@app.route('/image/add', methods=['GET', 'POST'])
def addimage():
    form = Form()
    if form.validate_on_submit():
        name = 'randomname'
        try:
            filename = 'foldername/' + name + '.jpg'
            picture_file = upload_blob('mybucketname', form.photo.data, filename)
            return redirect(url_for('addimage'))
        except:
            flash("unsuccess")
            return redirect(url_for('addimage'))
I have successfully been able to save the file to Google Cloud Storage by changing the save_picture function; here it is in case anyone has trouble with this in the future:
app.config['BUCKET'] = 'yourbucket'
app.config['UPLOAD_FOLDER'] = '/tmp'

def save_picture(form_picture, name):
    picture_fn = secure_filename(name + '.jpg')
    picture_path = os.path.join(app.config['UPLOAD_FOLDER'], picture_fn)
    output_size = (1000, 1000)
    i = Image.open(form_picture)
    i.thumbnail(output_size)
    i.save(picture_path)

    storage_client = storage.Client()
    bucket = storage_client.get_bucket(app.config['BUCKET'])
    blob = bucket.blob('static/image/' + picture_fn)
    blob.upload_from_filename(picture_path)

    return picture_path
The problem with storing it in some folder is that it would live on that one instance, and other instances would not be able to access it. Furthermore, instances in GAE come and go, so you would eventually lose the image.
You should use Google Cloud Storage for this:
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('bucket-id-here')
# use bucket.blob() rather than get_blob() so the object does not have to exist yet
blob = bucket.blob('remote/path/to/file.txt')
blob.upload_from_string('New contents!')
https://googleapis.dev/python/storage/latest/index.html
With Flask and App Engine (Python 3.7), I save files to a bucket in the following way, because I want to loop over many files:
for key, upload in request.files.items():
    file_storage = upload
    content_type = None
    identity = str(uuid.uuid4())  # or uuid.uuid4().hex
    try:
        upload_blob("f00b4r42.appspot.com", request.files[key], identity, content_type=upload.content_type)
    except Exception:
        # handle a failed upload as appropriate for your app
        raise
The helper function:
from google.cloud import storage

def upload_blob(bucket_name, source_file_name, destination_blob_name, content_type="application/octet-stream"):
    """Uploads a file to the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_file(source_file_name, content_type=content_type)
    blob.make_public()
    print('File {} uploaded to {}.'.format(
        source_file_name,
        destination_blob_name))
Changing from the Google App Engine standard environment to the Google App Engine flexible environment will allow you to write to disk, as well as to choose a Compute Engine machine type with more memory for your specific application [1]. If you are interested in following this path, find all the relevant documentation on migrating a Python app here.
Nonetheless, as user @Alex explained in his answer, instances are created (the number of instances is scaled up) or deleted (the number of instances is scaled down) according to your load, so the better option in your particular case would be to use Cloud Storage. Find an example of uploading objects to Cloud Storage with Python here.
I have been trying to parse a multipart request using Apache Commons FileUpload on JBoss 5.1.
The problem is that when the request is parsed, the FileItem list is not filled (the FileItem list is empty). Here is the code block, which works on Windows but not on Unix:
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold(1024 * 1024 * 3);
factory.setRepository(new File("/root/loads/temp"));

// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload(factory);
upload.setFileSizeMax(100000);
upload.setSizeMax(100000);
boolean isMulti = upload.isMultipartContent(request);

// Parse the request
try {
    List<FileItem> items = upload.parseRequest(request);
Note: I am reaching the HttpServletRequest via HttpEvent.getHTTPServletRequest(). Also, the request has not been handled before. Java version = 1.6_021.
I found the solution: JBoss security and our project's platform rules do not allow access to any file that is not in the specified directory.
I used the JBoss temp directory and can now access the items in the request.
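For anyone hitting the same restriction, a minimal sketch of that change (assuming the standard jboss.server.temp.dir system property, which JBoss 5.x sets at startup) might look like:
// Sketch only: point the DiskFileItemFactory repository at JBoss's own temp directory,
// which the server's security policy allows the application to write to.
File jbossTempDir = new File(System.getProperty("jboss.server.temp.dir"));

DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold(1024 * 1024 * 3);
factory.setRepository(jbossTempDir);

ServletFileUpload upload = new ServletFileUpload(factory);
List<FileItem> items = upload.parseRequest(request);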