I need to bulk upload data from a CSV file to Datastore. Each row (a person) also has a field that should hold the URL of an associated file, which I can upload to Google Cloud Storage. At runtime, how can I upload each file, get its URL back, and update the CSV, and then use the updated CSV for the bulk upload?
Thanks for any help.
There are two ways of doing this:
Write the logic in your request handler and perform the task there; raw data can be uploaded to GAE as a project resource, though there are obviously some size limits.
The better way is to enable the remote API, then use the remote API Python script to batch-upload the data, or write some Python code that points to your remote datasource, as in the sketch below.
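For the second approach, here is a minimal sketch of such a script, assuming the google-cloud-storage and google-cloud-datastore client libraries; the bucket name, entity kind, and CSV column names are made up for illustration:

    import csv

    from google.cloud import datastore, storage

    storage_client = storage.Client()
    ds_client = datastore.Client()
    bucket = storage_client.bucket("my-upload-bucket")  # hypothetical bucket

    with open("people.csv", newline="") as f:
        for row in csv.DictReader(f):  # hypothetical columns: name, local_path
            # Upload the person's file to Cloud Storage...
            blob = bucket.blob("files/" + row["name"])
            blob.upload_from_filename(row["local_path"])

            # ...then write the Datastore entity with the resulting URL,
            # so the CSV never has to be rewritten at all.
            entity = datastore.Entity(ds_client.key("Person"))
            entity.update({
                "name": row["name"],
                "file_url": "https://storage.googleapis.com/my-upload-bucket/" + blob.name,
            })
            ds_client.put(entity)

Writing each entity as soon as its file finishes uploading sidesteps the "update the CSV, then bulk upload" step entirely.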
I'm busy setting up a load test for file uploads in Moodle, and I am struggling with the file upload. It seems to lose the sesskey when it gets to uploading the file.
Here is the error message from the response data:
"{"error":"A required parameter (sesskey) was missing","errorcode":"missingparam","stacktrace":null,"debuginfo":null,"reproductionlink":null}".
Please help; this test needs to be done in the next 2 days.
Thank you in advance.
I extracted the sesskey using the Regular Expression Extractor, and this worked for downloading a file and for taking a quiz in Moodle, but for the file upload it loses the session.
Just record the file upload event using the HTTP(S) Test Script Recorder. The only thing you will need to do is copy the file(s) you will be uploading into the "bin" folder of your JMeter installation before starting the file upload in the browser; this way JMeter will be able to properly capture the request and generate the relevant HTTP Request sampler and HTTP Header Manager.
See the Recording File Uploads with JMeter article for more details.
Once you have recorded the "skeleton", you can correlate the dynamic values like sesskey, as in the example below.
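For instance, recent Moodle versions embed the sesskey in the page source inside the M.cfg JavaScript object (this is an assumption about your Moodle version), so a pattern like the following can pull it out. It is written in Python purely for illustration; the same regular expression goes straight into the Regular Expression Extractor:

    import re

    # Hypothetical fragment of a Moodle page; real pages carry the
    # sesskey inside the M.cfg JavaScript object.
    html = '... M.cfg = {"wwwroot":"https://moodle.example","sesskey":"AbCdEf1234"} ...'

    # In JMeter, use this pattern with template $1$ to take the first
    # capture group.
    match = re.search(r'"sesskey":"(\w+)"', html)
    if match:
        print(match.group(1))  # -> AbCdEf1234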
I have a Web activity that loads data into Salesforce using the Bulk API and gets a response back for the failed records. I want to save the failed response as a CSV file in Blob Storage. I have tried another Web activity and a Copy activity, but nothing seems to work. Any tips on how the response can be converted to a CSV file and stored in Blob Storage?
Below is the response from the Web activity.
This is the Web activity where I am trying to get the response in the body.
Please check out the link below if it helps.
After the last Web activity, put one more Web activity.
In the Web activity body, put @activity('Web1').output
or
@activity('Web1').output.data
https://learn.microsoft.com/en-us/answers/questions/129351/azure-data-factory-web-activity-save-output.html
URL: you will get this from the storage account's properties section
https://azadlsgen2eim.blob.core.windows.net/
Try one and see if it works; I am not sure about the extension.
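If the Web-activity route keeps failing, another option is a small script step (an Azure Function, for example) that flattens the JSON response into CSV and writes it to Blob Storage itself. Here's a rough sketch using the azure-storage-blob library; the connection string, container name, and the exact shape of the failed-records payload are all assumptions:

    import csv
    import io

    from azure.storage.blob import BlobClient

    # Hypothetical failed-records payload taken from the Web activity.
    response = {"data": [
        {"sf__Id": "", "sf__Error": "DUPLICATE_VALUE", "Name": "Acme"},
    ]}

    # Flatten the JSON records into CSV in memory.
    rows = response["data"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

    # Write the CSV straight to Blob Storage.
    blob = BlobClient.from_connection_string(
        conn_str="<storage-connection-string>",  # assumption
        container_name="failed-records",         # assumption
        blob_name="failed.csv",
    )
    blob.upload_blob(buf.getvalue(), overwrite=True)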
I have a zipped file containing images which I am sending as the response to a Python REST API call. I want to create a REST application that consumes the Python REST API in this manner: the response's content should be extracted without downloading (on the browser side) and all the images should be displayed to the user. Is this possible? If yes, could you please help me with the implementation? I am unable to find help anywhere.
I think what you are trying to do is have a backend server (Python) where zip files of images are hosted. You need to create an application (that could be in React) that:
Sends HTTP calls to the server to get those .zip files.
Unzips them: How to unzip file on javascript
Displays the images to the user: https://medium.com/better-programming/how-to-display-images-in-react-dfe22a66d5e7
I'm not sure what UTF-8 has to do with this, but this is possible. A quick Google search gave me the results above.
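For what it's worth, the backend side can be as simple as an endpoint that returns the zip bytes. Here is a minimal sketch assuming Flask (the question doesn't say which framework is in use) and a hypothetical images/ directory:

    import io
    import os
    import zipfile

    from flask import Flask, send_file

    app = Flask(__name__)

    @app.route("/images.zip")
    def images_zip():
        # Bundle everything under ./images (hypothetical directory) into
        # an in-memory zip; the browser fetches this response and unzips
        # it client-side with something like JSZip.
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:
            for name in os.listdir("images"):
                zf.write(os.path.join("images", name), arcname=name)
        buf.seek(0)
        return send_file(buf, mimetype="application/zip")

    if __name__ == "__main__":
        app.run()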
One gateway application is going to send a GET request to my Vert.x application. For this request, I need to read a large zip file from the Amazon S3 server, which I am able to read into a BufferedInputStream. I don't want to download this file; rather, I need to send the stream data to the gateway application (NOT a downloadable file with an application/zip content type, but stream data or byte chunks), which it will then forward to the end application, where the stream data will be downloaded as a zip file. So I need to send the zip file as a stream to the gateway application using Vert.x. I have already gone through a lot of documentation and blogs, but everywhere the instructions are for downloading the zip file, which is not my intention. Could anyone please suggest how I could achieve this streaming of a zip file in the HTTP response to the calling request using Vert.x? Do I need to use Java NIO? If yes, could you please give details? Sorry, but I have nothing to put here as part of the code. Thanks in advance!
I'm trying to upload to GCS using the Blobstore. I set the GCS bucket name while generating the upload URL, and the file gets uploaded successfully.
In the upload handler, blobInfo.getFilename() returns the right file name, but the file actually got saved in the GCS bucket under a different name. Each time, the file name is some random hash like this one:
L2FwcGhvc3RpbmdfcHJvZC9ibG9icy9BRW5CMlVvbi1XNFEyWEJkNGlKZHNZRlJvTC0wZGlXVS13WTF2c0g0LXdzcEVkaUNEbEEyc3daS3Vham1MVlZzNXlCSk05ZnpKc1RudDJpajF1TmxwdWhTd2VySVFLdUw3US56ZXFHTEZSLVoxT3lablBI
Is this how it is supposed to work? Or is this an anomaly?
I store the file name in the datastore based on the value returned from blobInfo.getFilename(), which is the correct file name. But I'm unable to access the file using GcsFilename, since the file is stored in GCS with that random hash as its name.
Any pointers would be greatly appreciated.
Thanks!
PS: The Blobstore page says that BlobInfo is currently not available for GCS objects, but BlobInfo.getFilename() returns the right value for me. Am I doing something wrong?
That's how it works; see https://cloud.google.com/appengine/docs/python/blobstore/fileinfoclas ...:
FileInfo metadata is not persisted to datastore [...] You must save the gs_object_name yourself in your upload handler or this data will be lost
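In practice that means something like the sketch below in the upload handler. It uses the first-generation Python runtime APIs that the linked docs describe (the model and route names are made up); your code appears to be Java, where the BlobstoreService offers the equivalent getFileInfos() method:

    from google.appengine.ext import ndb
    from google.appengine.ext.webapp import blobstore_handlers

    class UploadedFile(ndb.Model):  # hypothetical model
        filename = ndb.StringProperty()
        gs_object_name = ndb.StringProperty()

    class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
        def post(self):
            # get_file_infos() exposes the FileInfo records for this
            # upload; gs_object_name holds the real /gs/bucket/... path
            # of the object, and must be persisted here or it is lost.
            info = self.get_file_infos()[0]
            UploadedFile(filename=info.filename,
                         gs_object_name=info.gs_object_name).put()
            self.redirect("/")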
I personally recommend that new applications use https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/ directly, rather than the blobstore emulation on top of it.
The latter is currently provided essentially only for (limited, partial) backwards compatibility: it's not really all that suitable for new applications.
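With the GCS client library you choose the object name yourself, so the random-hash mismatch disappears entirely. A minimal sketch (the bucket and object names are made up):

    import cloudstorage as gcs

    # The object name is whatever you pass in, so fetching it later by
    # the name you stored in the datastore just works.
    filename = "/my-bucket/uploads/people.csv"  # hypothetical path
    with gcs.open(filename, "w", content_type="text/csv") as f:
        f.write("name,score\nalice,10\n")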